Oct 08 14:22:53 crc systemd[1]: Starting Kubernetes Kubelet...
Oct 08 14:22:53 crc restorecon[4568]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 14:22:53 crc restorecon[4568]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 14:22:53 crc restorecon[4568]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 14:22:53 crc restorecon[4568]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 14:22:53 crc restorecon[4568]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:53 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 
14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 14:22:54 crc 
restorecon[4568]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 
14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 14:22:54 crc restorecon[4568]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 
14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc 
restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 14:22:54 crc restorecon[4568]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 14:22:54 crc restorecon[4568]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 14:22:54 crc restorecon[4568]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Oct 08 14:22:55 crc kubenswrapper[4624]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 08 14:22:55 crc kubenswrapper[4624]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Oct 08 14:22:55 crc kubenswrapper[4624]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 08 14:22:55 crc kubenswrapper[4624]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
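The restorecon pass ends at this point. Almost every path under /var/lib/kubelet is reported as "not reset as customized by admin" because container_file_t is a customizable type in the shipped SELinux policy, so restorecon leaves those labels in place and only reports them; genuine mismatches, such as /var/usrlocal/bin/kubenswrapper above, are actually relabeled. A minimal Python sketch for auditing such a capture, assuming the journal has been saved one-entry-per-line to a hypothetical file kubelet-start.log, which tallies the skipped paths by pod UID and MCS category pair:

    import re
    from collections import Counter

    # Matches entries like:
    #   restorecon[4568]: /var/lib/kubelet/pods/<uid>/... not reset as
    #   customized by admin to system_u:object_r:container_file_t:s0:c7,c13
    NOT_RESET = re.compile(
        r"restorecon\[\d+\]: (?P<path>/\S+) not reset as customized by admin "
        r"to (?P<context>\S+)"
    )
    POD_UID = re.compile(r"/var/lib/kubelet/pods/(?P<uid>[^/]+)/")

    counts = Counter()
    with open("kubelet-start.log") as f:   # hypothetical saved capture
        for line in f:
            m = NOT_RESET.search(line)
            if not m:
                continue
            pod = POD_UID.search(m["path"])
            uid = pod["uid"] if pod else "(non-pod path)"
            # The context ends in the MCS level, e.g. s0:c7,c13;
            # split off the user:role:type prefix and keep that level.
            counts[(uid, m["context"].split(":", 3)[-1])] += 1

    for (uid, mcs), n in sorted(counts.items()):
        print(f"{uid:40} {mcs:12} {n:5} paths")

Run against this boot it would show each pod's files sharing a single category pair (for example s0:c7,c13 for the catalog pods above, s0:c682,c947 for the oauth-openshift pod), which is the per-pod MCS isolation the log is recording.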
Oct 08 14:22:55 crc kubenswrapper[4624]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 08 14:22:55 crc kubenswrapper[4624]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.256861 4624 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.261965 4624 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.261984 4624 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.261989 4624 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.261993 4624 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.261996 4624 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262001 4624 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262005 4624 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262009 4624 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262014 4624 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
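The first kubenswrapper lines warn that several command-line flags (--container-runtime-endpoint, --minimum-container-ttl-duration, --volume-plugin-dir, --register-with-taints, --system-reserved) are deprecated in favor of the file passed via --config; --pod-infra-container-image is the exception, since per its own warning the sandbox image will come from the CRI rather than from configuration. A hedged sketch of the equivalent KubeletConfiguration stanza, generated with Python purely for illustration; the field names follow the kubelet config v1beta1 schema, but the values are placeholders, not the ones this node actually uses:

    import json

    # Illustrative only: placeholder values, not this node's real settings.
    kubelet_config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        # replaces --container-runtime-endpoint
        "containerRuntimeEndpoint": "unix:///var/run/crio/crio.sock",
        # replaces --volume-plugin-dir
        "volumePluginDir": "/etc/kubernetes/kubelet-plugins/volume/exec",
        # replaces --register-with-taints
        "registerWithTaints": [
            {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
        ],
        # replaces --system-reserved
        "systemReserved": {"cpu": "500m", "memory": "1Gi"},
        # --minimum-container-ttl-duration is superseded by eviction settings
        "evictionHard": {"memory.available": "100Mi"},
    }

    # The kubelet parses its --config file as YAML, and JSON is valid YAML,
    # so this output can be written straight to the config path.
    print(json.dumps(kubelet_config, indent=2))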
Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262021 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262025 4624 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262029 4624 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262033 4624 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262036 4624 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262040 4624 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262044 4624 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262047 4624 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262051 4624 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262055 4624 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262058 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262062 4624 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262065 4624 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262068 4624 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262073 4624 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262078 4624 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262082 4624 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262086 4624 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262090 4624 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262093 4624 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262097 4624 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262100 4624 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262113 4624 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262117 4624 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262121 4624 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262125 4624 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262129 4624 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262133 4624 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262136 4624 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262140 4624 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262143 4624 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262146 4624 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262150 4624 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262155 4624 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262160 4624 feature_gate.go:330] unrecognized feature gate: Example Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262163 4624 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262167 4624 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262170 4624 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262174 4624 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262177 4624 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262181 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262185 4624 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262188 4624 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262192 4624 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262195 4624 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262198 4624 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262202 4624 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262205 4624 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262209 4624 feature_gate.go:330] 
unrecognized feature gate: InsightsOnDemandDataGather Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262212 4624 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262215 4624 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262219 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262222 4624 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262225 4624 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262229 4624 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262232 4624 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262235 4624 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262239 4624 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262244 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262247 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262251 4624 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.262254 4624 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262330 4624 flags.go:64] FLAG: --address="0.0.0.0" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262339 4624 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262347 4624 flags.go:64] FLAG: --anonymous-auth="true" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262352 4624 flags.go:64] FLAG: --application-metrics-count-limit="100" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262357 4624 flags.go:64] FLAG: --authentication-token-webhook="false" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262361 4624 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262367 4624 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262372 4624 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262376 4624 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262380 4624 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262385 4624 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262389 4624 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262394 4624 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262398 4624 flags.go:64] 
FLAG: --cgroup-root="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262402 4624 flags.go:64] FLAG: --cgroups-per-qos="true" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262406 4624 flags.go:64] FLAG: --client-ca-file="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262410 4624 flags.go:64] FLAG: --cloud-config="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262414 4624 flags.go:64] FLAG: --cloud-provider="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262418 4624 flags.go:64] FLAG: --cluster-dns="[]" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262423 4624 flags.go:64] FLAG: --cluster-domain="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262427 4624 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262431 4624 flags.go:64] FLAG: --config-dir="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262435 4624 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262440 4624 flags.go:64] FLAG: --container-log-max-files="5" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262449 4624 flags.go:64] FLAG: --container-log-max-size="10Mi" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262453 4624 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262457 4624 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262461 4624 flags.go:64] FLAG: --containerd-namespace="k8s.io" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262466 4624 flags.go:64] FLAG: --contention-profiling="false" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262470 4624 flags.go:64] FLAG: --cpu-cfs-quota="true" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262474 4624 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262479 4624 flags.go:64] FLAG: --cpu-manager-policy="none" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262483 4624 flags.go:64] FLAG: --cpu-manager-policy-options="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262488 4624 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262492 4624 flags.go:64] FLAG: --enable-controller-attach-detach="true" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262496 4624 flags.go:64] FLAG: --enable-debugging-handlers="true" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262500 4624 flags.go:64] FLAG: --enable-load-reader="false" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262504 4624 flags.go:64] FLAG: --enable-server="true" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262508 4624 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262514 4624 flags.go:64] FLAG: --event-burst="100" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262518 4624 flags.go:64] FLAG: --event-qps="50" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262522 4624 flags.go:64] FLAG: --event-storage-age-limit="default=0" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262526 4624 flags.go:64] FLAG: --event-storage-event-limit="default=0" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262531 4624 flags.go:64] FLAG: --eviction-hard="" Oct 08 
14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262536 4624 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262540 4624 flags.go:64] FLAG: --eviction-minimum-reclaim="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262544 4624 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262548 4624 flags.go:64] FLAG: --eviction-soft="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262552 4624 flags.go:64] FLAG: --eviction-soft-grace-period="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262556 4624 flags.go:64] FLAG: --exit-on-lock-contention="false" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262560 4624 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262564 4624 flags.go:64] FLAG: --experimental-mounter-path="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262568 4624 flags.go:64] FLAG: --fail-cgroupv1="false" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262572 4624 flags.go:64] FLAG: --fail-swap-on="true" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262576 4624 flags.go:64] FLAG: --feature-gates="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262587 4624 flags.go:64] FLAG: --file-check-frequency="20s" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262591 4624 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262596 4624 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262600 4624 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262604 4624 flags.go:64] FLAG: --healthz-port="10248" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262609 4624 flags.go:64] FLAG: --help="false" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262613 4624 flags.go:64] FLAG: --hostname-override="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262617 4624 flags.go:64] FLAG: --housekeeping-interval="10s" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262621 4624 flags.go:64] FLAG: --http-check-frequency="20s" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262625 4624 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262643 4624 flags.go:64] FLAG: --image-credential-provider-config="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262649 4624 flags.go:64] FLAG: --image-gc-high-threshold="85" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262653 4624 flags.go:64] FLAG: --image-gc-low-threshold="80" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262657 4624 flags.go:64] FLAG: --image-service-endpoint="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262661 4624 flags.go:64] FLAG: --kernel-memcg-notification="false" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262665 4624 flags.go:64] FLAG: --kube-api-burst="100" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262669 4624 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262674 4624 flags.go:64] FLAG: --kube-api-qps="50" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262677 4624 flags.go:64] FLAG: --kube-reserved="" Oct 08 14:22:55 crc 
kubenswrapper[4624]: I1008 14:22:55.262681 4624 flags.go:64] FLAG: --kube-reserved-cgroup="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262685 4624 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262690 4624 flags.go:64] FLAG: --kubelet-cgroups="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262693 4624 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262697 4624 flags.go:64] FLAG: --lock-file="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262703 4624 flags.go:64] FLAG: --log-cadvisor-usage="false" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262707 4624 flags.go:64] FLAG: --log-flush-frequency="5s" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262711 4624 flags.go:64] FLAG: --log-json-info-buffer-size="0" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262717 4624 flags.go:64] FLAG: --log-json-split-stream="false" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262721 4624 flags.go:64] FLAG: --log-text-info-buffer-size="0" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262726 4624 flags.go:64] FLAG: --log-text-split-stream="false" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262730 4624 flags.go:64] FLAG: --logging-format="text" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262734 4624 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262738 4624 flags.go:64] FLAG: --make-iptables-util-chains="true" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262742 4624 flags.go:64] FLAG: --manifest-url="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262746 4624 flags.go:64] FLAG: --manifest-url-header="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262752 4624 flags.go:64] FLAG: --max-housekeeping-interval="15s" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262756 4624 flags.go:64] FLAG: --max-open-files="1000000" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262761 4624 flags.go:64] FLAG: --max-pods="110" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262765 4624 flags.go:64] FLAG: --maximum-dead-containers="-1" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262769 4624 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262773 4624 flags.go:64] FLAG: --memory-manager-policy="None" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262777 4624 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262782 4624 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262786 4624 flags.go:64] FLAG: --node-ip="192.168.126.11" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262791 4624 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262800 4624 flags.go:64] FLAG: --node-status-max-images="50" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262804 4624 flags.go:64] FLAG: --node-status-update-frequency="10s" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262808 4624 flags.go:64] FLAG: --oom-score-adj="-999" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262813 
4624 flags.go:64] FLAG: --pod-cidr="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262817 4624 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262823 4624 flags.go:64] FLAG: --pod-manifest-path="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262827 4624 flags.go:64] FLAG: --pod-max-pids="-1" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262831 4624 flags.go:64] FLAG: --pods-per-core="0" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262835 4624 flags.go:64] FLAG: --port="10250" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262839 4624 flags.go:64] FLAG: --protect-kernel-defaults="false" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262843 4624 flags.go:64] FLAG: --provider-id="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262847 4624 flags.go:64] FLAG: --qos-reserved="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262851 4624 flags.go:64] FLAG: --read-only-port="10255" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262855 4624 flags.go:64] FLAG: --register-node="true" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262859 4624 flags.go:64] FLAG: --register-schedulable="true" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262864 4624 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262871 4624 flags.go:64] FLAG: --registry-burst="10" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262875 4624 flags.go:64] FLAG: --registry-qps="5" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262878 4624 flags.go:64] FLAG: --reserved-cpus="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262883 4624 flags.go:64] FLAG: --reserved-memory="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262888 4624 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262892 4624 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262896 4624 flags.go:64] FLAG: --rotate-certificates="false" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262900 4624 flags.go:64] FLAG: --rotate-server-certificates="false" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262904 4624 flags.go:64] FLAG: --runonce="false" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262908 4624 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262913 4624 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262917 4624 flags.go:64] FLAG: --seccomp-default="false" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262921 4624 flags.go:64] FLAG: --serialize-image-pulls="true" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262925 4624 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262929 4624 flags.go:64] FLAG: --storage-driver-db="cadvisor" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262933 4624 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262970 4624 flags.go:64] FLAG: --storage-driver-password="root" Oct 08 14:22:55 crc kubenswrapper[4624]: 
I1008 14:22:55.262977 4624 flags.go:64] FLAG: --storage-driver-secure="false" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262982 4624 flags.go:64] FLAG: --storage-driver-table="stats" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262987 4624 flags.go:64] FLAG: --storage-driver-user="root" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262991 4624 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.262997 4624 flags.go:64] FLAG: --sync-frequency="1m0s" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.263002 4624 flags.go:64] FLAG: --system-cgroups="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.263006 4624 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.263015 4624 flags.go:64] FLAG: --system-reserved-cgroup="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.263020 4624 flags.go:64] FLAG: --tls-cert-file="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.263024 4624 flags.go:64] FLAG: --tls-cipher-suites="[]" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.263030 4624 flags.go:64] FLAG: --tls-min-version="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.263034 4624 flags.go:64] FLAG: --tls-private-key-file="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.263038 4624 flags.go:64] FLAG: --topology-manager-policy="none" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.263042 4624 flags.go:64] FLAG: --topology-manager-policy-options="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.263046 4624 flags.go:64] FLAG: --topology-manager-scope="container" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.263050 4624 flags.go:64] FLAG: --v="2" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.263061 4624 flags.go:64] FLAG: --version="false" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.263066 4624 flags.go:64] FLAG: --vmodule="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.263072 4624 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.263076 4624 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263169 4624 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263175 4624 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263182 4624 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263189 4624 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263193 4624 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263197 4624 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263201 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263204 4624 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263208 4624 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263212 4624 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263216 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263220 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263223 4624 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263226 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263230 4624 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263234 4624 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263238 4624 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263242 4624 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263247 4624 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263251 4624 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263256 4624 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263261 4624 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263264 4624 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263268 4624 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263272 4624 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263275 4624 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263279 4624 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263282 4624 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263286 4624 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263289 4624 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263293 4624 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263298 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263302 4624 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263306 4624 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263312 4624 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263318 4624 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263322 4624 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263326 4624 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263329 4624 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263332 4624 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263336 4624 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263340 4624 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263343 4624 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263347 4624 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263350 4624 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263354 4624 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263358 4624 feature_gate.go:330] unrecognized feature gate: 
GatewayAPI Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263361 4624 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263364 4624 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263368 4624 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263371 4624 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263374 4624 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263378 4624 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263382 4624 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263385 4624 feature_gate.go:330] unrecognized feature gate: Example Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263388 4624 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263392 4624 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263396 4624 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263399 4624 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263402 4624 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263406 4624 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263409 4624 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263413 4624 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263417 4624 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263421 4624 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263424 4624 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263429 4624 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263434 4624 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263437 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263442 4624 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.263448 4624 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.263454 4624 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.271150 4624 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.271196 4624 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271264 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271274 4624 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271279 4624 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271284 4624 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271287 4624 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271291 4624 feature_gate.go:330] unrecognized feature gate: Example Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271296 4624 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271301 4624 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271306 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271311 4624 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271317 4624 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271321 4624 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271325 4624 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271329 4624 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271332 4624 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271336 4624 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271341 4624 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271348 4624 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271353 4624 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271358 4624 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271361 4624 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271365 4624 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271370 4624 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271374 4624 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271378 4624 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271381 4624 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271385 4624 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271389 4624 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271392 4624 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271397 4624 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271402 4624 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271406 4624 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271410 4624 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271414 4624 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271421 4624 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271426 4624 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271430 4624 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271434 4624 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271438 4624 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271441 4624 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271445 4624 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271449 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271454 4624 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271457 4624 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271461 4624 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271464 4624 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271468 4624 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271471 4624 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271475 4624 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271478 4624 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271482 4624 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271485 4624 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271489 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271492 4624 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271496 4624 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271499 4624 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271503 4624 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271506 4624 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271510 4624 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271514 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271684 4624 feature_gate.go:330] unrecognized 
feature gate: ManagedBootImages Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271688 4624 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271692 4624 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271718 4624 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271724 4624 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271729 4624 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271733 4624 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271737 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271741 4624 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271746 4624 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271751 4624 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.271757 4624 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271896 4624 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271902 4624 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271906 4624 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271910 4624 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271914 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271917 4624 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271921 4624 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271924 4624 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271928 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271935 4624 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271939 4624 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271942 4624 
feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271946 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271949 4624 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271953 4624 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271956 4624 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271960 4624 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271964 4624 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271967 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271970 4624 feature_gate.go:330] unrecognized feature gate: Example Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271974 4624 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271978 4624 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271982 4624 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271987 4624 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271991 4624 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271995 4624 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.271999 4624 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272003 4624 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272007 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272011 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272014 4624 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272018 4624 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272023 4624 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272027 4624 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272031 4624 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272035 4624 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272040 4624 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272044 4624 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272048 4624 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272052 4624 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272056 4624 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272059 4624 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272063 4624 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272066 4624 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272070 4624 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272073 4624 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272077 4624 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272080 4624 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272083 4624 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272087 4624 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272093 4624 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272096 4624 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272101 4624 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272106 4624 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272110 4624 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272115 4624 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272119 4624 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272123 4624 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272127 4624 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272132 4624 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272135 4624 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272147 4624 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272152 4624 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272156 4624 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272159 4624 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272163 4624 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272167 4624 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272172 4624 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272176 4624 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272180 4624 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.272184 4624 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.272190 4624 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.273204 4624 server.go:940] "Client rotation is on, will bootstrap in background" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.277407 4624 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.277494 4624 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.279015 4624 server.go:997] "Starting client certificate rotation"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.279044 4624 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.279196 4624 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-08 04:44:13.759449029 +0000 UTC
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.279249 4624 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 2198h21m18.480202483s for next certificate rotation
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.306406 4624 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.312848 4624 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.331739 4624 log.go:25] "Validated CRI v1 runtime API"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.366872 4624 log.go:25] "Validated CRI v1 image API"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.368754 4624 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.373693 4624 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-10-08-14-11-02-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.373806 4624 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.385295 4624 manager.go:217] Machine: {Timestamp:2025-10-08 14:22:55.383095697 +0000 UTC m=+0.534030794 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:d04b3def-39e5-42af-93c0-ddcd07e8aaf4 BootID:e72be8b6-18a0-41a6-a9ba-9d43530841e9 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:56:cc:49 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:56:cc:49 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:bd:74:f7 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:96:2d:59 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:b3:58:d9 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:29:1c:10 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:de:fc:3b:80:af:2c Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:66:08:1e:d7:a7:9f Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.385606 4624 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
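Note: the rotation deadline logged above for kube-apiserver-client-kubelet (2026-01-08, about 47 days before the 2026-02-24 expiry, i.e. ~87% of a one-year lifetime) is consistent with the upstream certificate manager picking a random point in roughly the 70-90% band of the certificate's validity. A Go sketch of that computation; the 0.7 + 0.2*rand jitter window and the one-year issue date are assumptions, not taken from the log:

// rotation_sketch.go - approximates the jittered rotation deadline the
// certificate_manager lines above report.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks notBefore + [0.7, 0.9) of the certificate
// lifetime, the window the upstream manager is believed to use.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// notBefore is hypothetical (assumed one-year cert); only the
	// expiry below appears in the log.
	notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)
	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline)
	// Matches the "Waiting 2198h21m..." line only if evaluated at the
	// log's own timestamp, 2025-10-08 14:22:55 UTC.
	fmt.Println("waiting:", time.Until(deadline))
}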
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.385809 4624 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.387262 4624 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.387532 4624 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.387578 4624 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.387872 4624 topology_manager.go:138] "Creating topology manager with none policy"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.387883 4624 container_manager_linux.go:303] "Creating device plugin manager"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.388312 4624 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.388343 4624 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.389421 4624 state_mem.go:36] "Initialized new in-memory state store"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.389525 4624 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.394158 4624 kubelet.go:418] "Attempting to sync node with API server"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.394187 4624 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
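Note: the nodeConfig above fixes the node's resource accounting. SystemReserved (200m CPU, 350Mi memory, 350Mi ephemeral-storage) plus the memory.available hard-eviction threshold of 100Mi are carved out of capacity to produce the node's allocatable, per the standard kubelet formula (allocatable = capacity - kube-reserved - system-reserved - hard eviction threshold). A short Go sketch working through that arithmetic with the logged values; it is an illustration of the accounting, not the kubelet's implementation:

// allocatable_sketch.go - node-allocatable memory from the values in
// the Machine record and NodeConfig above.
package main

import "fmt"

const Mi = 1024 * 1024

func main() {
	capacity := int64(25199480832)    // MemoryCapacity from the Machine record
	systemReserved := int64(350 * Mi) // SystemReserved memory
	kubeReserved := int64(0)          // KubeReserved is null in the log
	evictionHard := int64(100 * Mi)   // memory.available hard threshold

	// The same subtraction applies per resource (CPU, ephemeral-storage).
	allocatable := capacity - systemReserved - kubeReserved - evictionHard
	fmt.Printf("allocatable memory: %d bytes (~%.2f GiB)\n",
		allocatable, float64(allocatable)/(1024*1024*1024))
}

With the logged 25199480832-byte capacity (~23.47 GiB), this leaves roughly 23.03 GiB allocatable for pods.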
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.394214 4624 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.394231 4624 kubelet.go:324] "Adding apiserver pod source"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.394245 4624 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.398234 4624 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.399353 4624 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.401842 4624 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.402771 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused
Oct 08 14:22:55 crc kubenswrapper[4624]: E1008 14:22:55.402930 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError"
Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.402801 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused
Oct 08 14:22:55 crc kubenswrapper[4624]: E1008 14:22:55.403099 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.403151 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.403179 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.403190 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.403199 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.403211 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.403219 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.403226 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.403236 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.403244 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.403252 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.403263 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.403270 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.404343 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.404773 4624 server.go:1280] "Started kubelet"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.407870 4624 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.408087 4624 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.408520 4624 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 08 14:22:55 crc systemd[1]: Started Kubernetes Kubelet.
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.410039 4624 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.410075 4624 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.410470 4624 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:03:24.221128726 +0000 UTC
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.410536 4624 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 1006h40m28.810597706s for next certificate rotation
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.411180 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.414082 4624 volume_manager.go:287] "The desired_state_of_world populator starts"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.414097 4624 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.414109 4624 volume_manager.go:289] "Starting Kubelet Volume Manager"
Oct 08 14:22:55 crc kubenswrapper[4624]: E1008 14:22:55.414452 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Oct 08 14:22:55 crc kubenswrapper[4624]: E1008 14:22:55.414493 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="200ms"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.419061 4624 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.419089 4624 factory.go:55] Registering systemd factory
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.419099 4624 factory.go:221] Registration of the systemd container factory successfully
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.419482 4624 factory.go:153] Registering CRI-O factory
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.419502 4624 factory.go:221] Registration of the crio container factory successfully
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.419527 4624 factory.go:103] Registering Raw factory
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.419543 4624 manager.go:1196] Started watching for new ooms in manager
Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.420070 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused
Oct 08 14:22:55 crc kubenswrapper[4624]: E1008 14:22:55.420143 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.420168 4624 manager.go:319] Starting recovery of all containers
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.420113 4624 server.go:460] "Adding debug handlers to kubelet server"
Oct 08 14:22:55 crc kubenswrapper[4624]: E1008 14:22:55.419459 4624 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.154:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186c8a14955abd86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-08 14:22:55.404752262 +0000 UTC m=+0.555687339,LastTimestamp:2025-10-08 14:22:55.404752262 +0000 UTC m=+0.555687339,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426000 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426049 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426064 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd"
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426076 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426087 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426096 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426107 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426117 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426130 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426141 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426151 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426162 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426172 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426186 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426196 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426209 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426221 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426233 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426244 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426256 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426267 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426279 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426290 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426302 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426315 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.426327 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428082 4624 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428210 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428266 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428284 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428296 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428306 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428319 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428331 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428344 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428356 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428368 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428380 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428391 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428404 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428415 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428426 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428468 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428480 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428494 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428506 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428518 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428529 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428541 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428553 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428566 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428578 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428589 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428606 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428619 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428657 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428674 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428687 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428700 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428713 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428725 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428737 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428750 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428762 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428773 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428786 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428797 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428809 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428821 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428832 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428844 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428857 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428868 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428879 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428891 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428902 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428914 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428925 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428939 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428951 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428966 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428979 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.428991 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429004 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429017 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429029 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429041 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429054 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429067 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429079 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429092 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429104 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429115 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429128 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429139 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429151 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429165 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429177 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429189 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429200 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429212 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429225 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429237 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429249 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429262 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429279 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429294 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429307 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429319 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429332 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429344 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429357 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429371 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429384 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429397 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429410 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429423 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429434 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429453 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429466 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429478 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429489 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429501 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429512 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429523 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429537 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429550 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429561 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429574 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429587 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429599 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429612 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429623 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429651 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429664 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429676 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429693 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429706 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429718 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429729 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429740 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429750 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429760 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429771 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429784 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429823 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429836 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429849 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429861 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429872 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429884 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429896 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429906 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429918 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429930 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429940 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429951 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429962 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429973 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.429985 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430016 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430028 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430039 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430051 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430062 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430072 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430083 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430093 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430104 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430115 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430127 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430138 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430149 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430499 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430515 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430527 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430540 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430552 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430562 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430574 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430584 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430595 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430606 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430617 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430627 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430654 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430665 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430675 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430686 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430698 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430709 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430720 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430731 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430742 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430752 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430763 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430773 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430784 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430794 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430805 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430817 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430831 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430841 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430875 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430887 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430898 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430911 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430922 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430933 4624 reconstruct.go:97] "Volume reconstruction finished"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.430942 4624 reconciler.go:26] "Reconciler: start to sync state"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.432275 4624 manager.go:324] Recovery completed
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.445246 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.447682 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.447718 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.447726 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.448605 4624 cpu_manager.go:225] "Starting CPU manager" policy="none"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.448627 4624 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.448666 4624 state_mem.go:36] "Initialized new in-memory state store"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.462342 4624 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.463944 4624 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.464131 4624 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.464412 4624 kubelet.go:2335] "Starting kubelet main sync loop"
Oct 08 14:22:55 crc kubenswrapper[4624]: E1008 14:22:55.464548 4624 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.465014 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused
Oct 08 14:22:55 crc kubenswrapper[4624]: E1008 14:22:55.465077 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.472012 4624 policy_none.go:49] "None policy: Start"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.472563 4624 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.472588 4624 state_mem.go:35] "Initializing new in-memory state store"
Oct 08 14:22:55 crc kubenswrapper[4624]: E1008 14:22:55.514927 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.519713 4624 manager.go:334] "Starting Device Plugin manager"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.520014 4624 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.520032 4624 server.go:79] "Starting device plugin registration server"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.520784 4624 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.520821 4624 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.521150 4624 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.521215 4624 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.521222 4624 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 08 14:22:55 crc kubenswrapper[4624]: E1008 14:22:55.528662 4624 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.565497 4624 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.565590 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.570515 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.570561 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.570575 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.570895 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.571179 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.571221 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.571912 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.571940 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.571955 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.571980 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.572010 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.572021 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.572179 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.572350 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.572397 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.572937 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.572959 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.572968 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.573020 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.573035 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.573043 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.573096 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.573207 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.573237 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.573648 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.573674 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.573684 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.573801 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.573951 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.573990 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.574004 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.574020 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.574028 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.574353 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.574378 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.574387 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.574493 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.574516 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.574543 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.574568 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.574577 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.575038 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.575061 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.575068 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:55 crc kubenswrapper[4624]: E1008 14:22:55.615108 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="400ms"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.620989 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.622317 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.622392 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.622407 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.622437 4624 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: E1008 14:22:55.622944 4624 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection refused" node="crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.632659 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.633083 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.633120 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.633144 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.633167 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.633188 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.633276 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.633326 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.633746 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.633775 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.633813 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.633853 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.633896 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.633931 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.633954 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.736447 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.736519 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.736549 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.736579 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.736608 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.736630 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.736661 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.736665 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.736694 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.736671 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.736765 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.736789 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.736807 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.736840 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.736838 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.736897 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.736918 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.736867 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.736999 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.737025 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.737049 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.737076 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.737096 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.737117 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.737353 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.737377 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.737392 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.737421 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.737430 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.737463 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.823375 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.824682 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.824742 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.824753 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.824785 4624 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 08 14:22:55 crc kubenswrapper[4624]: E1008 14:22:55.825231 4624 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection refused" node="crc" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.899730 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.920919 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.938431 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-a8fcb54cd9bccc7cba6b56b7f00c9a067bd5577ef88d591e0b08d66ed6fc27e3 WatchSource:0}: Error finding container a8fcb54cd9bccc7cba6b56b7f00c9a067bd5577ef88d591e0b08d66ed6fc27e3: Status 404 returned error can't find the container with id a8fcb54cd9bccc7cba6b56b7f00c9a067bd5577ef88d591e0b08d66ed6fc27e3 Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.940949 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.965477 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-b5666f7ab792ac27ce2127412ebba8e23aaac47bbc9908c96f595df542e6d5e1 WatchSource:0}: Error finding container b5666f7ab792ac27ce2127412ebba8e23aaac47bbc9908c96f595df542e6d5e1: Status 404 returned error can't find the container with id b5666f7ab792ac27ce2127412ebba8e23aaac47bbc9908c96f595df542e6d5e1 Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.968169 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.973090 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-aa1b1a96567f49885a66b7a650e53daf49ef5ef764d3d489e1f3c796cd00dc49 WatchSource:0}: Error finding container aa1b1a96567f49885a66b7a650e53daf49ef5ef764d3d489e1f3c796cd00dc49: Status 404 returned error can't find the container with id aa1b1a96567f49885a66b7a650e53daf49ef5ef764d3d489e1f3c796cd00dc49 Oct 08 14:22:55 crc kubenswrapper[4624]: I1008 14:22:55.975326 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.985267 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-08638867836a9c0820eec2484bcac5eadae0b8b0de173e220e881e2dc8683ec2 WatchSource:0}: Error finding container 08638867836a9c0820eec2484bcac5eadae0b8b0de173e220e881e2dc8683ec2: Status 404 returned error can't find the container with id 08638867836a9c0820eec2484bcac5eadae0b8b0de173e220e881e2dc8683ec2 Oct 08 14:22:55 crc kubenswrapper[4624]: W1008 14:22:55.998676 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-69aa52e3fb8281fdc8cf98870e4643228d3fd907176f4f1a5cf9d76f69bcf402 WatchSource:0}: Error finding container 69aa52e3fb8281fdc8cf98870e4643228d3fd907176f4f1a5cf9d76f69bcf402: Status 404 returned error can't find the container with id 69aa52e3fb8281fdc8cf98870e4643228d3fd907176f4f1a5cf9d76f69bcf402 Oct 08 14:22:56 crc kubenswrapper[4624]: E1008 14:22:56.016177 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="800ms" Oct 08 14:22:56 crc kubenswrapper[4624]: I1008 14:22:56.226132 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 14:22:56 crc kubenswrapper[4624]: I1008 14:22:56.227360 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:22:56 crc kubenswrapper[4624]: I1008 14:22:56.227390 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:22:56 crc kubenswrapper[4624]: I1008 14:22:56.227403 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:22:56 crc kubenswrapper[4624]: I1008 14:22:56.227430 4624 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 08 14:22:56 crc kubenswrapper[4624]: E1008 14:22:56.228044 4624 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection refused" node="crc" Oct 08 14:22:56 crc kubenswrapper[4624]: W1008 14:22:56.301287 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Oct 08 14:22:56 crc kubenswrapper[4624]: E1008 14:22:56.301385 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Oct 08 14:22:56 crc kubenswrapper[4624]: W1008 14:22:56.391083 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Oct 08 14:22:56 crc kubenswrapper[4624]: E1008 14:22:56.391157 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Oct 08 14:22:56 crc kubenswrapper[4624]: I1008 14:22:56.412155 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Oct 08 14:22:56 crc kubenswrapper[4624]: I1008 14:22:56.468318 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b5666f7ab792ac27ce2127412ebba8e23aaac47bbc9908c96f595df542e6d5e1"} Oct 08 14:22:56 crc kubenswrapper[4624]: I1008 14:22:56.469381 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a8fcb54cd9bccc7cba6b56b7f00c9a067bd5577ef88d591e0b08d66ed6fc27e3"} Oct 08 14:22:56 crc kubenswrapper[4624]: I1008 14:22:56.470245 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"69aa52e3fb8281fdc8cf98870e4643228d3fd907176f4f1a5cf9d76f69bcf402"} Oct 08 14:22:56 crc kubenswrapper[4624]: I1008 14:22:56.472437 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"08638867836a9c0820eec2484bcac5eadae0b8b0de173e220e881e2dc8683ec2"} Oct 08 14:22:56 crc kubenswrapper[4624]: I1008 14:22:56.473148 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aa1b1a96567f49885a66b7a650e53daf49ef5ef764d3d489e1f3c796cd00dc49"} Oct 08 14:22:56 crc kubenswrapper[4624]: E1008 14:22:56.817027 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="1.6s" Oct 08 14:22:56 crc kubenswrapper[4624]: W1008 14:22:56.846321 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Oct 08 14:22:56 crc kubenswrapper[4624]: E1008 14:22:56.846413 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Oct 08 14:22:56 crc 
Oct 08 14:22:56 crc kubenswrapper[4624]: W1008 14:22:56.943476 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused
Oct 08 14:22:56 crc kubenswrapper[4624]: E1008 14:22:56.943573 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.028852 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.030069 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.030112 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.030124 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.030155 4624 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 08 14:22:57 crc kubenswrapper[4624]: E1008 14:22:57.030624 4624 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection refused" node="crc"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.413290 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.477086 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654"}
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.477135 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5"}
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.477148 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe"}
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.477158 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2"}
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.477274 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.478171 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.478199 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.478208 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.479042 4624 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5" exitCode=0
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.479092 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5"}
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.479122 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.479877 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.479893 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.479902 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.480146 4624 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="843e099fe1ed03b431523e327013b3a50cb666b200623a9f03eb20df5a250136" exitCode=0
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.480194 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"843e099fe1ed03b431523e327013b3a50cb666b200623a9f03eb20df5a250136"}
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.480313 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.481081 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.481612 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.481655 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.481668 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.481922 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.481945 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.481953 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.482157 4624 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="27753b9440b751e4763b3f99292d88307f7794740ff8c352e86b9cac4d21d3ff" exitCode=0
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.482213 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"27753b9440b751e4763b3f99292d88307f7794740ff8c352e86b9cac4d21d3ff"}
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.482227 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.482972 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.482994 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.483005 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.486124 4624 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d" exitCode=0
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.486162 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d"}
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.486195 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.486867 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.486896 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:57 crc kubenswrapper[4624]: I1008 14:22:57.486910 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.413178 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused
Oct 08 14:22:58 crc kubenswrapper[4624]: E1008 14:22:58.417933 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="3.2s"
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.492939 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"df2ce3e7ea0415656d449e2f2b4677e2ff8654984ff2417e29fd6a240938eefc"}
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.492999 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"66582c16b1a735d597ac77df3780cd199953116b97835aed248c275ff3eea23c"}
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.493017 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"920d41c8a17e549dbc9e474897693799091e5686977403a61bfef3b5373963ce"}
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.493086 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:58 crc kubenswrapper[4624]: W1008 14:22:58.494016 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused
Oct 08 14:22:58 crc kubenswrapper[4624]: E1008 14:22:58.494153 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError"
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.494297 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.494329 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.494339 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.496681 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728"}
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.496718 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5"}
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.496728 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7"}
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.496739 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d"}
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.498904 4624 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7013313bd37d8a90c3cdcff6bbcc5c476c4ec68773670309a76c758f8adfbe36" exitCode=0
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.499052 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.499047 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7013313bd37d8a90c3cdcff6bbcc5c476c4ec68773670309a76c758f8adfbe36"}
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.500529 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.500562 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.500573 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.502578 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.503049 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.503109 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7d4c296167c0ab7db52a78baf21298a70a2f9c3dfc146bdecc4bcc809abac7fd"}
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.503865 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.503898 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.503915 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.504081 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.504113 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.504124 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.631141 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.633478 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.633526 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.633543 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:58 crc kubenswrapper[4624]: I1008 14:22:58.633575 4624 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 08 14:22:58 crc kubenswrapper[4624]: E1008 14:22:58.634135 4624 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection refused" node="crc"
Oct 08 14:22:58 crc kubenswrapper[4624]: W1008 14:22:58.937145 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused
Oct 08 14:22:58 crc kubenswrapper[4624]: E1008 14:22:58.937229 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError"
Oct 08 14:22:59 crc kubenswrapper[4624]: W1008 14:22:59.226774 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused
Oct 08 14:22:59 crc kubenswrapper[4624]: E1008 14:22:59.226854 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError"
Oct 08 14:22:59 crc kubenswrapper[4624]: I1008 14:22:59.413269 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused
Oct 08 14:22:59 crc kubenswrapper[4624]: I1008 14:22:59.507623 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2ab07b649314c5dcba30feadec2fa442b1b4319e52ffad5ba6a12a233ea8adaa"}
Oct 08 14:22:59 crc kubenswrapper[4624]: I1008 14:22:59.507760 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:59 crc kubenswrapper[4624]: I1008 14:22:59.508528 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:59 crc kubenswrapper[4624]: I1008 14:22:59.508583 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:59 crc kubenswrapper[4624]: I1008 14:22:59.508595 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:59 crc kubenswrapper[4624]: I1008 14:22:59.510376 4624 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="76289094a837baf714d3d99145d3ba4b0ddf22eee3aee73a0f0cbf8edbe91c3d" exitCode=0
Oct 08 14:22:59 crc kubenswrapper[4624]: I1008 14:22:59.510412 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"76289094a837baf714d3d99145d3ba4b0ddf22eee3aee73a0f0cbf8edbe91c3d"}
Oct 08 14:22:59 crc kubenswrapper[4624]: I1008 14:22:59.510462 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 08 14:22:59 crc kubenswrapper[4624]: I1008 14:22:59.510495 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:59 crc kubenswrapper[4624]: I1008 14:22:59.510583 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:59 crc kubenswrapper[4624]: I1008 14:22:59.510689 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:22:59 crc kubenswrapper[4624]: I1008 14:22:59.511396 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:59 crc kubenswrapper[4624]: I1008 14:22:59.511428 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:59 crc kubenswrapper[4624]: I1008 14:22:59.511404 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:59 crc kubenswrapper[4624]: I1008 14:22:59.511439 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:59 crc kubenswrapper[4624]: I1008 14:22:59.511452 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:59 crc kubenswrapper[4624]: I1008 14:22:59.511467 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:59 crc kubenswrapper[4624]: I1008 14:22:59.511522 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:22:59 crc kubenswrapper[4624]: I1008 14:22:59.511552 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:22:59 crc kubenswrapper[4624]: I1008 14:22:59.511563 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:22:59 crc kubenswrapper[4624]: W1008 14:22:59.750794 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused
Oct 08 14:22:59 crc kubenswrapper[4624]: E1008 14:22:59.750873 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError"
Oct 08 14:23:00 crc kubenswrapper[4624]: I1008 14:23:00.397803 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 08 14:23:00 crc kubenswrapper[4624]: I1008 14:23:00.397940 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:23:00 crc kubenswrapper[4624]: I1008 14:23:00.398904 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:00 crc kubenswrapper[4624]: I1008 14:23:00.398941 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:00 crc kubenswrapper[4624]: I1008 14:23:00.398951 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:00 crc kubenswrapper[4624]: I1008 14:23:00.514542 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Oct 08 14:23:00 crc kubenswrapper[4624]: I1008 14:23:00.516448 4624 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2ab07b649314c5dcba30feadec2fa442b1b4319e52ffad5ba6a12a233ea8adaa" exitCode=255
Oct 08 14:23:00 crc kubenswrapper[4624]: I1008 14:23:00.516522 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2ab07b649314c5dcba30feadec2fa442b1b4319e52ffad5ba6a12a233ea8adaa"}
Oct 08 14:23:00 crc kubenswrapper[4624]: I1008 14:23:00.516574 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:23:00 crc kubenswrapper[4624]: I1008 14:23:00.517273 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:00 crc kubenswrapper[4624]: I1008 14:23:00.517307 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:00 crc kubenswrapper[4624]: I1008 14:23:00.517315 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:00 crc kubenswrapper[4624]: I1008 14:23:00.517834 4624 scope.go:117] "RemoveContainer" containerID="2ab07b649314c5dcba30feadec2fa442b1b4319e52ffad5ba6a12a233ea8adaa"
Oct 08 14:23:00 crc kubenswrapper[4624]: I1008 14:23:00.521524 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f9ac387fbfba05fb1a980dfae189ec2b056670f47c1da3f1427577da56d82b8d"}
Oct 08 14:23:00 crc kubenswrapper[4624]: I1008 14:23:00.521572 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b958a067b9c705e07c70ac2747480305a5933d9d824a59390da41d9488b2e23c"}
Oct 08 14:23:00 crc kubenswrapper[4624]: I1008 14:23:00.521584 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"08eed0232a0273f5326cd922fc000a06527761fc723442190c0e5acfc40bfb3e"}
Oct 08 14:23:00 crc kubenswrapper[4624]: I1008 14:23:00.748792 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 08 14:23:00 crc kubenswrapper[4624]: I1008 14:23:00.958481 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.528374 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cb5d516aecdb6ccb8628ddfd085c1040d4a298f1675df47e2582fb5139996e20"}
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.528420 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"332151d1c124ea8567578026dab3d484f0866133cba6537a923a2fb8ef6a07e6"}
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.528465 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.529177 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.529203 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.529211 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.530167 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.531417 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49"}
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.531533 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.531604 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.532201 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.532226 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.532235 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.834301 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.835422 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.835463 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.835476 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.835504 4624 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.874089 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.874234 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.875159 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.875192 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.875205 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:01 crc kubenswrapper[4624]: I1008 14:23:01.969178 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 08 14:23:02 crc kubenswrapper[4624]: I1008 14:23:02.489733 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 08 14:23:02 crc kubenswrapper[4624]: I1008 14:23:02.534081 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 08 14:23:02 crc kubenswrapper[4624]: I1008 14:23:02.534107 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:23:02 crc kubenswrapper[4624]: I1008 14:23:02.534211 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:23:02 crc kubenswrapper[4624]: I1008 14:23:02.534113 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:23:02 crc kubenswrapper[4624]: I1008 14:23:02.535482 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:02 crc kubenswrapper[4624]: I1008 14:23:02.535516 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:02 crc kubenswrapper[4624]: I1008 14:23:02.535517 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:02 crc kubenswrapper[4624]: I1008 14:23:02.535527 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:02 crc kubenswrapper[4624]: I1008 14:23:02.535540 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:02 crc kubenswrapper[4624]: I1008 14:23:02.535551 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:02 crc kubenswrapper[4624]: I1008 14:23:02.536507 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:02 crc kubenswrapper[4624]: I1008 14:23:02.536608 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:02 crc kubenswrapper[4624]: I1008 14:23:02.536730 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:03 crc kubenswrapper[4624]: I1008 14:23:03.160071 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 08 14:23:03 crc kubenswrapper[4624]: I1008 14:23:03.168084 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 08 14:23:03 crc kubenswrapper[4624]: I1008 14:23:03.397913 4624 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 08 14:23:03 crc kubenswrapper[4624]: I1008 14:23:03.398006 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Oct 08 14:23:03 crc kubenswrapper[4624]: I1008 14:23:03.536558 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:23:03 crc kubenswrapper[4624]: I1008 14:23:03.536627 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 08 14:23:03 crc kubenswrapper[4624]: I1008 14:23:03.536712 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:23:03 crc kubenswrapper[4624]: I1008 14:23:03.537594 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:03 crc kubenswrapper[4624]: I1008 14:23:03.537654 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:03 crc kubenswrapper[4624]: I1008 14:23:03.537666 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:03 crc kubenswrapper[4624]: I1008 14:23:03.538211 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:03 crc kubenswrapper[4624]: I1008 14:23:03.538254 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:03 crc kubenswrapper[4624]: I1008 14:23:03.538274 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:04 crc kubenswrapper[4624]: I1008 14:23:04.229387 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Oct 08 14:23:04 crc kubenswrapper[4624]: I1008 14:23:04.229594 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:23:04 crc kubenswrapper[4624]: I1008 14:23:04.231101 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:04 crc kubenswrapper[4624]: I1008 14:23:04.231145 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:04 crc kubenswrapper[4624]: I1008 14:23:04.231170 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:04 crc kubenswrapper[4624]: I1008 14:23:04.538354 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:23:04 crc kubenswrapper[4624]: I1008 14:23:04.539289 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:04 crc kubenswrapper[4624]: I1008 14:23:04.539335 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:04 crc kubenswrapper[4624]: I1008 14:23:04.539346 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:04 crc kubenswrapper[4624]: I1008 14:23:04.983434 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 08 14:23:04 crc kubenswrapper[4624]: I1008 14:23:04.983599 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:23:04 crc kubenswrapper[4624]: I1008 14:23:04.984688 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:04 crc kubenswrapper[4624]: I1008 14:23:04.984716 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:04 crc kubenswrapper[4624]: I1008 14:23:04.984725 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:05 crc kubenswrapper[4624]: I1008 14:23:05.499253 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 08 14:23:05 crc kubenswrapper[4624]: I1008 14:23:05.499453 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:23:05 crc kubenswrapper[4624]: I1008 14:23:05.500603 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:05 crc kubenswrapper[4624]: I1008 14:23:05.500667 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:05 crc kubenswrapper[4624]: I1008 14:23:05.500684 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:05 crc kubenswrapper[4624]: E1008 14:23:05.528788 4624 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Oct 08 14:23:10 crc kubenswrapper[4624]: I1008 14:23:10.096436 4624 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Oct 08 14:23:10 crc kubenswrapper[4624]: I1008 14:23:10.096494 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Oct 08 14:23:10 crc kubenswrapper[4624]: I1008 14:23:10.135999 4624 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403}
Oct 08 14:23:10 crc kubenswrapper[4624]: I1008 14:23:10.136293 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Oct 08 14:23:10 crc kubenswrapper[4624]: I1008 14:23:10.551014 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Oct 08 14:23:10 crc kubenswrapper[4624]: I1008 14:23:10.551673 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:23:10 crc kubenswrapper[4624]: I1008 14:23:10.552781 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:10 crc kubenswrapper[4624]: I1008 14:23:10.552813 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:10 crc kubenswrapper[4624]: I1008 14:23:10.552821 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:10 crc kubenswrapper[4624]: I1008 14:23:10.719731 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Oct 08 14:23:10 crc kubenswrapper[4624]: I1008 14:23:10.749983 4624 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Oct 08 14:23:10 crc kubenswrapper[4624]: I1008 14:23:10.750038 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Oct 08 14:23:11 crc kubenswrapper[4624]: I1008 14:23:11.553917 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:23:11 crc kubenswrapper[4624]: I1008 14:23:11.555151 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:11 crc kubenswrapper[4624]: I1008 14:23:11.555181 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:11 crc kubenswrapper[4624]: I1008 14:23:11.555195 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:11 crc kubenswrapper[4624]: I1008 14:23:11.569624 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Oct 08 14:23:11 crc kubenswrapper[4624]: I1008 14:23:11.975209 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 08 14:23:11 crc kubenswrapper[4624]: I1008 14:23:11.975341 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:23:11 crc kubenswrapper[4624]: I1008 14:23:11.975810 4624 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Oct 08 14:23:11 crc kubenswrapper[4624]: I1008 14:23:11.975857 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Oct 08 14:23:11 crc kubenswrapper[4624]: I1008 14:23:11.976369 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:11 crc kubenswrapper[4624]: I1008 14:23:11.976396 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:11 crc kubenswrapper[4624]: I1008 14:23:11.976406 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:11 crc kubenswrapper[4624]: I1008 14:23:11.978859 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 08 14:23:12 crc kubenswrapper[4624]: I1008 14:23:12.493881 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 08 14:23:12 crc kubenswrapper[4624]: I1008 14:23:12.494220 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:23:12 crc kubenswrapper[4624]: I1008 14:23:12.495382 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:12 crc kubenswrapper[4624]: I1008 14:23:12.495429 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:12 crc kubenswrapper[4624]: I1008 14:23:12.495440 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:12 crc kubenswrapper[4624]: I1008 14:23:12.555932 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:23:12 crc kubenswrapper[4624]: I1008 14:23:12.556033 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:23:12 crc kubenswrapper[4624]: I1008 14:23:12.556955 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:12 crc kubenswrapper[4624]: I1008 14:23:12.556985 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:12 crc kubenswrapper[4624]: I1008 14:23:12.556955 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:12 crc kubenswrapper[4624]: I1008 14:23:12.557021 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:12 crc kubenswrapper[4624]: I1008 14:23:12.556949 4624 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Oct 08 14:23:12 crc kubenswrapper[4624]: I1008 14:23:12.557044 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:12 crc kubenswrapper[4624]: I1008 14:23:12.557072 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Oct 08 14:23:12 crc kubenswrapper[4624]: I1008 14:23:12.556995 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:13 crc kubenswrapper[4624]: I1008 14:23:13.398828 4624 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 08 14:23:13 crc kubenswrapper[4624]: I1008 14:23:13.399144 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 08 14:23:14 crc kubenswrapper[4624]: I1008 14:23:14.984586 4624 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Oct 08 14:23:14 crc kubenswrapper[4624]: I1008 14:23:14.984652 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Oct 08 14:23:15 crc kubenswrapper[4624]: E1008 14:23:15.083229 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.086591 4624 trace.go:236] Trace[1144558652]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Oct-2025 14:23:04.581) (total time: 10505ms):
Oct 08 14:23:15 crc kubenswrapper[4624]: Trace[1144558652]: ---"Objects listed" error: 10505ms (14:23:15.086)
Oct 08 14:23:15 crc kubenswrapper[4624]: Trace[1144558652]: [10.505083712s] [10.505083712s] END
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.086624 4624 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.087603 4624 trace.go:236] Trace[579619525]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Oct-2025 14:23:04.412) (total time: 10675ms):
Oct 08 14:23:15 crc kubenswrapper[4624]: Trace[579619525]: ---"Objects listed" error: 10675ms (14:23:15.087)
Oct 08 14:23:15 crc kubenswrapper[4624]: Trace[579619525]: [10.675435888s] [10.675435888s] END
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.087627 4624 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Oct 08 14:23:15 crc kubenswrapper[4624]: E1008 14:23:15.087599 4624 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.088401 4624 trace.go:236] Trace[2097445272]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Oct-2025 14:23:04.929) (total time: 10158ms):
Oct 08 14:23:15 crc kubenswrapper[4624]: Trace[2097445272]: ---"Objects listed" error: 10158ms (14:23:15.088)
Oct 08 14:23:15 crc kubenswrapper[4624]: Trace[2097445272]: [10.158886987s] [10.158886987s] END
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.088420 4624 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.089081 4624 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.107193 4624 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.407439 4624 apiserver.go:52] "Watching apiserver"
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.410814 4624 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.411079 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-q5jzf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"]
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.411410 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.411589 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.411522 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.411532 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.411662 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.411845 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-q5jzf" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.411884 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:15 crc kubenswrapper[4624]: E1008 14:23:15.412350 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:23:15 crc kubenswrapper[4624]: E1008 14:23:15.412391 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:23:15 crc kubenswrapper[4624]: E1008 14:23:15.412415 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.414596 4624 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.416462 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.416575 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.416859 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.416891 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.416896 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.422181 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.422718 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.422852 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.422952 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Oct 08 14:23:15 crc 
kubenswrapper[4624]: I1008 14:23:15.426542 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.426895 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.427004 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.462024 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.481385 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.505575 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.509799 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.509947 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510004 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510033 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510080 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510100 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510343 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510365 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510387 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510412 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510433 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510454 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510473 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510491 4624 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510510 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510531 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510553 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510579 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510601 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510620 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510660 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510683 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510703 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 
08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510726 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510746 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510767 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510788 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510813 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510835 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510856 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510880 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510901 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510923 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510945 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510971 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.510993 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511015 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511037 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511056 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511086 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511110 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511131 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511152 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511173 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511195 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511216 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511239 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511259 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511282 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511302 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511327 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511348 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511372 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" 
(UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511392 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511414 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511457 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511479 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511506 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511528 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511550 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511569 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511591 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511612 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511652 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511676 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511698 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511719 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511741 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511767 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511791 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511811 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511831 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.511850 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512081 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512117 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512143 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512165 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512258 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512283 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512308 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512329 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512350 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512372 4624 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512394 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512417 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512472 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512498 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512523 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512647 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512675 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512703 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512728 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " 
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512753 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512778 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512802 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512824 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512848 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512870 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512892 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512913 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512935 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512958 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.512980 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513005 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513030 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513054 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513114 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513141 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513165 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513187 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513211 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513232 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513256 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513279 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513302 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513324 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513348 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513368 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513390 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513411 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513431 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513452 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513475 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513496 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513517 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513543 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513569 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513592 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513623 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513668 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513694 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513717 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513746 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513768 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513793 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513819 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513849 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513874 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513902 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513927 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513951 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.513975 4624 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514000 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514026 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514050 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514073 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514099 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514123 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514149 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514171 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514198 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 08 14:23:15 crc kubenswrapper[4624]: 
I1008 14:23:15.514221 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514245 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514269 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514296 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514320 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514345 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514371 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514396 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514419 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514446 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 08 14:23:15 crc kubenswrapper[4624]: 
I1008 14:23:15.514471 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514497 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514520 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514547 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514572 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514596 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514618 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514662 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514686 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514712 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514735 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514759 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514784 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514807 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514852 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514877 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514900 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514922 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514945 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514971 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.514994 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.516163 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.516205 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.516273 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.516305 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.516422 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.516542 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.516554 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.516790 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.516838 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: E1008 14:23:15.516963 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:23:16.016945115 +0000 UTC m=+21.167880192 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517044 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517049 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517076 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517123 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517267 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517297 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517335 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517372 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517399 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517423 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517451 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517457 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517486 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517509 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517536 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517561 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517586 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517610 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517615 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517650 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517679 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517704 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517746 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517777 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rppg5\" (UniqueName: \"kubernetes.io/projected/571dc074-be0f-40e1-92cd-96c4d94d6359-kube-api-access-rppg5\") pod \"node-resolver-q5jzf\" (UID: \"571dc074-be0f-40e1-92cd-96c4d94d6359\") " pod="openshift-dns/node-resolver-q5jzf" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517802 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517828 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517853 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517880 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517905 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517932 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517956 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/571dc074-be0f-40e1-92cd-96c4d94d6359-hosts-file\") pod \"node-resolver-q5jzf\" (UID: \"571dc074-be0f-40e1-92cd-96c4d94d6359\") " pod="openshift-dns/node-resolver-q5jzf" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517975 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.517982 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518031 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518039 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518062 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518093 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518117 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518144 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518170 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518193 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518208 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518225 4624 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518239 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518253 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518265 4624 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518279 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518291 4624 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518304 4624 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518317 4624 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518329 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518341 4624 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518355 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518368 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518381 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518395 4624 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518406 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518588 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518408 4624 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518625 4624 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518658 4624 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518671 4624 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518846 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.519110 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.519315 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.519622 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.520181 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.520366 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.520886 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.521032 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.521081 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.521135 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.521431 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.521525 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.521687 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.521823 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.523274 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.523440 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.523522 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.523744 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.524304 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: E1008 14:23:15.524335 4624 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 14:23:15 crc kubenswrapper[4624]: E1008 14:23:15.524421 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:16.024397537 +0000 UTC m=+21.175332694 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.524545 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.524574 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.524596 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.524674 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.524740 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.524830 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: E1008 14:23:15.524767 4624 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.525101 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.525157 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: E1008 14:23:15.525217 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:16.024918131 +0000 UTC m=+21.175853318 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.525244 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.525252 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.525371 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.525509 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.525646 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.527975 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
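The nestedpendingoperations entries above show why pod networking-console-plugin-85b44fc459-gdk6g is stuck: its nginx-conf ConfigMap and networking-console-plugin-cert Secret are not yet registered in the kubelet's object cache after the restart, so MountVolume.SetUp fails and a retry is scheduled ("No retries permitted until ... durationBeforeRetry 500ms"). On repeated failure the delay grows before being capped. A sketch of that schedule; only the 500ms starting point is taken from this log, while the doubling factor and the roughly two-minute cap are assumptions for illustration:

```go
package main

// Illustrate the retry schedule implied by "durationBeforeRetry 500ms"
// in the nestedpendingoperations entries above. Assumption: the delay
// doubles per consecutive failure and is capped; only the 500ms start
// is read from this log.

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond
	maxDelay := 2*time.Minute + 2*time.Second // assumed cap
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %2d: retry permitted after %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```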
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.528568 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.528869 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.528915 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.528970 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.529040 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.529089 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.529121 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.529180 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). 
InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.529455 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.529574 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.529702 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.529877 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.530052 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.518402 4624 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.531145 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.531530 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.531846 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.532069 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.532098 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.532851 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.532929 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.533348 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.533388 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.538806 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.539069 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.539372 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.539733 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.539750 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.540093 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.541063 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.541402 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.541454 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.541508 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.541776 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.541965 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.542059 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.542317 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.542422 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.542512 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.542568 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.542782 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.542874 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.543096 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.543285 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.543492 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
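The status_manager failure above is a bootstrap ordering problem rather than data loss: the kubelet's status patch for network-node-identity-vrzqb must pass through the pod.network-node-identity.openshift.io admission webhook, but that webhook is served on 127.0.0.1:9743 by the very pod being recreated, so the patch fails with "connection refused" until the webhook container is back. A trivial probe against the same endpoint (address taken verbatim from the log entry) shows when the listener returns:

```go
package main

// Probe the network-node-identity webhook endpoint from the log above.
// A "connection refused" error here reproduces the status-patch
// failure; once the webhook pod is running, the dial should succeed.

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
	if err != nil {
		fmt.Println("webhook not reachable:", err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("webhook port is accepting connections")
}
```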
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.546163 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.542167 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.551313 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.551731 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.552306 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.552472 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.552532 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.552941 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.554033 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.554440 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.554841 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.555665 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.557123 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.557168 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.557602 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.559201 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.559288 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.559471 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.559536 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.559958 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.559978 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.559986 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.560865 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.561102 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.561154 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.561420 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.561718 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.561880 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.561891 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.561923 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.562233 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.562261 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.562532 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.562698 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.562732 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.563185 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.563410 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.563274 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.563666 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.563941 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.564143 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.564781 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.564873 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.565085 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.565432 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.565803 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.566038 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.566191 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.566491 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.566915 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.567583 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.567732 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.568015 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.570049 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.570239 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.570329 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.570801 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.570873 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.571058 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.571111 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: E1008 14:23:15.571447 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 14:23:15 crc kubenswrapper[4624]: E1008 14:23:15.571473 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 14:23:15 crc kubenswrapper[4624]: E1008 14:23:15.571487 4624 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:15 crc kubenswrapper[4624]: E1008 14:23:15.571535 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:16.071519341 +0000 UTC m=+21.222454508 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.571936 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.572447 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.572818 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.572883 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.572993 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.573014 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.576680 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.576941 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.576982 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.577320 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.577612 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.578125 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.578548 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.579038 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.582723 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.583686 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.584496 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.584586 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.585119 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.586004 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.586482 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.586497 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.586712 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.586859 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.586896 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.586932 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.587728 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.589224 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.589888 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.590569 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.592716 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.592864 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.593276 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: E1008 14:23:15.594082 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 14:23:15 crc kubenswrapper[4624]: E1008 14:23:15.594101 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 14:23:15 crc kubenswrapper[4624]: E1008 14:23:15.594114 4624 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:15 crc kubenswrapper[4624]: E1008 14:23:15.594159 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:16.094142833 +0000 UTC m=+21.245077990 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.594229 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.595606 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.596094 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.596490 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.597401 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.602571 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.603325 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.605235 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.606204 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.609933 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.612470 4624 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49" exitCode=255 Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.612528 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49"} Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.612573 4624 scope.go:117] "RemoveContainer" containerID="2ab07b649314c5dcba30feadec2fa442b1b4319e52ffad5ba6a12a233ea8adaa" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.613690 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620372 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/571dc074-be0f-40e1-92cd-96c4d94d6359-hosts-file\") pod \"node-resolver-q5jzf\" (UID: \"571dc074-be0f-40e1-92cd-96c4d94d6359\") " pod="openshift-dns/node-resolver-q5jzf" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620452 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620471 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rppg5\" (UniqueName: \"kubernetes.io/projected/571dc074-be0f-40e1-92cd-96c4d94d6359-kube-api-access-rppg5\") pod \"node-resolver-q5jzf\" (UID: \"571dc074-be0f-40e1-92cd-96c4d94d6359\") " pod="openshift-dns/node-resolver-q5jzf" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620486 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620538 4624 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 08 
14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620548 4624 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620556 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620565 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620573 4624 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620583 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620591 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620600 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620608 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620617 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620625 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620648 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620659 4624 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620669 4624 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620712 4624 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620722 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620733 4624 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620744 4624 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620754 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620764 4624 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620773 4624 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620783 4624 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620793 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620812 4624 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620824 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620834 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620842 4624 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc 
kubenswrapper[4624]: I1008 14:23:15.620849 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620857 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620865 4624 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620872 4624 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620880 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620888 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620896 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620903 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620911 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620921 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620928 4624 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620936 4624 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620944 4624 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc 
kubenswrapper[4624]: I1008 14:23:15.620952 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620962 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620969 4624 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620977 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620984 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.620993 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621001 4624 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621010 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621018 4624 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621026 4624 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621034 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621041 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621049 4624 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621057 4624 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621065 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621073 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621081 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621089 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621096 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621104 4624 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621112 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621120 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621128 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621136 4624 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621144 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621152 4624 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621160 4624 
reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621167 4624 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621175 4624 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621183 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621191 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621198 4624 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621206 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621214 4624 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621222 4624 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621230 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621238 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621246 4624 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621255 4624 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621263 4624 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621270 4624 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621278 4624 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621286 4624 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621294 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621302 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621309 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621318 4624 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621326 4624 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621333 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621341 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621348 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621356 4624 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621364 4624 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621372 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621384 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621392 4624 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621399 4624 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621408 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621415 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621423 4624 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621431 4624 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621440 4624 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621448 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621455 4624 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621463 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621471 4624 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621478 4624 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621491 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621500 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621507 4624 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621515 4624 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621524 4624 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621531 4624 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621538 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621546 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621553 4624 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621560 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621568 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621575 4624 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621583 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621591 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621599 4624 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621606 4624 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621614 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621622 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621630 4624 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621655 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621664 4624 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621672 4624 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621679 4624 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621687 4624 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc 
kubenswrapper[4624]: I1008 14:23:15.621695 4624 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621702 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621710 4624 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621717 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621724 4624 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621732 4624 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621740 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621749 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621757 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621764 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621772 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621780 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621787 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 
14:23:15.621794 4624 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621802 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621810 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621817 4624 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621824 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621832 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621840 4624 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621848 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621855 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621862 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621870 4624 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621878 4624 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621886 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 08 
14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621894 4624 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621904 4624 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621912 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621919 4624 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621927 4624 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621934 4624 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621941 4624 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621953 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621961 4624 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621969 4624 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621976 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621984 4624 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.621992 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.622000 4624 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.622008 4624 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.622015 4624 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.622023 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.622155 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.622195 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/571dc074-be0f-40e1-92cd-96c4d94d6359-hosts-file\") pod \"node-resolver-q5jzf\" (UID: \"571dc074-be0f-40e1-92cd-96c4d94d6359\") " pod="openshift-dns/node-resolver-q5jzf" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.622215 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.626590 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.632327 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.635935 4624 scope.go:117] "RemoveContainer" containerID="469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49" Oct 08 14:23:15 crc kubenswrapper[4624]: E1008 14:23:15.636090 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.645253 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.648457 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rppg5\" (UniqueName: 
\"kubernetes.io/projected/571dc074-be0f-40e1-92cd-96c4d94d6359-kube-api-access-rppg5\") pod \"node-resolver-q5jzf\" (UID: \"571dc074-be0f-40e1-92cd-96c4d94d6359\") " pod="openshift-dns/node-resolver-q5jzf" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.660155 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.677111 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.702991 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ab07b649314c5dcba30feadec2fa442b1b4319e52ffad5ba6a12a233ea8adaa\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:22:59Z\\\",\\\"message\\\":\\\"W1008 14:22:58.624553 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1008 
14:22:58.625030 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759933378 cert, and key in /tmp/serving-cert-244373698/serving-signer.crt, /tmp/serving-cert-244373698/serving-signer.key\\\\nI1008 14:22:59.374580 1 observer_polling.go:159] Starting file observer\\\\nW1008 14:22:59.378316 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1008 14:22:59.378459 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:22:59.389577 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-244373698/tls.crt::/tmp/serving-cert-244373698/tls.key\\\\\\\"\\\\nF1008 14:22:59.726266 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.713363 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.722404 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.722536 4624 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.722661 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.729554 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.733299 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: W1008 14:23:15.734328 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-99adbdf6a05bb385d75d7900829d070f582ac64c0a67b85b6d1db6459455bd2c WatchSource:0}: Error finding container 99adbdf6a05bb385d75d7900829d070f582ac64c0a67b85b6d1db6459455bd2c: Status 404 returned error can't find the container with id 99adbdf6a05bb385d75d7900829d070f582ac64c0a67b85b6d1db6459455bd2c Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.737350 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 14:23:15 crc kubenswrapper[4624]: W1008 14:23:15.738887 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-8ad2a8bab41a33e6d33f8d3ea533e175dbc94270adf0d4cde554ec2ad15d604a WatchSource:0}: Error finding container 8ad2a8bab41a33e6d33f8d3ea533e175dbc94270adf0d4cde554ec2ad15d604a: Status 404 returned error can't find the container with id 8ad2a8bab41a33e6d33f8d3ea533e175dbc94270adf0d4cde554ec2ad15d604a Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.743180 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.746794 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-q5jzf" Oct 08 14:23:15 crc kubenswrapper[4624]: W1008 14:23:15.746853 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-313bf491e2699e559002531f6c065f5199cc5ec2ec8a5fe097cbdfddc2208365 WatchSource:0}: Error finding container 313bf491e2699e559002531f6c065f5199cc5ec2ec8a5fe097cbdfddc2208365: Status 404 returned error can't find the container with id 313bf491e2699e559002531f6c065f5199cc5ec2ec8a5fe097cbdfddc2208365 Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.749359 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.759840 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.770075 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.777917 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.785601 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.799448 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.813351 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.822201 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.832117 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ab07b649314c5dcba30feadec2fa442b1b4319e52ffad5ba6a12a233ea8adaa\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:22:59Z\\\",\\\"message\\\":\\\"W1008 14:22:58.624553 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1008 
14:22:58.625030 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759933378 cert, and key in /tmp/serving-cert-244373698/serving-signer.crt, /tmp/serving-cert-244373698/serving-signer.key\\\\nI1008 14:22:59.374580 1 observer_polling.go:159] Starting file observer\\\\nW1008 14:22:59.378316 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1008 14:22:59.378459 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:22:59.389577 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-244373698/tls.crt::/tmp/serving-cert-244373698/tls.key\\\\\\\"\\\\nF1008 14:22:59.726266 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.843010 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:15 crc kubenswrapper[4624]: I1008 14:23:15.851781 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.025404 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.025481 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.025502 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:16 crc kubenswrapper[4624]: E1008 14:23:16.025590 4624 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 14:23:16 crc kubenswrapper[4624]: E1008 14:23:16.025597 4624 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 14:23:16 crc kubenswrapper[4624]: E1008 14:23:16.025588 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:23:17.025564883 +0000 UTC m=+22.176499960 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:23:16 crc kubenswrapper[4624]: E1008 14:23:16.025666 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:17.025653095 +0000 UTC m=+22.176588172 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 14:23:16 crc kubenswrapper[4624]: E1008 14:23:16.025677 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:17.025672006 +0000 UTC m=+22.176607083 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.126457 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.126505 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:16 crc kubenswrapper[4624]: E1008 14:23:16.126622 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 14:23:16 crc kubenswrapper[4624]: E1008 14:23:16.126656 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 14:23:16 crc kubenswrapper[4624]: E1008 14:23:16.126660 4624 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 14:23:16 crc kubenswrapper[4624]: E1008 14:23:16.126710 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 14:23:16 crc kubenswrapper[4624]: E1008 14:23:16.126721 4624 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:16 crc kubenswrapper[4624]: E1008 14:23:16.126667 4624 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:16 crc kubenswrapper[4624]: E1008 14:23:16.126822 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:17.126756999 +0000 UTC m=+22.277692076 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:16 crc kubenswrapper[4624]: E1008 14:23:16.126839 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:17.126833221 +0000 UTC m=+22.277768298 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.615815 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619"} Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.615889 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"313bf491e2699e559002531f6c065f5199cc5ec2ec8a5fe097cbdfddc2208365"} Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.617716 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d"} Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.617740 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1"} Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.617750 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"99adbdf6a05bb385d75d7900829d070f582ac64c0a67b85b6d1db6459455bd2c"} Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.619131 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.621331 4624 scope.go:117] "RemoveContainer" containerID="469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.621422 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8ad2a8bab41a33e6d33f8d3ea533e175dbc94270adf0d4cde554ec2ad15d604a"} Oct 08 14:23:16 crc kubenswrapper[4624]: E1008 14:23:16.621493 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.622667 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-q5jzf" 
event={"ID":"571dc074-be0f-40e1-92cd-96c4d94d6359","Type":"ContainerStarted","Data":"78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431"} Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.622712 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-q5jzf" event={"ID":"571dc074-be0f-40e1-92cd-96c4d94d6359","Type":"ContainerStarted","Data":"8e68b3b63cbcec3863b8855795935a5d823677483d1083e84953154ca0532a10"} Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.631954 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0a
e3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ab07b649314c5dcba30feadec2fa442b1b4319e52ffad5ba6a12a233ea8adaa\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:22:59Z\\\",\\\"message\\\":\\\"W1008 14:22:58.624553 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1008 14:22:58.625030 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759933378 cert, and key in /tmp/serving-cert-244373698/serving-signer.crt, /tmp/serving-cert-244373698/serving-signer.key\\\\nI1008 14:22:59.374580 1 observer_polling.go:159] Starting file observer\\\\nW1008 14:22:59.378316 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1008 14:22:59.378459 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:22:59.389577 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-244373698/tls.crt::/tmp/serving-cert-244373698/tls.key\\\\\\\"\\\\nF1008 14:22:59.726266 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T1
4:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.642814 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.654552 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.662898 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.672045 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.680988 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.687551 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.701782 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.723149 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.735616 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.751319 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.760543 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.769285 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.778011 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.786066 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.797364 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.898565 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-47hzf"] Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.898918 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-47hzf" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.899813 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-zfrv8"] Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.900082 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-gfq4z"] Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.900224 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.900877 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.900974 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.901220 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.901770 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.901899 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.902089 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.902668 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.902886 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.903054 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.903119 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.903231 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.905164 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.914553 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.918532 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:16Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.932588 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-system-cni-dir\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.932893 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-multus-socket-dir-parent\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.933010 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-host-var-lib-kubelet\") pod 
\"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.933121 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a106d69-d531-4ee4-a9ed-505988ebd24d-mcd-auth-proxy-config\") pod \"machine-config-daemon-zfrv8\" (UID: \"8a106d69-d531-4ee4-a9ed-505988ebd24d\") " pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.933236 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-multus-cni-dir\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.933350 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-host-run-k8s-cni-cncf-io\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.933459 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6e2555ab-0f5c-452a-a4c3-273f6a327e06-cnibin\") pod \"multus-additional-cni-plugins-gfq4z\" (UID: \"6e2555ab-0f5c-452a-a4c3-273f6a327e06\") " pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.933591 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a106d69-d531-4ee4-a9ed-505988ebd24d-proxy-tls\") pod \"machine-config-daemon-zfrv8\" (UID: \"8a106d69-d531-4ee4-a9ed-505988ebd24d\") " pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.933718 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8a106d69-d531-4ee4-a9ed-505988ebd24d-rootfs\") pod \"machine-config-daemon-zfrv8\" (UID: \"8a106d69-d531-4ee4-a9ed-505988ebd24d\") " pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.933834 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xllt\" (UniqueName: \"kubernetes.io/projected/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-kube-api-access-2xllt\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.934131 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6e2555ab-0f5c-452a-a4c3-273f6a327e06-cni-binary-copy\") pod \"multus-additional-cni-plugins-gfq4z\" (UID: \"6e2555ab-0f5c-452a-a4c3-273f6a327e06\") " pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.934254 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-cnibin\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.934355 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-multus-daemon-config\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.934494 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-host-run-multus-certs\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.934660 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6e2555ab-0f5c-452a-a4c3-273f6a327e06-system-cni-dir\") pod \"multus-additional-cni-plugins-gfq4z\" (UID: \"6e2555ab-0f5c-452a-a4c3-273f6a327e06\") " pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.934697 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr5w9\" (UniqueName: \"kubernetes.io/projected/6e2555ab-0f5c-452a-a4c3-273f6a327e06-kube-api-access-nr5w9\") pod \"multus-additional-cni-plugins-gfq4z\" (UID: \"6e2555ab-0f5c-452a-a4c3-273f6a327e06\") " pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.934722 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-host-var-lib-cni-multus\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.934736 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-cni-binary-copy\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.934751 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-host-var-lib-cni-bin\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.934770 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6e2555ab-0f5c-452a-a4c3-273f6a327e06-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gfq4z\" (UID: \"6e2555ab-0f5c-452a-a4c3-273f6a327e06\") " pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:16 crc kubenswrapper[4624]: 
I1008 14:23:16.934791 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fptm6\" (UniqueName: \"kubernetes.io/projected/8a106d69-d531-4ee4-a9ed-505988ebd24d-kube-api-access-fptm6\") pod \"machine-config-daemon-zfrv8\" (UID: \"8a106d69-d531-4ee4-a9ed-505988ebd24d\") " pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.934806 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6e2555ab-0f5c-452a-a4c3-273f6a327e06-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gfq4z\" (UID: \"6e2555ab-0f5c-452a-a4c3-273f6a327e06\") " pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.934821 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6e2555ab-0f5c-452a-a4c3-273f6a327e06-os-release\") pod \"multus-additional-cni-plugins-gfq4z\" (UID: \"6e2555ab-0f5c-452a-a4c3-273f6a327e06\") " pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.934848 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-hostroot\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.934876 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-multus-conf-dir\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.934913 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-os-release\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.934945 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-host-run-netns\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.934963 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-etc-kubernetes\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.936217 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:16Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.955959 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:16Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.970392 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:16Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:16 crc kubenswrapper[4624]: I1008 14:23:16.985601 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:16Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.003576 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.019678 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.035449 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.035538 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-cni-binary-copy\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.035568 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-host-var-lib-cni-bin\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: E1008 14:23:17.035603 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:23:19.035580282 +0000 UTC m=+24.186515379 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.035622 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-host-var-lib-cni-bin\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.035656 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6e2555ab-0f5c-452a-a4c3-273f6a327e06-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gfq4z\" (UID: \"6e2555ab-0f5c-452a-a4c3-273f6a327e06\") " pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.035736 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fptm6\" (UniqueName: \"kubernetes.io/projected/8a106d69-d531-4ee4-a9ed-505988ebd24d-kube-api-access-fptm6\") pod \"machine-config-daemon-zfrv8\" (UID: \"8a106d69-d531-4ee4-a9ed-505988ebd24d\") " pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.035762 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6e2555ab-0f5c-452a-a4c3-273f6a327e06-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gfq4z\" (UID: \"6e2555ab-0f5c-452a-a4c3-273f6a327e06\") " pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.035785 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-hostroot\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 
Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.035806 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-multus-conf-dir\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.035852 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6e2555ab-0f5c-452a-a4c3-273f6a327e06-os-release\") pod \"multus-additional-cni-plugins-gfq4z\" (UID: \"6e2555ab-0f5c-452a-a4c3-273f6a327e06\") " pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.035874 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-os-release\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.035894 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-host-run-netns\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.035914 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-etc-kubernetes\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.035934 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-system-cni-dir\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.035956 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-multus-socket-dir-parent\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.035977 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-host-var-lib-kubelet\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.035996 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-host-run-k8s-cni-cncf-io\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.036033 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/8a106d69-d531-4ee4-a9ed-505988ebd24d-mcd-auth-proxy-config\") pod \"machine-config-daemon-zfrv8\" (UID: \"8a106d69-d531-4ee4-a9ed-505988ebd24d\") " pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.036051 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-multus-cni-dir\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.036075 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.036098 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.036119 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a106d69-d531-4ee4-a9ed-505988ebd24d-proxy-tls\") pod \"machine-config-daemon-zfrv8\" (UID: \"8a106d69-d531-4ee4-a9ed-505988ebd24d\") " pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.036139 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6e2555ab-0f5c-452a-a4c3-273f6a327e06-cnibin\") pod \"multus-additional-cni-plugins-gfq4z\" (UID: \"6e2555ab-0f5c-452a-a4c3-273f6a327e06\") " pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.036170 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8a106d69-d531-4ee4-a9ed-505988ebd24d-rootfs\") pod \"machine-config-daemon-zfrv8\" (UID: \"8a106d69-d531-4ee4-a9ed-505988ebd24d\") " pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.036190 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6e2555ab-0f5c-452a-a4c3-273f6a327e06-cni-binary-copy\") pod \"multus-additional-cni-plugins-gfq4z\" (UID: \"6e2555ab-0f5c-452a-a4c3-273f6a327e06\") " pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.036214 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xllt\" (UniqueName: \"kubernetes.io/projected/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-kube-api-access-2xllt\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.036242 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-multus-daemon-config\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.036288 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-cnibin\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.036336 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-host-run-multus-certs\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.036361 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr5w9\" (UniqueName: \"kubernetes.io/projected/6e2555ab-0f5c-452a-a4c3-273f6a327e06-kube-api-access-nr5w9\") pod \"multus-additional-cni-plugins-gfq4z\" (UID: \"6e2555ab-0f5c-452a-a4c3-273f6a327e06\") " pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.036386 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6e2555ab-0f5c-452a-a4c3-273f6a327e06-system-cni-dir\") pod \"multus-additional-cni-plugins-gfq4z\" (UID: \"6e2555ab-0f5c-452a-a4c3-273f6a327e06\") " pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.036409 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-host-var-lib-cni-multus\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.036466 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-host-var-lib-cni-multus\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.036485 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-cni-binary-copy\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.037112 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a106d69-d531-4ee4-a9ed-505988ebd24d-mcd-auth-proxy-config\") pod \"machine-config-daemon-zfrv8\" (UID: \"8a106d69-d531-4ee4-a9ed-505988ebd24d\") " pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:23:17 crc kubenswrapper[4624]: E1008 14:23:17.037197 4624 configmap.go:193] Couldn't get configMap 
Oct 08 14:23:17 crc kubenswrapper[4624]: E1008 14:23:17.037197 4624 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 14:23:17 crc kubenswrapper[4624]: E1008 14:23:17.037252 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:19.037240054 +0000 UTC m=+24.188175131 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.037275 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6e2555ab-0f5c-452a-a4c3-273f6a327e06-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gfq4z\" (UID: \"6e2555ab-0f5c-452a-a4c3-273f6a327e06\") " pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.037330 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6e2555ab-0f5c-452a-a4c3-273f6a327e06-cnibin\") pod \"multus-additional-cni-plugins-gfq4z\" (UID: \"6e2555ab-0f5c-452a-a4c3-273f6a327e06\") " pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:17 crc kubenswrapper[4624]: E1008 14:23:17.037396 4624 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
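The "not registered" failures above (the nginx-conf ConfigMap source and the networking-console-plugin-cert Secret) come from the kubelet's local object manager: a volume source cannot be mounted until the kubelet has registered and synced that object for the pod, which lags the API server during early startup; nestedpendingoperations parks each failed mount with a backoff (durationBeforeRetry 2s here, growing on repeat failures). A quick cluster-side existence check separates "object actually missing" from "node cache not yet synced"; a sketch, assuming the official kubernetes Python client and a working kubeconfig:

# Sketch: check whether the two objects the kubelet reports as "not registered"
# exist on the API server. "Present" here while mounts still fail on the node
# points at informer/cache sync lag in the kubelet, not at a missing object.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()

NS = "openshift-network-console"
checks = [
    ("ConfigMap", v1.read_namespaced_config_map, "networking-console-plugin"),
    ("Secret", v1.read_namespaced_secret, "networking-console-plugin-cert"),
]
for kind, read, name in checks:
    try:
        read(name, NS)  # signature: read(name, namespace)
        print(f"{kind} {NS}/{name}: present on the API server")
    except ApiException as exc:
        print(f"{kind} {NS}/{name}: API error {exc.status}")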
Oct 08 14:23:17 crc kubenswrapper[4624]: E1008 14:23:17.037429 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:19.037419339 +0000 UTC m=+24.188354416 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.037531 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-hostroot\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.037571 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-multus-conf-dir\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.037570 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8a106d69-d531-4ee4-a9ed-505988ebd24d-rootfs\") pod \"machine-config-daemon-zfrv8\" (UID: \"8a106d69-d531-4ee4-a9ed-505988ebd24d\") " pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.037650 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-cnibin\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.037702 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-host-run-multus-certs\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.037782 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6e2555ab-0f5c-452a-a4c3-273f6a327e06-os-release\") pod \"multus-additional-cni-plugins-gfq4z\" (UID: \"6e2555ab-0f5c-452a-a4c3-273f6a327e06\") " pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.037190 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-multus-cni-dir\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.037845 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6e2555ab-0f5c-452a-a4c3-273f6a327e06-system-cni-dir\") pod \"multus-additional-cni-plugins-gfq4z\" (UID: \"6e2555ab-0f5c-452a-a4c3-273f6a327e06\") " pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.037911 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-host-var-lib-kubelet\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.037966 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-multus-socket-dir-parent\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.037968 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6e2555ab-0f5c-452a-a4c3-273f6a327e06-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gfq4z\" (UID: \"6e2555ab-0f5c-452a-a4c3-273f6a327e06\") " pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.038002 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-host-run-k8s-cni-cncf-io\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.037966 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-system-cni-dir\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.038021 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-os-release\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.038041 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-host-run-netns\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.038045 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-etc-kubernetes\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.038271 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6e2555ab-0f5c-452a-a4c3-273f6a327e06-cni-binary-copy\") pod \"multus-additional-cni-plugins-gfq4z\" (UID: \"6e2555ab-0f5c-452a-a4c3-273f6a327e06\") " pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.038572 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-multus-daemon-config\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 
crc kubenswrapper[4624]: I1008 14:23:17.039670 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.060968 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.077905 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.088810 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.101986 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.111986 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.123032 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a106d69-d531-4ee4-a9ed-505988ebd24d-proxy-tls\") pod \"machine-config-daemon-zfrv8\" (UID: \"8a106d69-d531-4ee4-a9ed-505988ebd24d\") " pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.124400 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.125121 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xllt\" (UniqueName: \"kubernetes.io/projected/48aee8dd-6063-4d3c-b65a-f37ce1ccdb82-kube-api-access-2xllt\") pod \"multus-47hzf\" (UID: \"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\") " pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.125255 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr5w9\" (UniqueName: \"kubernetes.io/projected/6e2555ab-0f5c-452a-a4c3-273f6a327e06-kube-api-access-nr5w9\") pod \"multus-additional-cni-plugins-gfq4z\" (UID: \"6e2555ab-0f5c-452a-a4c3-273f6a327e06\") " pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.125838 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fptm6\" (UniqueName: \"kubernetes.io/projected/8a106d69-d531-4ee4-a9ed-505988ebd24d-kube-api-access-fptm6\") pod \"machine-config-daemon-zfrv8\" (UID: \"8a106d69-d531-4ee4-a9ed-505988ebd24d\") " pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:23:17 crc kubenswrapper[4624]: 
I1008 14:23:17.137189 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.137263 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:17 crc kubenswrapper[4624]: E1008 14:23:17.137386 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 14:23:17 crc kubenswrapper[4624]: E1008 14:23:17.137413 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 14:23:17 crc kubenswrapper[4624]: E1008 14:23:17.137426 4624 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:17 crc kubenswrapper[4624]: E1008 14:23:17.137474 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:19.137459685 +0000 UTC m=+24.288394762 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:17 crc kubenswrapper[4624]: E1008 14:23:17.137388 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 14:23:17 crc kubenswrapper[4624]: E1008 14:23:17.137539 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 14:23:17 crc kubenswrapper[4624]: E1008 14:23:17.137552 4624 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:17 crc kubenswrapper[4624]: E1008 14:23:17.137596 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:19.137585538 +0000 UTC m=+24.288520625 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.144696 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.159370 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.171778 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.183329 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.194674 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.210498 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.212620 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-47hzf" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.222370 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.229694 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.309882 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jbsj6"] Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.310803 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.314110 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.314123 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.314315 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.314369 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.314482 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.314507 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.314733 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.326881 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.338174 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.338854 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-kubelet\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.338918 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-systemd-units\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.338936 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpbcq\" (UniqueName: \"kubernetes.io/projected/aad1abb7-073f-4157-b39f-ddc71fbab31d-kube-api-access-kpbcq\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.338976 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-cni-netd\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.338997 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-run-systemd\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.339010 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-log-socket\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.339023 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aad1abb7-073f-4157-b39f-ddc71fbab31d-ovnkube-config\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.339045 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-run-ovn\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.339058 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-slash\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 
14:23:17.339077 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-etc-openvswitch\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.339091 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aad1abb7-073f-4157-b39f-ddc71fbab31d-env-overrides\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.339108 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-run-openvswitch\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.339121 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-run-ovn-kubernetes\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.339137 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aad1abb7-073f-4157-b39f-ddc71fbab31d-ovn-node-metrics-cert\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.339151 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aad1abb7-073f-4157-b39f-ddc71fbab31d-ovnkube-script-lib\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.339166 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-run-netns\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.339179 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-var-lib-openvswitch\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.339199 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-var-lib-cni-networks-ovn-kubernetes\") 
pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.339300 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-cni-bin\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.339341 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-node-log\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.352236 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.365058 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.379791 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.403454 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.416923 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.432280 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440150 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440186 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-cni-bin\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440203 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-node-log\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440221 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-kubelet\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440238 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-systemd-units\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440253 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpbcq\" (UniqueName: \"kubernetes.io/projected/aad1abb7-073f-4157-b39f-ddc71fbab31d-kube-api-access-kpbcq\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440302 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-cni-netd\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440323 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-run-systemd\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440336 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-log-socket\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440349 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aad1abb7-073f-4157-b39f-ddc71fbab31d-ovnkube-config\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440364 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-run-ovn\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440384 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-slash\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440398 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-etc-openvswitch\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440413 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aad1abb7-073f-4157-b39f-ddc71fbab31d-env-overrides\") pod \"ovnkube-node-jbsj6\" (UID: 
\"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440427 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-run-openvswitch\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440441 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-run-ovn-kubernetes\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440457 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aad1abb7-073f-4157-b39f-ddc71fbab31d-ovn-node-metrics-cert\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440471 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aad1abb7-073f-4157-b39f-ddc71fbab31d-ovnkube-script-lib\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440486 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-run-netns\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440500 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-var-lib-openvswitch\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440561 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-var-lib-openvswitch\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440602 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440624 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-cni-bin\") pod \"ovnkube-node-jbsj6\" (UID: 
\"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440660 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-node-log\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440682 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-kubelet\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440701 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-systemd-units\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440939 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-cni-netd\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440972 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-run-systemd\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.440989 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-log-socket\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.441130 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-run-openvswitch\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.441194 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-run-ovn\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.441228 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-slash\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.441262 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-etc-openvswitch\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.441805 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aad1abb7-073f-4157-b39f-ddc71fbab31d-env-overrides\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.442431 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-run-ovn-kubernetes\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.442483 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-run-netns\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.443260 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aad1abb7-073f-4157-b39f-ddc71fbab31d-ovnkube-script-lib\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.446911 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aad1abb7-073f-4157-b39f-ddc71fbab31d-ovnkube-config\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.449075 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aad1abb7-073f-4157-b39f-ddc71fbab31d-ovn-node-metrics-cert\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.457262 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.460659 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpbcq\" (UniqueName: \"kubernetes.io/projected/aad1abb7-073f-4157-b39f-ddc71fbab31d-kube-api-access-kpbcq\") pod \"ovnkube-node-jbsj6\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.465808 4624 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:17 crc kubenswrapper[4624]: E1008 14:23:17.465930 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.466922 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:17 crc kubenswrapper[4624]: E1008 14:23:17.467022 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.467073 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:17 crc kubenswrapper[4624]: E1008 14:23:17.467116 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.472791 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.473476 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.474995 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.475907 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.477126 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.477824 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.478533 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.479787 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.480573 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.481848 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.483017 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.483591 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.483994 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.485156 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.485923 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.487062 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.487766 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.491691 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.492273 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.493767 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.494709 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.495363 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.496630 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.497178 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.498411 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.499028 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.500085 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.500287 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.501199 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.501844 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.503865 
4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.504471 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.505811 4624 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.505939 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.509895 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.510734 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.511308 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.513952 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.514786 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.516875 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.517761 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.519420 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.520199 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.523315 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.524296 4624 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.525884 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.526578 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.526699 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.527760 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.528529 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.529861 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.530528 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.531687 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.532248 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.532925 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.534140 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.534718 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.626952 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5"} Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.626992 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e"} Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.627000 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"81d363e51011e6b72b9b1dc4a41a7d47335167173514bd9d83979909b6bb2d32"} Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.628517 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" event={"ID":"6e2555ab-0f5c-452a-a4c3-273f6a327e06","Type":"ContainerStarted","Data":"75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001"} Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.628540 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" event={"ID":"6e2555ab-0f5c-452a-a4c3-273f6a327e06","Type":"ContainerStarted","Data":"4b9def20cb72111e34fd61ab6273f47701c5cc1a1db65bbc2c236c172f7e392d"} Oct 08 
14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.629900 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-47hzf" event={"ID":"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82","Type":"ContainerStarted","Data":"c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b"} Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.629950 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-47hzf" event={"ID":"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82","Type":"ContainerStarted","Data":"f65da0096bafb66e23f044b7ba7ee008a3e4ccdc7099ed579ae8e2335397030b"} Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.641901 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.656243 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.668133 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.686732 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.694527 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.702906 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.719439 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.732297 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.750699 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.767557 4624 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.784184 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.796489 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.811003 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.827551 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.839978 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.857513 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f
6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.866872 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.878803 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.917671 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.961161 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:17 crc kubenswrapper[4624]: I1008 14:23:17.998702 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:17Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.042317 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.078677 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.119903 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.162339 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.635431 4624 generic.go:334] "Generic (PLEG): container finished" podID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerID="d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d" exitCode=0 Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.635715 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerDied","Data":"d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d"} Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.635974 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerStarted","Data":"95472a9143a71d686450277d854da6da7cf6aaae7d0fb9f91dfc4c61728bf5d2"} Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.639041 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a"} Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.642730 4624 generic.go:334] "Generic (PLEG): container finished" podID="6e2555ab-0f5c-452a-a4c3-273f6a327e06" containerID="75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001" exitCode=0 Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.642840 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" event={"ID":"6e2555ab-0f5c-452a-a4c3-273f6a327e06","Type":"ContainerDied","Data":"75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001"} Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.659192 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.675883 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webho
ok\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.690076 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.702765 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.716369 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.727129 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.737814 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.752901 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.767254 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.781128 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f
6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.794338 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.803184 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.816829 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.829251 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.842575 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.853024 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.864888 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.876770 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.920151 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:18 crc kubenswrapper[4624]: I1008 14:23:18.957947 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:18Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.008429 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z 
is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.036511 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.055798 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.056012 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:19 crc kubenswrapper[4624]: E1008 14:23:19.056025 4624 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.056035 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:19 crc kubenswrapper[4624]: E1008 14:23:19.056062 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:23:23.056032679 +0000 UTC m=+28.206967806 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:23:19 crc kubenswrapper[4624]: E1008 14:23:19.056105 4624 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 14:23:19 crc kubenswrapper[4624]: E1008 14:23:19.056132 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:23.056124091 +0000 UTC m=+28.207059168 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 14:23:19 crc kubenswrapper[4624]: E1008 14:23:19.056147 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:23.056140852 +0000 UTC m=+28.207075919 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.079076 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.120988 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.143579 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-trdbp"] Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.143986 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-trdbp" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.150053 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.157009 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.157074 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:19 crc kubenswrapper[4624]: E1008 14:23:19.157197 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 14:23:19 crc kubenswrapper[4624]: E1008 14:23:19.157216 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 14:23:19 crc kubenswrapper[4624]: E1008 14:23:19.157228 4624 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:19 crc kubenswrapper[4624]: E1008 14:23:19.157269 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:23.157253546 +0000 UTC m=+28.308188633 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:19 crc kubenswrapper[4624]: E1008 14:23:19.157572 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 14:23:19 crc kubenswrapper[4624]: E1008 14:23:19.157591 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 14:23:19 crc kubenswrapper[4624]: E1008 14:23:19.157601 4624 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:19 crc kubenswrapper[4624]: E1008 14:23:19.157646 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:23.157619705 +0000 UTC m=+28.308554792 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.169984 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.189660 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.209550 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.240954 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.258485 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9151abb1-b674-4a91-8b8b-00b1cdbb5bf1-serviceca\") pod \"node-ca-trdbp\" (UID: \"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\") " pod="openshift-image-registry/node-ca-trdbp" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.258564 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc5l7\" (UniqueName: \"kubernetes.io/projected/9151abb1-b674-4a91-8b8b-00b1cdbb5bf1-kube-api-access-zc5l7\") pod \"node-ca-trdbp\" (UID: \"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\") " pod="openshift-image-registry/node-ca-trdbp" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.258658 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9151abb1-b674-4a91-8b8b-00b1cdbb5bf1-host\") pod \"node-ca-trdbp\" (UID: \"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\") " pod="openshift-image-registry/node-ca-trdbp" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.280470 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.317595 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.359905 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9151abb1-b674-4a91-8b8b-00b1cdbb5bf1-serviceca\") pod \"node-ca-trdbp\" (UID: \"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\") " pod="openshift-image-registry/node-ca-trdbp" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.359944 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc5l7\" (UniqueName: \"kubernetes.io/projected/9151abb1-b674-4a91-8b8b-00b1cdbb5bf1-kube-api-access-zc5l7\") pod \"node-ca-trdbp\" (UID: \"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\") " pod="openshift-image-registry/node-ca-trdbp" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.359983 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9151abb1-b674-4a91-8b8b-00b1cdbb5bf1-host\") pod \"node-ca-trdbp\" (UID: \"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\") " pod="openshift-image-registry/node-ca-trdbp" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.360044 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9151abb1-b674-4a91-8b8b-00b1cdbb5bf1-host\") pod \"node-ca-trdbp\" (UID: \"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\") " pod="openshift-image-registry/node-ca-trdbp" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.360930 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9151abb1-b674-4a91-8b8b-00b1cdbb5bf1-serviceca\") pod \"node-ca-trdbp\" (UID: \"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\") " pod="openshift-image-registry/node-ca-trdbp" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.363939 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.387801 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc5l7\" (UniqueName: \"kubernetes.io/projected/9151abb1-b674-4a91-8b8b-00b1cdbb5bf1-kube-api-access-zc5l7\") pod \"node-ca-trdbp\" (UID: \"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\") " pod="openshift-image-registry/node-ca-trdbp" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.416543 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.457126 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.459266 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-trdbp" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.464824 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:19 crc kubenswrapper[4624]: E1008 14:23:19.465064 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.464888 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.464844 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:19 crc kubenswrapper[4624]: E1008 14:23:19.465670 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:23:19 crc kubenswrapper[4624]: E1008 14:23:19.465817 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:23:19 crc kubenswrapper[4624]: W1008 14:23:19.470198 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9151abb1_b674_4a91_8b8b_00b1cdbb5bf1.slice/crio-ba46bef39225230cac2c70696ef0985a73362a85fb7015895172886dab766f91 WatchSource:0}: Error finding container ba46bef39225230cac2c70696ef0985a73362a85fb7015895172886dab766f91: Status 404 returned error can't find the container with id ba46bef39225230cac2c70696ef0985a73362a85fb7015895172886dab766f91 Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.497429 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.538554 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.579457 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.619861 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.650116 4624 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-image-registry/node-ca-trdbp" event={"ID":"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1","Type":"ContainerStarted","Data":"ba46bef39225230cac2c70696ef0985a73362a85fb7015895172886dab766f91"} Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.652293 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" event={"ID":"6e2555ab-0f5c-452a-a4c3-273f6a327e06","Type":"ContainerStarted","Data":"263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad"} Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.655731 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerStarted","Data":"86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b"} Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.655777 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerStarted","Data":"fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f"} Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.655795 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerStarted","Data":"1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec"} Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.655804 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerStarted","Data":"825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b"} Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.664414 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc 
kubenswrapper[4624]: I1008 14:23:19.699968 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.739257 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.779808 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.821802 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.856721 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.899108 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc
32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.943608 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:19 crc kubenswrapper[4624]: I1008 14:23:19.979698 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:19Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.018595 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.057933 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.100436 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.141675 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.177845 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.215324 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.274188 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.401072 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.405028 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.408702 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.414206 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.429104 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.442503 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.457231 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.477848 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.520500 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\
":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.556533 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.597094 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.636531 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.662170 4624 generic.go:334] "Generic (PLEG): container finished" podID="6e2555ab-0f5c-452a-a4c3-273f6a327e06" containerID="263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad" exitCode=0 Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.662247 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" event={"ID":"6e2555ab-0f5c-452a-a4c3-273f6a327e06","Type":"ContainerDied","Data":"263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad"} Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.666450 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerStarted","Data":"5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e"} Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.666505 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerStarted","Data":"697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33"} Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.667782 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-trdbp" event={"ID":"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1","Type":"ContainerStarted","Data":"c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3"} Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.678040 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.718714 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.748817 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.749462 4624 scope.go:117] "RemoveContainer" containerID="469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49" Oct 08 14:23:20 crc kubenswrapper[4624]: E1008 14:23:20.749624 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.766223 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.796250 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.836781 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.878750 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.919971 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.962713 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:20 crc kubenswrapper[4624]: I1008 14:23:20.999199 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.040308 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.079943 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.116332 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.157582 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.197369 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.237979 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.277447 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.324118 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z 
is after 2025-08-24T17:21:41Z"
Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.356673 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 2025-08-24T17:21:41Z"
Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.465676 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.465718 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.465685 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 14:23:21 crc kubenswrapper[4624]: E1008 14:23:21.465849 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 08 14:23:21 crc kubenswrapper[4624]: E1008 14:23:21.465956 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 08 14:23:21 crc kubenswrapper[4624]: E1008 14:23:21.466050 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.488278 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.490369 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.490395 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.490403 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.490526 4624 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.496066 4624 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.496349 4624 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.497296 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.497325 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.497335 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.497349 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.497359 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:21Z","lastTransitionTime":"2025-10-08T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:23:21 crc kubenswrapper[4624]: E1008 14:23:21.508710 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 
2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.511661 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.511686 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.511694 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.511706 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.511715 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:21Z","lastTransitionTime":"2025-10-08T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:21 crc kubenswrapper[4624]: E1008 14:23:21.523591 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 
2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.527028 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.527063 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.527073 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.527086 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.527094 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:21Z","lastTransitionTime":"2025-10-08T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:21 crc kubenswrapper[4624]: E1008 14:23:21.537456 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 
2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.540042 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.540069 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.540077 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.540090 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.540099 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:21Z","lastTransitionTime":"2025-10-08T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:21 crc kubenswrapper[4624]: E1008 14:23:21.550231 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 
2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.552795 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.552827 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.552837 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.552852 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.552862 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:21Z","lastTransitionTime":"2025-10-08T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:21 crc kubenswrapper[4624]: E1008 14:23:21.564415 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 
2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: E1008 14:23:21.564526 4624 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.565923 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.565948 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.565956 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.565968 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.565977 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:21Z","lastTransitionTime":"2025-10-08T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.668539 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.668576 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.668585 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.668600 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.668610 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:21Z","lastTransitionTime":"2025-10-08T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.673052 4624 generic.go:334] "Generic (PLEG): container finished" podID="6e2555ab-0f5c-452a-a4c3-273f6a327e06" containerID="0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6" exitCode=0 Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.673120 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" event={"ID":"6e2555ab-0f5c-452a-a4c3-273f6a327e06","Type":"ContainerDied","Data":"0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6"} Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.684665 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.701264 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.714171 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.729691 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.738962 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.751158 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.762204 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.770799 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.770830 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.770838 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.770851 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.770860 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:21Z","lastTransitionTime":"2025-10-08T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.771893 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.782943 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.797258 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.838168 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.874997 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.875050 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.875061 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.875078 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.875089 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:21Z","lastTransitionTime":"2025-10-08T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.883553 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.921580 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z 
is after 2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.956283 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.978015 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.978067 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.978080 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.978098 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:21 crc kubenswrapper[4624]: I1008 14:23:21.978111 4624 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:21Z","lastTransitionTime":"2025-10-08T14:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.079951 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.079985 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.079995 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.080008 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.080020 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:22Z","lastTransitionTime":"2025-10-08T14:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.182535 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.182568 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.182592 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.182606 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.182615 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:22Z","lastTransitionTime":"2025-10-08T14:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.284799 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.284839 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.284847 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.284863 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.284873 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:22Z","lastTransitionTime":"2025-10-08T14:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.386859 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.386886 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.386895 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.386906 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.386915 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:22Z","lastTransitionTime":"2025-10-08T14:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.488684 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.488736 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.488746 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.488761 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.488770 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:22Z","lastTransitionTime":"2025-10-08T14:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.590787 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.590830 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.590840 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.590853 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.590862 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:22Z","lastTransitionTime":"2025-10-08T14:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.678467 4624 generic.go:334] "Generic (PLEG): container finished" podID="6e2555ab-0f5c-452a-a4c3-273f6a327e06" containerID="6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685" exitCode=0 Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.678542 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" event={"ID":"6e2555ab-0f5c-452a-a4c3-273f6a327e06","Type":"ContainerDied","Data":"6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685"} Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.684423 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerStarted","Data":"1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c"} Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.693821 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.693850 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.693860 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.693874 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.693883 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:22Z","lastTransitionTime":"2025-10-08T14:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.695671 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.709749 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.724413 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.744432 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.753810 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.764438 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.776968 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.788964 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:22Z is after 2025-08-24T17:21:41Z"
Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.795525 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.795589 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.795605 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.795757 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.795851 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:22Z","lastTransitionTime":"2025-10-08T14:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.806886 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.819742 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.836376 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a
04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.849039 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.858220 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.870604 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.897689 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.897725 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.897736 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.897752 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:22 crc kubenswrapper[4624]: I1008 14:23:22.897762 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:22Z","lastTransitionTime":"2025-10-08T14:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.000120 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.000162 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.000172 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.000187 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.000198 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:23Z","lastTransitionTime":"2025-10-08T14:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.095875 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:23:23 crc kubenswrapper[4624]: E1008 14:23:23.095997 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:23:31.09597533 +0000 UTC m=+36.246910407 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.096047 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.096079 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:23 crc kubenswrapper[4624]: E1008 14:23:23.096195 4624 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 14:23:23 crc kubenswrapper[4624]: E1008 14:23:23.096216 4624 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 14:23:23 crc kubenswrapper[4624]: E1008 14:23:23.096246 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:31.096239407 +0000 UTC m=+36.247174484 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 14:23:23 crc kubenswrapper[4624]: E1008 14:23:23.096258 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:31.096252727 +0000 UTC m=+36.247187804 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.105891 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.105928 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.105937 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.105952 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.105960 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:23Z","lastTransitionTime":"2025-10-08T14:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.196986 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.197057 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:23 crc kubenswrapper[4624]: E1008 14:23:23.197190 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 14:23:23 crc kubenswrapper[4624]: E1008 14:23:23.197197 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 14:23:23 crc kubenswrapper[4624]: E1008 14:23:23.197234 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 14:23:23 crc kubenswrapper[4624]: E1008 14:23:23.197247 4624 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:23 crc kubenswrapper[4624]: E1008 14:23:23.197295 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:31.197279729 +0000 UTC m=+36.348214806 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:23 crc kubenswrapper[4624]: E1008 14:23:23.197209 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 14:23:23 crc kubenswrapper[4624]: E1008 14:23:23.197322 4624 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:23 crc kubenswrapper[4624]: E1008 14:23:23.197372 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:31.197355891 +0000 UTC m=+36.348291008 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.208228 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.208269 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.208281 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.208296 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.208304 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:23Z","lastTransitionTime":"2025-10-08T14:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.310889 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.310931 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.310942 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.310958 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.310969 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:23Z","lastTransitionTime":"2025-10-08T14:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.413311 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.413351 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.413361 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.413377 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.413388 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:23Z","lastTransitionTime":"2025-10-08T14:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.465399 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.465467 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.465537 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:23 crc kubenswrapper[4624]: E1008 14:23:23.465534 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:23:23 crc kubenswrapper[4624]: E1008 14:23:23.465622 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:23:23 crc kubenswrapper[4624]: E1008 14:23:23.465717 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.516481 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.516522 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.516532 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.516554 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.516566 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:23Z","lastTransitionTime":"2025-10-08T14:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.619019 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.619054 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.619064 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.619079 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.619090 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:23Z","lastTransitionTime":"2025-10-08T14:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.690844 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" event={"ID":"6e2555ab-0f5c-452a-a4c3-273f6a327e06","Type":"ContainerStarted","Data":"aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29"} Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.705113 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.719382 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.720882 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.720918 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.720929 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.720950 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.720960 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:23Z","lastTransitionTime":"2025-10-08T14:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.731615 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.744788 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.755135 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.768304 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a
04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.779549 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.788615 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.801021 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.812477 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.823870 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.823918 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.823929 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.823944 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.823955 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:23Z","lastTransitionTime":"2025-10-08T14:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.826062 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.837699 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.856132 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:23Z 
is after 2025-08-24T17:21:41Z" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.866910 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.926028 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.926072 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.926085 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.926102 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:23 crc kubenswrapper[4624]: I1008 14:23:23.926115 4624 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:23Z","lastTransitionTime":"2025-10-08T14:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.028698 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.028729 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.028738 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.028753 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.028764 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:24Z","lastTransitionTime":"2025-10-08T14:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.130949 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.130988 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.131000 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.131017 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.131030 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:24Z","lastTransitionTime":"2025-10-08T14:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.232817 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.233171 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.233180 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.233194 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.233203 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:24Z","lastTransitionTime":"2025-10-08T14:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.335415 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.335470 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.335488 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.335509 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.335520 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:24Z","lastTransitionTime":"2025-10-08T14:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.437822 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.437859 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.437871 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.437887 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.437900 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:24Z","lastTransitionTime":"2025-10-08T14:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.540526 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.540586 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.540603 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.540627 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.540654 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:24Z","lastTransitionTime":"2025-10-08T14:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.642268 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.642309 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.642318 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.642333 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.642342 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:24Z","lastTransitionTime":"2025-10-08T14:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.697951 4624 generic.go:334] "Generic (PLEG): container finished" podID="6e2555ab-0f5c-452a-a4c3-273f6a327e06" containerID="aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29" exitCode=0 Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.698016 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" event={"ID":"6e2555ab-0f5c-452a-a4c3-273f6a327e06","Type":"ContainerDied","Data":"aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29"} Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.707097 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerStarted","Data":"c18e95bb34516a6828203a1da3f40b9aa647bee5edf01472e27a5f911385f6cd"} Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.707385 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.713581 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.725568 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.734205 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.744389 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.744414 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.744422 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.744434 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.744442 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:24Z","lastTransitionTime":"2025-10-08T14:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.750716 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.754896 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.760904 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.771563 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.785269 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.796231 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.811172 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.824355 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.837486 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.846493 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.846535 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.846545 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.846561 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.846572 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:24Z","lastTransitionTime":"2025-10-08T14:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.852559 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.864828 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.880098 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443
ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\
\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.891132 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.903251 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.913683 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.926078 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.939305 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.948992 4624 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.949061 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.949103 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.949120 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.949131 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:24Z","lastTransitionTime":"2025-10-08T14:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.953810 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.966263 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.976988 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.987327 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:24 crc kubenswrapper[4624]: I1008 14:23:24.997168 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:24Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.007505 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.018316 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.034757 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18e95bb34516a6828203a1da3f40b9aa647bee5
edf01472e27a5f911385f6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.043548 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.050922 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.050949 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.050958 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.050970 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.050979 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:25Z","lastTransitionTime":"2025-10-08T14:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.153466 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.154055 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.154075 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.154105 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.154116 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:25Z","lastTransitionTime":"2025-10-08T14:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.256722 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.256759 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.256769 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.256786 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.256796 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:25Z","lastTransitionTime":"2025-10-08T14:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.359122 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.359168 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.359180 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.359196 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.359206 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:25Z","lastTransitionTime":"2025-10-08T14:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.461816 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.461857 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.461867 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.461884 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.461896 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:25Z","lastTransitionTime":"2025-10-08T14:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.465064 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.465074 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.465154 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:25 crc kubenswrapper[4624]: E1008 14:23:25.465247 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:23:25 crc kubenswrapper[4624]: E1008 14:23:25.465514 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:23:25 crc kubenswrapper[4624]: E1008 14:23:25.465568 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.480107 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.491318 4624 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.502346 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.515030 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.526949 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.540168 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443
ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\
\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.553461 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.564679 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.564717 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.564728 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.564745 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.564759 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:25Z","lastTransitionTime":"2025-10-08T14:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.565528 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.577301 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.587781 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.599110 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.610849 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.627702 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18e95bb34516a6828203a1da3f40b9aa647bee5edf01472e27a5f911385f6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"
mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.637291 4624 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.666754 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.666789 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.666800 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.666816 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.666828 4624 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:25Z","lastTransitionTime":"2025-10-08T14:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.711913 4624 generic.go:334] "Generic (PLEG): container finished" podID="6e2555ab-0f5c-452a-a4c3-273f6a327e06" containerID="ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d" exitCode=0 Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.712038 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.712761 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" event={"ID":"6e2555ab-0f5c-452a-a4c3-273f6a327e06","Type":"ContainerDied","Data":"ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d"} Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.712793 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.725136 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.737877 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.743971 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.756810 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.770229 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.770273 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.770284 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.770299 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.770309 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:25Z","lastTransitionTime":"2025-10-08T14:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.771616 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.784470 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.799484 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.811780 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.822315 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.838621 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.852431 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.866575 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.872046 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.872071 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.872080 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.872092 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.872107 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:25Z","lastTransitionTime":"2025-10-08T14:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.880383 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.897674 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18e95bb34516a6828203a1da3f40b9aa647bee5edf01472e27a5f911385f6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lo
g/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.907340 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.918271 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.928373 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.940973 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.951933 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.960323 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.972288 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.973779 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.973802 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.973810 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.973822 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.973830 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:25Z","lastTransitionTime":"2025-10-08T14:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.982104 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:25 crc kubenswrapper[4624]: I1008 14:23:25.993623 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.004539 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:26Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.014985 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:26Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.031462 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f1
39266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18e95bb34516a6828203a1da3f40b9aa647bee5edf01472e27a5f911385f6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:26Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.042021 4624 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:26Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.051859 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:26Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.062932 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:26Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.076383 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.076416 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.076424 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.076436 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.076445 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:26Z","lastTransitionTime":"2025-10-08T14:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.178577 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.178611 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.178619 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.178673 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.178685 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:26Z","lastTransitionTime":"2025-10-08T14:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.281166 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.281202 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.281220 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.281236 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.281248 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:26Z","lastTransitionTime":"2025-10-08T14:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.384218 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.384262 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.384271 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.384289 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.384306 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:26Z","lastTransitionTime":"2025-10-08T14:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.486926 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.486993 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.487002 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.487020 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.487030 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:26Z","lastTransitionTime":"2025-10-08T14:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.589295 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.589333 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.589346 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.589365 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.589376 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:26Z","lastTransitionTime":"2025-10-08T14:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.691783 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.691811 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.691819 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.691831 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.691839 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:26Z","lastTransitionTime":"2025-10-08T14:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.717091 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.717737 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" event={"ID":"6e2555ab-0f5c-452a-a4c3-273f6a327e06","Type":"ContainerStarted","Data":"c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df"} Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.729843 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:26Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.747004 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:26Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.760047 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:26Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.773546 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:26Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.786164 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:26Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.793855 4624 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.793894 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.793903 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.793919 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.793929 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:26Z","lastTransitionTime":"2025-10-08T14:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.803029 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:26Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.816264 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:26Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.826415 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:26Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.841227 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:26Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.853758 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:26Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.866801 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:26Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.883573 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:26Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.896206 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.896443 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.896519 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.896600 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.896703 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:26Z","lastTransitionTime":"2025-10-08T14:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.905656 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18e95bb34516a6828203a1da3f40b9aa647bee5
edf01472e27a5f911385f6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:26Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.918226 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:26Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.999419 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.999455 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.999464 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.999478 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:26 crc kubenswrapper[4624]: I1008 14:23:26.999489 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:26Z","lastTransitionTime":"2025-10-08T14:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.101690 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.101720 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.101728 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.101741 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.101750 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:27Z","lastTransitionTime":"2025-10-08T14:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.203859 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.203896 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.203906 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.203923 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.203935 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:27Z","lastTransitionTime":"2025-10-08T14:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.306490 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.306528 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.306539 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.306555 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.306567 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:27Z","lastTransitionTime":"2025-10-08T14:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.408872 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.408920 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.408931 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.408946 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.408957 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:27Z","lastTransitionTime":"2025-10-08T14:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.465344 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.465381 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.465356 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:27 crc kubenswrapper[4624]: E1008 14:23:27.465482 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:23:27 crc kubenswrapper[4624]: E1008 14:23:27.465566 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:23:27 crc kubenswrapper[4624]: E1008 14:23:27.465681 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.510696 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.510722 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.510730 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.510746 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.510756 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:27Z","lastTransitionTime":"2025-10-08T14:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.613195 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.613223 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.613232 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.613245 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.613257 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:27Z","lastTransitionTime":"2025-10-08T14:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.714547 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.714573 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.714581 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.714593 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.714602 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:27Z","lastTransitionTime":"2025-10-08T14:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.721167 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbsj6_aad1abb7-073f-4157-b39f-ddc71fbab31d/ovnkube-controller/0.log" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.723305 4624 generic.go:334] "Generic (PLEG): container finished" podID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerID="c18e95bb34516a6828203a1da3f40b9aa647bee5edf01472e27a5f911385f6cd" exitCode=1 Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.723343 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerDied","Data":"c18e95bb34516a6828203a1da3f40b9aa647bee5edf01472e27a5f911385f6cd"} Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.725004 4624 scope.go:117] "RemoveContainer" containerID="c18e95bb34516a6828203a1da3f40b9aa647bee5edf01472e27a5f911385f6cd" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.737310 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:27Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.748866 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:27Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.760543 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:27Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.772058 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:27Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.784256 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:27Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.794481 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:27Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.814512 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18e95bb34516a6828203a1da3f40b9aa647bee5
edf01472e27a5f911385f6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c18e95bb34516a6828203a1da3f40b9aa647bee5edf01472e27a5f911385f6cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:27Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:27.464664 5837 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 14:23:27.464684 5837 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1008 14:23:27.464728 5837 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1008 14:23:27.464741 5837 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 14:23:27.464749 5837 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1008 14:23:27.464761 5837 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1008 14:23:27.464767 5837 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 14:23:27.464781 5837 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 14:23:27.464788 5837 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1008 14:23:27.464812 5837 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1008 14:23:27.464835 5837 factory.go:656] Stopping watch factory\\\\nI1008 14:23:27.464847 5837 ovnkube.go:599] Stopped ovnkube\\\\nI1008 14:23:27.464852 5837 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 14:23:27.464868 5837 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1008 14:23:27.464868 5837 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1008 
14:23:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:27Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.816364 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.816411 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.816431 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.816447 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.816458 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:27Z","lastTransitionTime":"2025-10-08T14:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.825498 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:27Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.838485 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:27Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.851691 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:27Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.866857 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:27Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.880492 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:27Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.891875 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:27Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.905975 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:27Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.920928 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.920982 4624 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.920993 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.921010 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:27 crc kubenswrapper[4624]: I1008 14:23:27.921023 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:27Z","lastTransitionTime":"2025-10-08T14:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.024053 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.024104 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.024115 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.024131 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.024142 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:28Z","lastTransitionTime":"2025-10-08T14:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.126333 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.126375 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.126386 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.126403 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.126413 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:28Z","lastTransitionTime":"2025-10-08T14:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.228513 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.228573 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.228598 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.228619 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.228645 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:28Z","lastTransitionTime":"2025-10-08T14:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.330813 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.330845 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.330854 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.330872 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.330881 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:28Z","lastTransitionTime":"2025-10-08T14:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.432961 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.433000 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.433010 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.433024 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.433040 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:28Z","lastTransitionTime":"2025-10-08T14:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.536889 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.536930 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.536947 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.536964 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.536975 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:28Z","lastTransitionTime":"2025-10-08T14:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.639543 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.639582 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.639593 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.639609 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.639621 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:28Z","lastTransitionTime":"2025-10-08T14:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.654827 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5"] Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.655274 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.656973 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.659172 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.673084 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.684877 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.696213 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.715382 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f1
39266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18e95bb34516a6828203a1da3f40b9aa647bee5edf01472e27a5f911385f6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c18e95bb34516a6828203a1da3f40b9aa647bee5edf01472e27a5f911385f6cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:27Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:27.464664 5837 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 14:23:27.464684 5837 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1008 14:23:27.464728 5837 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1008 14:23:27.464741 5837 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 14:23:27.464749 5837 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1008 14:23:27.464761 5837 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1008 14:23:27.464767 5837 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 14:23:27.464781 5837 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 14:23:27.464788 5837 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1008 14:23:27.464812 5837 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1008 14:23:27.464835 5837 factory.go:656] Stopping watch factory\\\\nI1008 14:23:27.464847 5837 ovnkube.go:599] Stopped ovnkube\\\\nI1008 14:23:27.464852 5837 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 14:23:27.464868 5837 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1008 14:23:27.464868 5837 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1008 
14:23:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.724271 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.728109 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbsj6_aad1abb7-073f-4157-b39f-ddc71fbab31d/ovnkube-controller/1.log" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.728752 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbsj6_aad1abb7-073f-4157-b39f-ddc71fbab31d/ovnkube-controller/0.log" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.731100 4624 generic.go:334] "Generic (PLEG): container finished" podID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerID="1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e" exitCode=1 Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.731141 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerDied","Data":"1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e"} Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.731177 4624 scope.go:117] "RemoveContainer" containerID="c18e95bb34516a6828203a1da3f40b9aa647bee5edf01472e27a5f911385f6cd" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.731931 4624 scope.go:117] "RemoveContainer" containerID="1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e" Oct 08 14:23:28 crc kubenswrapper[4624]: E1008 14:23:28.732206 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.738993 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5306007c-dc93-41eb-8623-19adb6234d92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7mdh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.741565 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.741607 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.741620 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.741654 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.741669 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:28Z","lastTransitionTime":"2025-10-08T14:23:28Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.750178 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.762137 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.774173 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.785746 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.790836 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5306007c-dc93-41eb-8623-19adb6234d92-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7mdh5\" (UID: \"5306007c-dc93-41eb-8623-19adb6234d92\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.790895 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gjdp\" (UniqueName: \"kubernetes.io/projected/5306007c-dc93-41eb-8623-19adb6234d92-kube-api-access-8gjdp\") pod \"ovnkube-control-plane-749d76644c-7mdh5\" (UID: \"5306007c-dc93-41eb-8623-19adb6234d92\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.790917 4624 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5306007c-dc93-41eb-8623-19adb6234d92-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7mdh5\" (UID: \"5306007c-dc93-41eb-8623-19adb6234d92\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.790933 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5306007c-dc93-41eb-8623-19adb6234d92-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7mdh5\" (UID: \"5306007c-dc93-41eb-8623-19adb6234d92\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.794813 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.806950 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syst
em-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.817106 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d0
3414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.825948 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.836289 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.843767 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.843800 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.843811 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.843827 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.843841 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:28Z","lastTransitionTime":"2025-10-08T14:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.848658 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.857737 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.870664 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.881975 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.891376 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.891546 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5306007c-dc93-41eb-8623-19adb6234d92-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7mdh5\" (UID: \"5306007c-dc93-41eb-8623-19adb6234d92\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.891612 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gjdp\" (UniqueName: \"kubernetes.io/projected/5306007c-dc93-41eb-8623-19adb6234d92-kube-api-access-8gjdp\") pod \"ovnkube-control-plane-749d76644c-7mdh5\" (UID: \"5306007c-dc93-41eb-8623-19adb6234d92\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.891630 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5306007c-dc93-41eb-8623-19adb6234d92-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7mdh5\" (UID: \"5306007c-dc93-41eb-8623-19adb6234d92\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.891666 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5306007c-dc93-41eb-8623-19adb6234d92-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7mdh5\" (UID: \"5306007c-dc93-41eb-8623-19adb6234d92\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.892841 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5306007c-dc93-41eb-8623-19adb6234d92-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7mdh5\" (UID: \"5306007c-dc93-41eb-8623-19adb6234d92\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.893011 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5306007c-dc93-41eb-8623-19adb6234d92-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7mdh5\" (UID: \"5306007c-dc93-41eb-8623-19adb6234d92\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.896166 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5306007c-dc93-41eb-8623-19adb6234d92-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7mdh5\" (UID: \"5306007c-dc93-41eb-8623-19adb6234d92\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.910465 4624 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gjdp\" (UniqueName: \"kubernetes.io/projected/5306007c-dc93-41eb-8623-19adb6234d92-kube-api-access-8gjdp\") pod \"ovnkube-control-plane-749d76644c-7mdh5\" (UID: \"5306007c-dc93-41eb-8623-19adb6234d92\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.912378 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.925296 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.936771 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.949654 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.949701 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.949713 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.949838 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.949903 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:28Z","lastTransitionTime":"2025-10-08T14:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.951088 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.965528 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.968727 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba
64d2b9f67449976a037d893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c18e95bb34516a6828203a1da3f40b9aa647bee5edf01472e27a5f911385f6cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:27Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:27.464664 5837 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 14:23:27.464684 5837 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1008 14:23:27.464728 5837 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1008 14:23:27.464741 5837 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 14:23:27.464749 5837 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1008 14:23:27.464761 5837 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1008 14:23:27.464767 5837 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 14:23:27.464781 5837 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 14:23:27.464788 5837 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1008 14:23:27.464812 5837 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1008 14:23:27.464835 5837 factory.go:656] Stopping watch factory\\\\nI1008 14:23:27.464847 5837 ovnkube.go:599] Stopped ovnkube\\\\nI1008 14:23:27.464852 5837 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 14:23:27.464868 5837 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1008 14:23:27.464868 5837 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1008 14:23:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"message\\\":\\\": failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z]\\\\nI1008 14:23:28.455785 6002 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1008 14:23:28.455785 6002 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 14:23:28.455788 6002 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1008 14:23:28.455729 6002 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-zfrv8 after 0 failed attempt(s)\\\\nI1008 14:23:28.455791 6002 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1008 14:23:28.455793 6002 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 14:23:28.455795 6002 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-targ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.978367 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:28 crc kubenswrapper[4624]: W1008 14:23:28.978623 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5306007c_dc93_41eb_8623_19adb6234d92.slice/crio-2579cf4aa4789b1369a69cb6fed72ff93c0bc843b4624eb113c9a33b2204d038 WatchSource:0}: Error finding container 2579cf4aa4789b1369a69cb6fed72ff93c0bc843b4624eb113c9a33b2204d038: Status 404 returned error can't find the container with id 2579cf4aa4789b1369a69cb6fed72ff93c0bc843b4624eb113c9a33b2204d038 Oct 08 14:23:28 crc kubenswrapper[4624]: I1008 14:23:28.991980 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5306007c-dc93-41eb-8623-19adb6234d92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7mdh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.009794 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:29Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.025376 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:29Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.039001 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:29Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.055993 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.056022 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.056032 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.056046 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.056057 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:29Z","lastTransitionTime":"2025-10-08T14:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.157949 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.157980 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.157988 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.158002 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.158011 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:29Z","lastTransitionTime":"2025-10-08T14:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.260453 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.260477 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.260486 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.260500 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.260508 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:29Z","lastTransitionTime":"2025-10-08T14:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.362478 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.362522 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.362533 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.362552 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.362568 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:29Z","lastTransitionTime":"2025-10-08T14:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.465717 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.465819 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:29 crc kubenswrapper[4624]: E1008 14:23:29.465840 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.465918 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:29 crc kubenswrapper[4624]: E1008 14:23:29.465992 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:23:29 crc kubenswrapper[4624]: E1008 14:23:29.466073 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.469188 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.469236 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.469248 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.469267 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.469277 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:29Z","lastTransitionTime":"2025-10-08T14:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.572416 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.572461 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.572471 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.572485 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.572496 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:29Z","lastTransitionTime":"2025-10-08T14:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.675212 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.675263 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.675273 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.675301 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.675313 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:29Z","lastTransitionTime":"2025-10-08T14:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.737147 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" event={"ID":"5306007c-dc93-41eb-8623-19adb6234d92","Type":"ContainerStarted","Data":"b4dc0ebdb4e16b2f9422955ffcb905fb339b1693769f313282484c51cbbbf4f2"} Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.737203 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" event={"ID":"5306007c-dc93-41eb-8623-19adb6234d92","Type":"ContainerStarted","Data":"ebd1c917e1220c97333e5377ef7dcee55acdf0a9db894eb0ed5dc5705075ceff"} Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.737217 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" event={"ID":"5306007c-dc93-41eb-8623-19adb6234d92","Type":"ContainerStarted","Data":"2579cf4aa4789b1369a69cb6fed72ff93c0bc843b4624eb113c9a33b2204d038"} Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.740256 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbsj6_aad1abb7-073f-4157-b39f-ddc71fbab31d/ovnkube-controller/1.log" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.743903 4624 scope.go:117] "RemoveContainer" containerID="1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e" Oct 08 14:23:29 crc kubenswrapper[4624]: E1008 14:23:29.744046 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.756407 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:29Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.768667 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:29Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.778209 4624 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.778234 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.778243 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.778257 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.778266 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:29Z","lastTransitionTime":"2025-10-08T14:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.784990 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:29Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.799894 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:29Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.814701 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:29Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.834665 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:29Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.861427 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c18e95bb34516a6828203a1da3f40b9aa647bee5edf01472e27a5f911385f6cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:27Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:27.464664 5837 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 14:23:27.464684 5837 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1008 14:23:27.464728 5837 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1008 14:23:27.464741 5837 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 14:23:27.464749 5837 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1008 14:23:27.464761 5837 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1008 14:23:27.464767 5837 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 14:23:27.464781 5837 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 14:23:27.464788 5837 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1008 14:23:27.464812 5837 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1008 14:23:27.464835 5837 factory.go:656] Stopping watch factory\\\\nI1008 14:23:27.464847 5837 ovnkube.go:599] Stopped ovnkube\\\\nI1008 14:23:27.464852 5837 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 14:23:27.464868 5837 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1008 14:23:27.464868 5837 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1008 14:23:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"message\\\":\\\": failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z]\\\\nI1008 14:23:28.455785 6002 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1008 14:23:28.455785 6002 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 14:23:28.455788 6002 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1008 14:23:28.455729 6002 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-zfrv8 after 0 failed 
attempt(s)\\\\nI1008 14:23:28.455791 6002 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1008 14:23:28.455793 6002 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 14:23:28.455795 6002 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-targ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:29Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.873824 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:29Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.882865 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.882901 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.882910 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.882924 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.882932 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:29Z","lastTransitionTime":"2025-10-08T14:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.889530 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5306007c-dc93-41eb-8623-19adb6234d92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebd1c917e1220c97333e5377ef7dcee55acdf0a9db894eb0ed5dc5705075ceff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dc0ebdb4e16b2f9422955ffcb905fb339b1693769f313282484c51cbbbf4f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:28Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7mdh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:29Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.901618 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:29Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.913170 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:29Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.923765 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:29Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.936104 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:29Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.948534 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:29Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.960804 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:29Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.974926 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:29Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.985127 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.985161 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.985170 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.985185 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.985195 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:29Z","lastTransitionTime":"2025-10-08T14:23:29Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:29 crc kubenswrapper[4624]: I1008 14:23:29.987472 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:29Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.001926 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:29Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.015889 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.024138 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.035895 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.047097 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5306007c-dc93-41eb-8623-19adb6234d92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebd1c917e1220c97333e5377ef7dcee55acdf0a9db894eb0ed5dc5705075ceff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dc0ebdb4e16b2f9422955ffcb905fb339b1693769f313282484c51cbbbf4f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7mdh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 
14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.059004 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.070188 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.083457 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.087000 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.087030 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.087038 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.087051 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.087060 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:30Z","lastTransitionTime":"2025-10-08T14:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.100995 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba
64d2b9f67449976a037d893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"message\\\":\\\": failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z]\\\\nI1008 14:23:28.455785 6002 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1008 14:23:28.455785 6002 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 14:23:28.455788 6002 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1008 14:23:28.455729 6002 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-zfrv8 after 0 failed attempt(s)\\\\nI1008 14:23:28.455791 6002 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1008 14:23:28.455793 6002 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 14:23:28.455795 6002 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-targ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.110913 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.124137 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.129057 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-qrmz6"] Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.129574 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:30 crc kubenswrapper[4624]: E1008 14:23:30.129659 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.140049 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.152168 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.162746 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.171255 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.180951 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.189161 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.189193 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.189205 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.189222 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.189232 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:30Z","lastTransitionTime":"2025-10-08T14:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.192705 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.203259 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.213772 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.229654 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f1
39266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"message\\\":\\\": failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z]\\\\nI1008 14:23:28.455785 6002 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1008 14:23:28.455785 6002 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 14:23:28.455788 6002 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1008 14:23:28.455729 6002 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-zfrv8 after 0 failed attempt(s)\\\\nI1008 14:23:28.455791 6002 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1008 14:23:28.455793 6002 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 14:23:28.455795 6002 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-diagnostics/network-check-targ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.239406 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.251279 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5306007c-dc93-41eb-8623-19adb6234d92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebd1c917e1220c97333e5377ef7dcee55acdf0a9db894eb0ed5dc5705075ceff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dc0ebdb4e16b2f9422955ffcb905fb339b1693769f313282484c51cbbbf4f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7mdh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 
14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.262088 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.274134 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.284109 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.290710 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.290744 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.290808 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.290827 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.290839 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:30Z","lastTransitionTime":"2025-10-08T14:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.292796 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrmz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8abf38af-8df3-49f9-9817-b4740d2a8b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrmz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.303757 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs\") pod \"network-metrics-daemon-qrmz6\" (UID: \"8abf38af-8df3-49f9-9817-b4740d2a8b4a\") " pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.303814 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqr9z\" (UniqueName: \"kubernetes.io/projected/8abf38af-8df3-49f9-9817-b4740d2a8b4a-kube-api-access-bqr9z\") pod \"network-metrics-daemon-qrmz6\" (UID: \"8abf38af-8df3-49f9-9817-b4740d2a8b4a\") " pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.303959 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.313784 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.326671 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:30Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.393197 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.393225 4624 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.393234 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.393246 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.393254 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:30Z","lastTransitionTime":"2025-10-08T14:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.404841 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs\") pod \"network-metrics-daemon-qrmz6\" (UID: \"8abf38af-8df3-49f9-9817-b4740d2a8b4a\") " pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.404887 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqr9z\" (UniqueName: \"kubernetes.io/projected/8abf38af-8df3-49f9-9817-b4740d2a8b4a-kube-api-access-bqr9z\") pod \"network-metrics-daemon-qrmz6\" (UID: \"8abf38af-8df3-49f9-9817-b4740d2a8b4a\") " pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:30 crc kubenswrapper[4624]: E1008 14:23:30.405175 4624 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 14:23:30 crc kubenswrapper[4624]: E1008 14:23:30.405219 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs podName:8abf38af-8df3-49f9-9817-b4740d2a8b4a nodeName:}" failed. No retries permitted until 2025-10-08 14:23:30.905201317 +0000 UTC m=+36.056136394 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs") pod "network-metrics-daemon-qrmz6" (UID: "8abf38af-8df3-49f9-9817-b4740d2a8b4a") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.419564 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqr9z\" (UniqueName: \"kubernetes.io/projected/8abf38af-8df3-49f9-9817-b4740d2a8b4a-kube-api-access-bqr9z\") pod \"network-metrics-daemon-qrmz6\" (UID: \"8abf38af-8df3-49f9-9817-b4740d2a8b4a\") " pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.498871 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.498922 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.498935 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.498951 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.498964 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:30Z","lastTransitionTime":"2025-10-08T14:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.601113 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.601154 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.601166 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.601191 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.601202 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:30Z","lastTransitionTime":"2025-10-08T14:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.705077 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.705106 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.705114 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.705127 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.705135 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:30Z","lastTransitionTime":"2025-10-08T14:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.807224 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.807267 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.807278 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.807292 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.807302 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:30Z","lastTransitionTime":"2025-10-08T14:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.909001 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.909037 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.909049 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.909065 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.909078 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:30Z","lastTransitionTime":"2025-10-08T14:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:30 crc kubenswrapper[4624]: I1008 14:23:30.909205 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs\") pod \"network-metrics-daemon-qrmz6\" (UID: \"8abf38af-8df3-49f9-9817-b4740d2a8b4a\") " pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:30 crc kubenswrapper[4624]: E1008 14:23:30.909305 4624 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 14:23:30 crc kubenswrapper[4624]: E1008 14:23:30.909353 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs podName:8abf38af-8df3-49f9-9817-b4740d2a8b4a nodeName:}" failed. No retries permitted until 2025-10-08 14:23:31.909340289 +0000 UTC m=+37.060275366 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs") pod "network-metrics-daemon-qrmz6" (UID: "8abf38af-8df3-49f9-9817-b4740d2a8b4a") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.010933 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.010962 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.010973 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.010988 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.010998 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:31Z","lastTransitionTime":"2025-10-08T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.111327 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.111434 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.111465 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.111538 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:23:47.111516745 +0000 UTC m=+52.262451832 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.111571 4624 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.111618 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:47.111602738 +0000 UTC m=+52.262537815 (durationBeforeRetry 16s). 
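The UnmountVolume failure above ("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers") is a registration-ordering problem: after a kubelet restart, CSI node plugins must re-register over a socket before kubelet can tear down their volumes. A small Go sketch that lists the registration sockets kubelet currently sees; /var/lib/kubelet/plugins_registry is the conventional registration directory and is an assumption for this CRC node:

```go
// List CSI plugin registration sockets visible to kubelet.
// The directory path is the conventional one and assumed here.
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	entries, err := os.ReadDir("/var/lib/kubelet/plugins_registry")
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		// A socket for kubevirt.io.hostpath-provisioner appearing here
		// is what allows the TearDown retry (scheduled for 14:23:47) to succeed.
		fmt.Println(e.Name())
	}
}
```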
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.111660 4624 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.111702 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:47.11169095 +0000 UTC m=+52.262626027 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.112667 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.112704 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.112714 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.112727 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.112755 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:31Z","lastTransitionTime":"2025-10-08T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.212033 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.212097 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.212209 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.212224 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.212235 4624 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.212275 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:47.2122635 +0000 UTC m=+52.363198577 (durationBeforeRetry 16s). 
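The durationBeforeRetry values scattered through these mount errors (500ms, then 1s, up to 16s for volumes like nginx-conf that have been failing longer) are consistent with kubelet doubling the delay after each consecutive failure of the same volume operation. An illustrative sketch of that doubling; the ceiling value is an assumption for illustration, since the actual cap is kubelet-internal and not visible in this log:

```go
// Illustrative exponential backoff matching the observed
// durationBeforeRetry progression (500ms -> 1s -> ... -> 16s).
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond // initial retry delay seen in the log
	maxDelay := 2 * time.Minute     // assumed ceiling, for illustration only
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: retry in %v\n", attempt, delay)
		delay *= 2 // double after each consecutive failure
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

This is why freshly failing volumes (metrics-certs above) start at 500ms while long-failing ones are already waiting 16s between attempts.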
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.212618 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.212649 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.212660 4624 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.212689 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 14:23:47.21268024 +0000 UTC m=+52.363615317 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.215821 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.215856 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.215866 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.215880 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.215951 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:31Z","lastTransitionTime":"2025-10-08T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.318279 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.318570 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.318579 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.318592 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.318602 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:31Z","lastTransitionTime":"2025-10-08T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.420470 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.420506 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.420522 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.420538 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.420548 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:31Z","lastTransitionTime":"2025-10-08T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.465143 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.465215 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.465234 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.465268 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.465324 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.465394 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.522018 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.522058 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.522069 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.522083 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.522093 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:31Z","lastTransitionTime":"2025-10-08T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.624227 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.624269 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.624285 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.624305 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.624321 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:31Z","lastTransitionTime":"2025-10-08T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.726680 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.726718 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.726743 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.726759 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.726768 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:31Z","lastTransitionTime":"2025-10-08T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.730619 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.730663 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.730672 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.730687 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.730696 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:31Z","lastTransitionTime":"2025-10-08T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.743273 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:31Z is after 
2025-08-24T17:21:41Z" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.746625 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.746667 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.746675 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.746688 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.746697 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:31Z","lastTransitionTime":"2025-10-08T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.758038 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:31Z is after 
2025-08-24T17:21:41Z" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.761001 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.761032 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.761074 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.761088 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.761097 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:31Z","lastTransitionTime":"2025-10-08T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.773428 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…],\\\"nodeInfo\\\":{…},\\\"runtimeHandlers\\\":[…]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:31Z is after 
2025-08-24T17:21:41Z" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.776937 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.776966 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.776976 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.776992 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.777006 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:31Z","lastTransitionTime":"2025-10-08T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.790228 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…],\\\"nodeInfo\\\":{…},\\\"runtimeHandlers\\\":[…]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:31Z is after 
2025-08-24T17:21:41Z" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.793958 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.793986 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.793994 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.794007 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.794017 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:31Z","lastTransitionTime":"2025-10-08T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.812205 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…],\\\"nodeInfo\\\":{…},\\\"runtimeHandlers\\\":[…]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:31Z is after 
2025-08-24T17:21:41Z" Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.812331 4624 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.828931 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.828968 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.828979 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.828993 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.829003 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:31Z","lastTransitionTime":"2025-10-08T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.918262 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs\") pod \"network-metrics-daemon-qrmz6\" (UID: \"8abf38af-8df3-49f9-9817-b4740d2a8b4a\") " pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.918444 4624 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 14:23:31 crc kubenswrapper[4624]: E1008 14:23:31.918535 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs podName:8abf38af-8df3-49f9-9817-b4740d2a8b4a nodeName:}" failed. No retries permitted until 2025-10-08 14:23:33.918514286 +0000 UTC m=+39.069449433 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs") pod "network-metrics-daemon-qrmz6" (UID: "8abf38af-8df3-49f9-9817-b4740d2a8b4a") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.930773 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.930810 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.930820 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.930843 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:31 crc kubenswrapper[4624]: I1008 14:23:31.930855 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:31Z","lastTransitionTime":"2025-10-08T14:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.032755 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.032783 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.032793 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.032809 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.032820 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:32Z","lastTransitionTime":"2025-10-08T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.135124 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.135149 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.135157 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.135169 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.135178 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:32Z","lastTransitionTime":"2025-10-08T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.237712 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.237747 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.237759 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.237777 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.237790 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:32Z","lastTransitionTime":"2025-10-08T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.340021 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.340094 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.340104 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.340117 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.340139 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:32Z","lastTransitionTime":"2025-10-08T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.442807 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.442849 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.442861 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.442881 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.442893 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:32Z","lastTransitionTime":"2025-10-08T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.465173 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:32 crc kubenswrapper[4624]: E1008 14:23:32.465309 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.544919 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.544954 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.544964 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.544979 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.544990 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:32Z","lastTransitionTime":"2025-10-08T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.647052 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.647084 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.647095 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.647111 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.647119 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:32Z","lastTransitionTime":"2025-10-08T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.748877 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.749762 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.749800 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.749815 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.749826 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:32Z","lastTransitionTime":"2025-10-08T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.852090 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.852126 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.852135 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.852190 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.852201 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:32Z","lastTransitionTime":"2025-10-08T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.954555 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.954592 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.954603 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.954615 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:32 crc kubenswrapper[4624]: I1008 14:23:32.954624 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:32Z","lastTransitionTime":"2025-10-08T14:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.057021 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.057053 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.057061 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.057074 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.057085 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:33Z","lastTransitionTime":"2025-10-08T14:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.159694 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.159730 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.159740 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.159758 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.159769 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:33Z","lastTransitionTime":"2025-10-08T14:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.262381 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.262590 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.262686 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.262840 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.262925 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:33Z","lastTransitionTime":"2025-10-08T14:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.364968 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.364999 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.365007 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.365020 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.365029 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:33Z","lastTransitionTime":"2025-10-08T14:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.465137 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.465208 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.465238 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:33 crc kubenswrapper[4624]: E1008 14:23:33.465294 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:23:33 crc kubenswrapper[4624]: E1008 14:23:33.465448 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:23:33 crc kubenswrapper[4624]: E1008 14:23:33.465785 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.465986 4624 scope.go:117] "RemoveContainer" containerID="469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.466868 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.466892 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.466901 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.466914 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.466923 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:33Z","lastTransitionTime":"2025-10-08T14:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.571071 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.571103 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.571112 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.571128 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.571140 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:33Z","lastTransitionTime":"2025-10-08T14:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.674159 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.674196 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.674209 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.674224 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.674233 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:33Z","lastTransitionTime":"2025-10-08T14:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.754831 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.756022 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"630954dc6b9710e09d5ba88bf292a65247c48ce5109d02f7dfb3c2f268b0cb24"} Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.756715 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.770818 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:33Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.778517 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.778554 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.778569 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.778584 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.778593 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:33Z","lastTransitionTime":"2025-10-08T14:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.790937 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:33Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.803580 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:33Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.820853 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba
64d2b9f67449976a037d893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"message\\\":\\\": failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z]\\\\nI1008 14:23:28.455785 6002 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1008 14:23:28.455785 6002 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 14:23:28.455788 6002 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1008 14:23:28.455729 6002 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-zfrv8 after 0 failed attempt(s)\\\\nI1008 14:23:28.455791 6002 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1008 14:23:28.455793 6002 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 14:23:28.455795 6002 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-targ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:33Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.830548 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:33Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.841475 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5306007c-dc93-41eb-8623-19adb6234d92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebd1c917e1220c97333e5377ef7dcee55acdf0a9db894eb0ed5dc5705075ceff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dc0ebdb4e16b2f9422955ffcb905fb339b1693769f313282484c51cbbbf4f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7mdh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:33Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.852877 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:33Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.872960 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:33Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.881150 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.881191 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.881201 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.881217 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.881227 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:33Z","lastTransitionTime":"2025-10-08T14:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.885957 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:33Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.895651 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrmz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8abf38af-8df3-49f9-9817-b4740d2a8b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrmz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:33Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.909707 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630954dc6b9710e09d5ba88bf292a65247c48ce5109d02f7dfb3c2f268b0cb24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:33Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.921129 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:33Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.936913 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs\") pod \"network-metrics-daemon-qrmz6\" (UID: \"8abf38af-8df3-49f9-9817-b4740d2a8b4a\") " pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:33 crc kubenswrapper[4624]: E1008 14:23:33.937074 4624 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 14:23:33 crc kubenswrapper[4624]: E1008 14:23:33.937170 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs podName:8abf38af-8df3-49f9-9817-b4740d2a8b4a nodeName:}" failed. No retries permitted until 2025-10-08 14:23:37.937148177 +0000 UTC m=+43.088083324 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs") pod "network-metrics-daemon-qrmz6" (UID: "8abf38af-8df3-49f9-9817-b4740d2a8b4a") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.937813 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:33Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.950210 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:33Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.958612 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:33Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.968545 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:33Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.983027 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.983056 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.983064 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.983079 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:33 crc kubenswrapper[4624]: I1008 14:23:33.983096 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:33Z","lastTransitionTime":"2025-10-08T14:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.085466 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.085509 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.085516 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.085531 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.085540 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:34Z","lastTransitionTime":"2025-10-08T14:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.188501 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.188534 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.188546 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.188561 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.188571 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:34Z","lastTransitionTime":"2025-10-08T14:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.291250 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.291331 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.291343 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.291369 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.291381 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:34Z","lastTransitionTime":"2025-10-08T14:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.394016 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.394059 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.394073 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.394089 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.394100 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:34Z","lastTransitionTime":"2025-10-08T14:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.465355 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:34 crc kubenswrapper[4624]: E1008 14:23:34.465502 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.496478 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.496517 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.496528 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.496545 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.496557 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:34Z","lastTransitionTime":"2025-10-08T14:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.599124 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.599157 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.599166 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.599179 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.599189 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:34Z","lastTransitionTime":"2025-10-08T14:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.701844 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.701890 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.701900 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.701917 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.701928 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:34Z","lastTransitionTime":"2025-10-08T14:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.804092 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.804122 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.804130 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.804142 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.804151 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:34Z","lastTransitionTime":"2025-10-08T14:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.906235 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.906266 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.906274 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.906288 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:34 crc kubenswrapper[4624]: I1008 14:23:34.906296 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:34Z","lastTransitionTime":"2025-10-08T14:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.008382 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.008420 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.008431 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.008446 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.008457 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:35Z","lastTransitionTime":"2025-10-08T14:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.110475 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.110510 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.110519 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.110531 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.110540 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:35Z","lastTransitionTime":"2025-10-08T14:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.151865 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.152519 4624 scope.go:117] "RemoveContainer" containerID="1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e" Oct 08 14:23:35 crc kubenswrapper[4624]: E1008 14:23:35.152679 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.213101 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.213160 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.213175 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.213193 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.213205 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:35Z","lastTransitionTime":"2025-10-08T14:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.314842 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.314881 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.314889 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.314901 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.314910 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:35Z","lastTransitionTime":"2025-10-08T14:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.417552 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.417601 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.417609 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.417621 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.417630 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:35Z","lastTransitionTime":"2025-10-08T14:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.464778 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.464874 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:35 crc kubenswrapper[4624]: E1008 14:23:35.464976 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:23:35 crc kubenswrapper[4624]: E1008 14:23:35.465201 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.465487 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:35 crc kubenswrapper[4624]: E1008 14:23:35.465573 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.476458 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:35Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.484359 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrmz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8abf38af-8df3-49f9-9817-b4740d2a8b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrmz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:35Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.495079 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:35Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.505825 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:35Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.516937 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630954dc6b9710e09d5ba88bf292a65247c48ce5109d02f7dfb3c2f268b0cb24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:35Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.520830 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.520864 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.520873 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.520887 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.520897 4624 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:35Z","lastTransitionTime":"2025-10-08T14:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.527702 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:35Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.540118 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverri
de-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:35Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.551802 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:35Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.564924 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:35Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.577393 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:35Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.587553 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:35Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.598745 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5306007c-dc93-41eb-8623-19adb6234d92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebd1c917e1220c97333e5377ef7dcee55acdf0a9db894eb0ed5dc5705075ceff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dc0ebdb4e16b2f9422955ffcb905fb339b1693769f313282484c51cbbbf4f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-co
nfig/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7mdh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:35Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.611017 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-o
verrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:35Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.622395 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:35Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.623646 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.623676 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.623686 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.623702 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.623714 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:35Z","lastTransitionTime":"2025-10-08T14:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.637344 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:35Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.653692 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"message\\\":\\\": failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z]\\\\nI1008 14:23:28.455785 6002 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1008 14:23:28.455785 6002 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 14:23:28.455788 6002 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1008 14:23:28.455729 6002 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-zfrv8 after 0 failed attempt(s)\\\\nI1008 14:23:28.455791 6002 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1008 14:23:28.455793 6002 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 
14:23:28.455795 6002 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-targ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:35Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.725453 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.725490 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.725499 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.725512 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.725522 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:35Z","lastTransitionTime":"2025-10-08T14:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.827739 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.827791 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.827804 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.827819 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.827829 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:35Z","lastTransitionTime":"2025-10-08T14:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.929817 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.929853 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.929862 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.929874 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:35 crc kubenswrapper[4624]: I1008 14:23:35.929885 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:35Z","lastTransitionTime":"2025-10-08T14:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.033555 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.033602 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.033613 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.033630 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.033661 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:36Z","lastTransitionTime":"2025-10-08T14:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.136958 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.137026 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.137035 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.137055 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.137085 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:36Z","lastTransitionTime":"2025-10-08T14:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.239798 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.239850 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.239863 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.239876 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.239888 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:36Z","lastTransitionTime":"2025-10-08T14:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.342290 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.342354 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.342372 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.342388 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.342400 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:36Z","lastTransitionTime":"2025-10-08T14:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.444303 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.444356 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.444373 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.444391 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.444415 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:36Z","lastTransitionTime":"2025-10-08T14:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.464793 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:36 crc kubenswrapper[4624]: E1008 14:23:36.464936 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.546414 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.546450 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.546468 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.546486 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.546497 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:36Z","lastTransitionTime":"2025-10-08T14:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.648434 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.648464 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.648476 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.648490 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.648500 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:36Z","lastTransitionTime":"2025-10-08T14:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.750508 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.750538 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.750548 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.750564 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.750575 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:36Z","lastTransitionTime":"2025-10-08T14:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.853070 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.853119 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.853131 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.853147 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.853157 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:36Z","lastTransitionTime":"2025-10-08T14:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.955164 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.955199 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.955207 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.955219 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:36 crc kubenswrapper[4624]: I1008 14:23:36.955229 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:36Z","lastTransitionTime":"2025-10-08T14:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.056883 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.056919 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.056929 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.056944 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.056955 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:37Z","lastTransitionTime":"2025-10-08T14:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.158666 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.158707 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.158722 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.158738 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.158750 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:37Z","lastTransitionTime":"2025-10-08T14:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.261430 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.261465 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.261474 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.261491 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.261501 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:37Z","lastTransitionTime":"2025-10-08T14:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.363817 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.363877 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.363888 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.363902 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.363915 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:37Z","lastTransitionTime":"2025-10-08T14:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.465095 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.465229 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:37 crc kubenswrapper[4624]: E1008 14:23:37.465288 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.465223 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:37 crc kubenswrapper[4624]: E1008 14:23:37.466204 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:23:37 crc kubenswrapper[4624]: E1008 14:23:37.465491 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.466731 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.466760 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.466771 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.466786 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.466801 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:37Z","lastTransitionTime":"2025-10-08T14:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.569717 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.569748 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.569759 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.569775 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.569787 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:37Z","lastTransitionTime":"2025-10-08T14:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.672375 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.672452 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.672463 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.672478 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.672487 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:37Z","lastTransitionTime":"2025-10-08T14:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.775285 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.775400 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.775430 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.775796 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.775837 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:37Z","lastTransitionTime":"2025-10-08T14:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.878817 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.878867 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.878895 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.878909 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.878918 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:37Z","lastTransitionTime":"2025-10-08T14:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.973908 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs\") pod \"network-metrics-daemon-qrmz6\" (UID: \"8abf38af-8df3-49f9-9817-b4740d2a8b4a\") " pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:37 crc kubenswrapper[4624]: E1008 14:23:37.974091 4624 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 14:23:37 crc kubenswrapper[4624]: E1008 14:23:37.974151 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs podName:8abf38af-8df3-49f9-9817-b4740d2a8b4a nodeName:}" failed. No retries permitted until 2025-10-08 14:23:45.974138262 +0000 UTC m=+51.125073329 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs") pod "network-metrics-daemon-qrmz6" (UID: "8abf38af-8df3-49f9-9817-b4740d2a8b4a") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.982286 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.982334 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.982346 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.982364 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:37 crc kubenswrapper[4624]: I1008 14:23:37.982375 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:37Z","lastTransitionTime":"2025-10-08T14:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.085505 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.085543 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.085554 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.085571 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.085584 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:38Z","lastTransitionTime":"2025-10-08T14:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.187615 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.187669 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.187682 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.187702 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.187718 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:38Z","lastTransitionTime":"2025-10-08T14:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.290292 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.290333 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.290348 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.290369 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.290385 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:38Z","lastTransitionTime":"2025-10-08T14:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.393569 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.393654 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.393678 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.393699 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.393714 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:38Z","lastTransitionTime":"2025-10-08T14:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.464901 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:38 crc kubenswrapper[4624]: E1008 14:23:38.465030 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.496228 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.496261 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.496270 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.496284 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.496292 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:38Z","lastTransitionTime":"2025-10-08T14:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.601190 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.601238 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.601250 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.601266 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.601278 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:38Z","lastTransitionTime":"2025-10-08T14:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.703039 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.703074 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.703084 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.703097 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.703105 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:38Z","lastTransitionTime":"2025-10-08T14:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.805310 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.805371 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.805384 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.805407 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.805422 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:38Z","lastTransitionTime":"2025-10-08T14:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.907759 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.907809 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.907820 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.907836 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:38 crc kubenswrapper[4624]: I1008 14:23:38.907850 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:38Z","lastTransitionTime":"2025-10-08T14:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.009754 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.009792 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.009800 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.009814 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.009826 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:39Z","lastTransitionTime":"2025-10-08T14:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.111838 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.111881 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.111897 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.111912 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.111922 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:39Z","lastTransitionTime":"2025-10-08T14:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.214573 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.214619 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.214629 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.214668 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.214680 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:39Z","lastTransitionTime":"2025-10-08T14:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.317431 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.317459 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.317467 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.317481 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.317490 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:39Z","lastTransitionTime":"2025-10-08T14:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.419839 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.419879 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.419888 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.419904 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.419913 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:39Z","lastTransitionTime":"2025-10-08T14:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.464952 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.465222 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:39 crc kubenswrapper[4624]: E1008 14:23:39.465289 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.465313 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:39 crc kubenswrapper[4624]: E1008 14:23:39.465457 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:23:39 crc kubenswrapper[4624]: E1008 14:23:39.465564 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.522456 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.522504 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.522515 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.522534 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.522546 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:39Z","lastTransitionTime":"2025-10-08T14:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.625592 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.625667 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.625681 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.625704 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.625720 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:39Z","lastTransitionTime":"2025-10-08T14:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.728335 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.728370 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.728378 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.728391 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.728399 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:39Z","lastTransitionTime":"2025-10-08T14:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.830208 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.830248 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.830267 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.830289 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.830300 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:39Z","lastTransitionTime":"2025-10-08T14:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.932095 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.932126 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.932161 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.932174 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:39 crc kubenswrapper[4624]: I1008 14:23:39.932185 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:39Z","lastTransitionTime":"2025-10-08T14:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.033611 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.033860 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.033962 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.034032 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.034108 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:40Z","lastTransitionTime":"2025-10-08T14:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.136475 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.136509 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.136519 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.136535 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.136545 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:40Z","lastTransitionTime":"2025-10-08T14:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.238238 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.238276 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.238287 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.238302 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.238313 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:40Z","lastTransitionTime":"2025-10-08T14:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.340556 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.340658 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.340672 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.340694 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.340706 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:40Z","lastTransitionTime":"2025-10-08T14:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.443065 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.443112 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.443123 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.443138 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.443150 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:40Z","lastTransitionTime":"2025-10-08T14:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.465541 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:40 crc kubenswrapper[4624]: E1008 14:23:40.465678 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.545475 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.545512 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.545522 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.545536 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.545546 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:40Z","lastTransitionTime":"2025-10-08T14:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.647926 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.647962 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.647978 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.647997 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.648009 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:40Z","lastTransitionTime":"2025-10-08T14:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.750196 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.750250 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.750262 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.750275 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.750287 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:40Z","lastTransitionTime":"2025-10-08T14:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.852620 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.852675 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.852684 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.852696 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.852706 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:40Z","lastTransitionTime":"2025-10-08T14:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.955151 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.955194 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.955205 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.955222 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:40 crc kubenswrapper[4624]: I1008 14:23:40.955234 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:40Z","lastTransitionTime":"2025-10-08T14:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.057864 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.057897 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.057904 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.057917 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.057927 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:41Z","lastTransitionTime":"2025-10-08T14:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.160600 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.160645 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.160654 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.160668 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.160676 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:41Z","lastTransitionTime":"2025-10-08T14:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.265592 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.265628 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.265662 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.265679 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.265694 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:41Z","lastTransitionTime":"2025-10-08T14:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.368415 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.368470 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.368483 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.368501 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.368513 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:41Z","lastTransitionTime":"2025-10-08T14:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.465466 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.465500 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.465466 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:41 crc kubenswrapper[4624]: E1008 14:23:41.465601 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:23:41 crc kubenswrapper[4624]: E1008 14:23:41.465671 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:23:41 crc kubenswrapper[4624]: E1008 14:23:41.465725 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.470298 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.470333 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.470345 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.470359 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.470369 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:41Z","lastTransitionTime":"2025-10-08T14:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.572749 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.572785 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.572794 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.572808 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.572817 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:41Z","lastTransitionTime":"2025-10-08T14:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.674932 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.674974 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.674984 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.674997 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.675007 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:41Z","lastTransitionTime":"2025-10-08T14:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.776895 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.777113 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.777226 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.777354 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.777461 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:41Z","lastTransitionTime":"2025-10-08T14:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.881803 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.881874 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.881888 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.881914 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.881929 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:41Z","lastTransitionTime":"2025-10-08T14:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.990451 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.990502 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.990512 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.990528 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:41 crc kubenswrapper[4624]: I1008 14:23:41.990538 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:41Z","lastTransitionTime":"2025-10-08T14:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.092711 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.092744 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.092753 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.092767 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.092778 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:42Z","lastTransitionTime":"2025-10-08T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.195029 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.195057 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.195068 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.195083 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.195093 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:42Z","lastTransitionTime":"2025-10-08T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.205400 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.205436 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.205445 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.205459 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.205470 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:42Z","lastTransitionTime":"2025-10-08T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:42 crc kubenswrapper[4624]: E1008 14:23:42.216739 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:42Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.220704 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.220737 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.220747 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.220760 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.220769 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:42Z","lastTransitionTime":"2025-10-08T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:42 crc kubenswrapper[4624]: E1008 14:23:42.231505 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:42Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.234723 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.234880 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.234974 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.235059 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.235139 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:42Z","lastTransitionTime":"2025-10-08T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:42 crc kubenswrapper[4624]: E1008 14:23:42.245624 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:42Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.249198 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.249276 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.249491 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.249522 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.249792 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:42Z","lastTransitionTime":"2025-10-08T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:42 crc kubenswrapper[4624]: E1008 14:23:42.264024 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:42Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.268096 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.268260 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
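Decoded, the blob the kubelet keeps retrying is an ordinary strategic-merge patch of .status: the four conditions above plus the node's image list and nodeInfo, serialized to JSON and then escaped once more by the journal. A minimal Go sketch of reading the conditions back out of such a patch; the embedded constant is a trimmed, unescaped excerpt of the Ready condition from the log, not the full payload:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Trimmed excerpt of the status patch from the log, with the journal's
// backslash escaping already removed. Illustrative only, not the full payload.
const patch = `{"status":{"conditions":[{"lastHeartbeatTime":"2025-10-08T14:23:42Z","lastTransitionTime":"2025-10-08T14:23:42Z","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?","reason":"KubeletNotReady","status":"False","type":"Ready"}]}}`

func main() {
	var doc struct {
		Status struct {
			Conditions []struct {
				Type    string `json:"type"`
				Status  string `json:"status"`
				Reason  string `json:"reason"`
				Message string `json:"message"`
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.Unmarshal([]byte(patch), &doc); err != nil {
		log.Fatal(err)
	}
	for _, c := range doc.Status.Conditions {
		fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}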
event="NodeHasNoDiskPressure" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.268417 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.268524 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.268673 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:42Z","lastTransitionTime":"2025-10-08T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:42 crc kubenswrapper[4624]: E1008 14:23:42.279938 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:42Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:42 crc kubenswrapper[4624]: E1008 14:23:42.280351 4624 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.297090 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
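Each retry, and the final give-up at 14:23:42.280351, fails the same way: the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z, roughly six weeks before the node's clock time of 2025-10-08T14:23:42Z. A minimal Go sketch for reading the certificate's validity window from the node; it assumes the endpoint is reachable and that skipping verification is acceptable for a read-only check (only the address is taken from the log):

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Webhook endpoint taken from the kubelet error message.
	addr := "127.0.0.1:9743"

	// InsecureSkipVerify lets the handshake complete even though the
	// certificate is expired, so its validity window can be inspected.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial %s: %v", addr, err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		fmt.Println("serving certificate has expired")
	}
}

Skipping verification is what lets the handshake complete at all; with verification on, the dial would fail with the same x509 expiry error the kubelet logs.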
event="NodeHasSufficientMemory" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.297349 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.297418 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.297483 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.297567 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:42Z","lastTransitionTime":"2025-10-08T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.399755 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.399798 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.399809 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.399824 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.399835 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:42Z","lastTransitionTime":"2025-10-08T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.465449 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:42 crc kubenswrapper[4624]: E1008 14:23:42.465578 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.501843 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.501882 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.501893 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.501909 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.501920 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:42Z","lastTransitionTime":"2025-10-08T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.603917 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.604128 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.604237 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.604345 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.604443 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:42Z","lastTransitionTime":"2025-10-08T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.706919 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.706963 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.706974 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.706992 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.707006 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:42Z","lastTransitionTime":"2025-10-08T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.810129 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.810165 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.810178 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.810193 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.810203 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:42Z","lastTransitionTime":"2025-10-08T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.915082 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.915132 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.915148 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.915229 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:42 crc kubenswrapper[4624]: I1008 14:23:42.915249 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:42Z","lastTransitionTime":"2025-10-08T14:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
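Separately from the webhook failure, every heartbeat above reports the node NotReady because the container runtime finds no CNI configuration under /etc/kubernetes/cni/net.d/. That config is normally written by the cluster network plugin once it is up, and on this cluster the plugin plausibly cannot come up for the same expired-certificate reason. A minimal sketch, assuming only the directory path quoted in the log, to check whether a CNI config has appeared yet:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the NetworkPluginNotReady message.
	dir := "/etc/kubernetes/cni/net.d"

	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}

	found := false
	for _, e := range entries {
		// libcni (used by CRI-O and containerd) loads .conf, .conflist
		// and .json files from this directory.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config present:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration yet; the network plugin has not written its config")
	}
}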
Has your network provider started?"} Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.017498 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.017522 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.017567 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.017583 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.017591 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:43Z","lastTransitionTime":"2025-10-08T14:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.119494 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.119550 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.119563 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.119578 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.119588 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:43Z","lastTransitionTime":"2025-10-08T14:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.222548 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.222873 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.223022 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.223116 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.223208 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:43Z","lastTransitionTime":"2025-10-08T14:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.325778 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.325824 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.325839 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.325859 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.325874 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:43Z","lastTransitionTime":"2025-10-08T14:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.428769 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.428806 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.428832 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.428853 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.428867 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:43Z","lastTransitionTime":"2025-10-08T14:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.465894 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.465929 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:43 crc kubenswrapper[4624]: E1008 14:23:43.466771 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.465937 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:43 crc kubenswrapper[4624]: E1008 14:23:43.467252 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:23:43 crc kubenswrapper[4624]: E1008 14:23:43.467515 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.530904 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.530956 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.530970 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.530992 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.531008 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:43Z","lastTransitionTime":"2025-10-08T14:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.633554 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.633586 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.633598 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.633613 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.633623 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:43Z","lastTransitionTime":"2025-10-08T14:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.735727 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.735756 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.735765 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.735777 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.735786 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:43Z","lastTransitionTime":"2025-10-08T14:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.838217 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.838287 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.838304 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.838335 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.838365 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:43Z","lastTransitionTime":"2025-10-08T14:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.941315 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.941373 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.941390 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.941417 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:43 crc kubenswrapper[4624]: I1008 14:23:43.941434 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:43Z","lastTransitionTime":"2025-10-08T14:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.043027 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.043060 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.043103 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.043143 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.043155 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:44Z","lastTransitionTime":"2025-10-08T14:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.145518 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.145559 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.145569 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.145585 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.145596 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:44Z","lastTransitionTime":"2025-10-08T14:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.248339 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.248384 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.248393 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.248408 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.248418 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:44Z","lastTransitionTime":"2025-10-08T14:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.350518 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.350809 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.350882 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.350953 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.351057 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:44Z","lastTransitionTime":"2025-10-08T14:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.453259 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.453302 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.453313 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.453328 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.453339 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:44Z","lastTransitionTime":"2025-10-08T14:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.465597 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:44 crc kubenswrapper[4624]: E1008 14:23:44.465765 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.555736 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.556259 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.556336 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.556466 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.556583 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:44Z","lastTransitionTime":"2025-10-08T14:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.658751 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.659032 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.659114 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.659188 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.659260 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:44Z","lastTransitionTime":"2025-10-08T14:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.761977 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.762252 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.762332 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.762402 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.762468 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:44Z","lastTransitionTime":"2025-10-08T14:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.864567 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.864606 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.864616 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.864656 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.864668 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:44Z","lastTransitionTime":"2025-10-08T14:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.967414 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.967485 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.967502 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.967529 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.967551 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:44Z","lastTransitionTime":"2025-10-08T14:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.987486 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 14:23:44 crc kubenswrapper[4624]: I1008 14:23:44.999891 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5306007c-dc93-41eb-8623-19adb6234d92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebd1c917e1220c97333e5377ef7dcee55acdf0a9db894eb0ed5dc5705075ceff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dc0ebdb4e16b2f9422955ffcb905fb339b1693769f313282484c51cbbbf4f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-10-08T14:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7mdh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:44Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.012011 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.024426 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.037338 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.055013 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f1
39266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"message\\\":\\\": failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z]\\\\nI1008 14:23:28.455785 6002 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1008 14:23:28.455785 6002 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 14:23:28.455788 6002 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1008 14:23:28.455729 6002 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-zfrv8 after 0 failed attempt(s)\\\\nI1008 14:23:28.455791 6002 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1008 14:23:28.455793 6002 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 14:23:28.455795 6002 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-diagnostics/network-check-targ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.067596 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.070238 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.070299 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.070313 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.070332 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.070351 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:45Z","lastTransitionTime":"2025-10-08T14:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.083799 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrmz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8abf38af-8df3-49f9-9817-b4740d2a8b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrmz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.095916 4624 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.109955 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.121939 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.134767 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630954dc6b9710e09d5ba88bf292a65247c48ce5109d02f7dfb3c2f268b0cb24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.145287 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.158178 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.169131 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.171858 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.171891 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.171903 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.171919 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.171929 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:45Z","lastTransitionTime":"2025-10-08T14:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.179375 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.191203 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.274610 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.274669 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.274679 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.274692 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.274701 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:45Z","lastTransitionTime":"2025-10-08T14:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.377404 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.377443 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.377453 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.377467 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.377477 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:45Z","lastTransitionTime":"2025-10-08T14:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.465054 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:45 crc kubenswrapper[4624]: E1008 14:23:45.465189 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.465060 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:45 crc kubenswrapper[4624]: E1008 14:23:45.465545 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.465866 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:45 crc kubenswrapper[4624]: E1008 14:23:45.466042 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.477967 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630954dc6b9710e09d5ba88bf292a65247c48ce5109d02f7dfb3c2f268b0cb24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.479190 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.479216 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.479225 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.479241 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.479250 4624 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:45Z","lastTransitionTime":"2025-10-08T14:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.488017 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.499620 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverri
de-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.503185 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.511741 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.512319 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.522266 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.530121 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.545678 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"message\\\":\\\": failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z]\\\\nI1008 14:23:28.455785 6002 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1008 14:23:28.455785 6002 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 14:23:28.455788 6002 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1008 14:23:28.455729 6002 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-zfrv8 after 0 failed attempt(s)\\\\nI1008 14:23:28.455791 6002 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1008 14:23:28.455793 6002 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 14:23:28.455795 6002 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-targ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.558197 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.569626 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5306007c-dc93-41eb-8623-19adb6234d92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebd1c917e1220c97333e5377ef7dcee55acdf0a9db894eb0ed5dc5705075ceff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dc0ebdb4e16b2f9422955ffcb905fb339b1693769f313282484c51cbbbf4f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7mdh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.581175 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.581219 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.581230 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.581246 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.581258 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:45Z","lastTransitionTime":"2025-10-08T14:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.582337 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.593983 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.604386 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.615129 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.625502 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.635349 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrmz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8abf38af-8df3-49f9-9817-b4740d2a8b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrmz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.644811 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.656176 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630954dc6b9710e09d5ba88bf292a65247c48ce5109d02f7dfb3c2f268b0cb24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.666429 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.679364 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.683419 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.683448 4624 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.683456 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.683470 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.683479 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:45Z","lastTransitionTime":"2025-10-08T14:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.691080 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.701776 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.715154 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.737670 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"message\\\":\\\": failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z]\\\\nI1008 14:23:28.455785 6002 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1008 14:23:28.455785 6002 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 14:23:28.455788 6002 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1008 14:23:28.455729 6002 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-zfrv8 after 0 failed attempt(s)\\\\nI1008 14:23:28.455791 6002 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1008 14:23:28.455793 6002 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 14:23:28.455795 6002 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-targ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.755888 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.766091 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5306007c-dc93-41eb-8623-19adb6234d92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebd1c917e1220c97333e5377ef7dcee55acdf0a9db894eb0ed5dc5705075ceff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dc0ebdb4e16b2f9422955ffcb905fb339b1693769f313282484c51cbbbf4f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7mdh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.776595 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ov
nkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.785870 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.785900 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.785908 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.785922 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.785930 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:45Z","lastTransitionTime":"2025-10-08T14:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.787713 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.798565 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.810357 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.822512 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.832927 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrmz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8abf38af-8df3-49f9-9817-b4740d2a8b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrmz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.846720 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a8e0a93-2abf-4287-b5b4-14d480ab7808\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://920d41c8a17e549dbc9e474897693799091e5686977403a61bfef3b5373963ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66582c16b1a735d597ac77df3780cd199953116b97835aed248c275ff3eea23c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2ce3e7ea0415656d449e2f2b4677e2ff8654984ff2417e29fd6a240938eefc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.859008 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:45Z is after 2025-08-24T17:21:41Z" Oct 08 
14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.888950 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.889008 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.889020 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.889036 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.889047 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:45Z","lastTransitionTime":"2025-10-08T14:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.991689 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.991724 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.991736 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.991775 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:45 crc kubenswrapper[4624]: I1008 14:23:45.991788 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:45Z","lastTransitionTime":"2025-10-08T14:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.058271 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs\") pod \"network-metrics-daemon-qrmz6\" (UID: \"8abf38af-8df3-49f9-9817-b4740d2a8b4a\") " pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:46 crc kubenswrapper[4624]: E1008 14:23:46.058447 4624 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 14:23:46 crc kubenswrapper[4624]: E1008 14:23:46.058555 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs podName:8abf38af-8df3-49f9-9817-b4740d2a8b4a nodeName:}" failed. No retries permitted until 2025-10-08 14:24:02.058537261 +0000 UTC m=+67.209472338 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs") pod "network-metrics-daemon-qrmz6" (UID: "8abf38af-8df3-49f9-9817-b4740d2a8b4a") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.093688 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.093727 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.093738 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.093753 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.093764 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:46Z","lastTransitionTime":"2025-10-08T14:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.195989 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.196016 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.196024 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.196036 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.196045 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:46Z","lastTransitionTime":"2025-10-08T14:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.298189 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.298229 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.298238 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.298251 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.298261 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:46Z","lastTransitionTime":"2025-10-08T14:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.399992 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.400055 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.400076 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.400092 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.400104 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:46Z","lastTransitionTime":"2025-10-08T14:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.464997 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:46 crc kubenswrapper[4624]: E1008 14:23:46.465151 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.502071 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.502103 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.502111 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.502124 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.502136 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:46Z","lastTransitionTime":"2025-10-08T14:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.604065 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.604126 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.604137 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.604154 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.604164 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:46Z","lastTransitionTime":"2025-10-08T14:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.706371 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.706414 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.706424 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.706440 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.706453 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:46Z","lastTransitionTime":"2025-10-08T14:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.808934 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.808987 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.808998 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.809015 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.809026 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:46Z","lastTransitionTime":"2025-10-08T14:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.912357 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.912401 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.912411 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.912426 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:46 crc kubenswrapper[4624]: I1008 14:23:46.912436 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:46Z","lastTransitionTime":"2025-10-08T14:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.014553 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.014584 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.014596 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.014610 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.014619 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:47Z","lastTransitionTime":"2025-10-08T14:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.116949 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.116979 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.116988 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.117004 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.117015 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:47Z","lastTransitionTime":"2025-10-08T14:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.167815 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.168003 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.168044 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:47 crc kubenswrapper[4624]: E1008 14:23:47.168130 4624 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 14:23:47 crc kubenswrapper[4624]: E1008 14:23:47.168170 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:24:19.168124963 +0000 UTC m=+84.319060050 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:23:47 crc kubenswrapper[4624]: E1008 14:23:47.168250 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 14:24:19.168228206 +0000 UTC m=+84.319163453 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 14:23:47 crc kubenswrapper[4624]: E1008 14:23:47.168255 4624 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 14:23:47 crc kubenswrapper[4624]: E1008 14:23:47.168329 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 14:24:19.168319338 +0000 UTC m=+84.319254425 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.219954 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.220009 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.220020 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.220043 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.220055 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:47Z","lastTransitionTime":"2025-10-08T14:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.269045 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.269146 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:47 crc kubenswrapper[4624]: E1008 14:23:47.269345 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 14:23:47 crc kubenswrapper[4624]: E1008 14:23:47.269376 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 14:23:47 crc kubenswrapper[4624]: E1008 14:23:47.269396 4624 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:47 crc kubenswrapper[4624]: E1008 14:23:47.269345 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 14:23:47 crc kubenswrapper[4624]: E1008 14:23:47.269478 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 14:23:47 crc kubenswrapper[4624]: E1008 14:23:47.269501 4624 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:47 crc kubenswrapper[4624]: E1008 14:23:47.269481 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 14:24:19.269453262 +0000 UTC m=+84.420388359 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:47 crc kubenswrapper[4624]: E1008 14:23:47.269574 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 14:24:19.269553634 +0000 UTC m=+84.420488721 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.322216 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.322284 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.322298 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.322324 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.322337 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:47Z","lastTransitionTime":"2025-10-08T14:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.424688 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.424721 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.424729 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.424760 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.424769 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:47Z","lastTransitionTime":"2025-10-08T14:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.465626 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.465660 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:47 crc kubenswrapper[4624]: E1008 14:23:47.465767 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.465805 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:47 crc kubenswrapper[4624]: E1008 14:23:47.465907 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:23:47 crc kubenswrapper[4624]: E1008 14:23:47.465977 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.526746 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.526777 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.526784 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.526797 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.526806 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:47Z","lastTransitionTime":"2025-10-08T14:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.629118 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.629161 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.629172 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.629188 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.629201 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:47Z","lastTransitionTime":"2025-10-08T14:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.731210 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.731240 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.731248 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.731278 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.731290 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:47Z","lastTransitionTime":"2025-10-08T14:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.832880 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.832911 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.832921 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.832950 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.832960 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:47Z","lastTransitionTime":"2025-10-08T14:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.935567 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.935991 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.936004 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.936019 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:47 crc kubenswrapper[4624]: I1008 14:23:47.936031 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:47Z","lastTransitionTime":"2025-10-08T14:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.038864 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.038897 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.038906 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.038919 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.038928 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:48Z","lastTransitionTime":"2025-10-08T14:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.141152 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.141189 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.141198 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.141213 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.141226 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:48Z","lastTransitionTime":"2025-10-08T14:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.243386 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.243423 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.243435 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.243449 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.243458 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:48Z","lastTransitionTime":"2025-10-08T14:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.346461 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.346495 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.346503 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.346516 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.346524 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:48Z","lastTransitionTime":"2025-10-08T14:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.448855 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.448916 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.448932 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.448951 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.448962 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:48Z","lastTransitionTime":"2025-10-08T14:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.465284 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:48 crc kubenswrapper[4624]: E1008 14:23:48.465439 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.466134 4624 scope.go:117] "RemoveContainer" containerID="1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.550738 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.550764 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.550771 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.550785 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.550793 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:48Z","lastTransitionTime":"2025-10-08T14:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.653054 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.653090 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.653099 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.653112 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.653120 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:48Z","lastTransitionTime":"2025-10-08T14:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.755714 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.755742 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.755749 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.755762 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.755770 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:48Z","lastTransitionTime":"2025-10-08T14:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.798178 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbsj6_aad1abb7-073f-4157-b39f-ddc71fbab31d/ovnkube-controller/1.log" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.800258 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerStarted","Data":"d57e9cbdb7f49099703c1b09ccbfdeae79efefcbadf4a5ddd5f5d3461bad4e1d"} Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.801099 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.815352 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:48Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.834285 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:48Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.857894 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.857931 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.857943 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.857958 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.857969 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:48Z","lastTransitionTime":"2025-10-08T14:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.858127 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:48Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.885424 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57e9cbdb7f49099703c1b09ccbfdeae79efefcbadf4a5ddd5f5d3461bad4e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"message\\\":\\\": failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z]\\\\nI1008 14:23:28.455785 6002 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1008 14:23:28.455785 6002 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 14:23:28.455788 6002 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1008 14:23:28.455729 6002 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-zfrv8 after 0 failed attempt(s)\\\\nI1008 14:23:28.455791 6002 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1008 14:23:28.455793 6002 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 
14:23:28.455795 6002 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-targ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"
192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:48Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.895281 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:48Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.909520 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5306007c-dc93-41eb-8623-19adb6234d92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebd1c917e1220c97333e5377ef7dcee55acdf0a9db894eb0ed5dc5705075ceff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dc0ebdb4e16b2f9422955ffcb905fb339b1693769f313282484c51cbbbf4f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7mdh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:48Z is after 2025-08-24T17:21:41Z" Oct 08 
14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.927770 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a8e0a93-2abf-4287-b5b4-14d480ab7808\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://920d41c8a17e549dbc9e474897693799091e5686977403a61bfef3b5373963ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66582c16b1a735d597ac77df3780cd199953116b97835aed248c275ff3eea23c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2ce3e7ea0415656d449e2f2b4677e2ff8654984ff2417e29fd6a240938eefc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:48Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.938968 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:48Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.949890 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:48Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.960293 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.960344 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.960352 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.960364 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.960372 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:48Z","lastTransitionTime":"2025-10-08T14:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.962349 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:48Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.972752 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrmz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8abf38af-8df3-49f9-9817-b4740d2a8b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrmz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:48Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.989681 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630954dc6b9710e09d5ba88bf292a65247c48ce5109d02f7dfb3c2f268b0cb24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:48Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:48 crc kubenswrapper[4624]: I1008 14:23:48.999392 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:48Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.011717 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:49Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.028307 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:49Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.039298 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:49Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.051093 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:49Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.062682 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.062715 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.062724 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.062738 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.062748 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:49Z","lastTransitionTime":"2025-10-08T14:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.164987 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.165019 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.165031 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.165047 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.165059 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:49Z","lastTransitionTime":"2025-10-08T14:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.267951 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.267993 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.268003 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.268019 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.268029 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:49Z","lastTransitionTime":"2025-10-08T14:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.370216 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.370260 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.370271 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.370286 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.370296 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:49Z","lastTransitionTime":"2025-10-08T14:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.465756 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.465844 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.465892 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:49 crc kubenswrapper[4624]: E1008 14:23:49.465939 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:23:49 crc kubenswrapper[4624]: E1008 14:23:49.466020 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:23:49 crc kubenswrapper[4624]: E1008 14:23:49.466105 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.472023 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.472049 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.472058 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.472071 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.472081 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:49Z","lastTransitionTime":"2025-10-08T14:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.574092 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.574127 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.574138 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.574155 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.574165 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:49Z","lastTransitionTime":"2025-10-08T14:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.676542 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.676571 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.676580 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.676593 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.676601 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:49Z","lastTransitionTime":"2025-10-08T14:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.779781 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.779818 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.779827 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.779854 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.779865 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:49Z","lastTransitionTime":"2025-10-08T14:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.804702 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbsj6_aad1abb7-073f-4157-b39f-ddc71fbab31d/ovnkube-controller/2.log" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.805558 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbsj6_aad1abb7-073f-4157-b39f-ddc71fbab31d/ovnkube-controller/1.log" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.808531 4624 generic.go:334] "Generic (PLEG): container finished" podID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerID="d57e9cbdb7f49099703c1b09ccbfdeae79efefcbadf4a5ddd5f5d3461bad4e1d" exitCode=1 Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.808585 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerDied","Data":"d57e9cbdb7f49099703c1b09ccbfdeae79efefcbadf4a5ddd5f5d3461bad4e1d"} Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.808819 4624 scope.go:117] "RemoveContainer" containerID="1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.809349 4624 scope.go:117] "RemoveContainer" containerID="d57e9cbdb7f49099703c1b09ccbfdeae79efefcbadf4a5ddd5f5d3461bad4e1d" Oct 08 14:23:49 crc kubenswrapper[4624]: E1008 14:23:49.809506 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.828566 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:49Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.840440 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:49Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.854310 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:49Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.868481 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:49Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.882877 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.882913 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.882924 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.882939 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.882952 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:49Z","lastTransitionTime":"2025-10-08T14:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.889550 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57e9cbdb7f49099703c1b09ccbfdeae79efefcbadf4a5ddd5f5d3461bad4e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1da11db27a94a8ccd7bde62fe68c5c2aa5c92bba64d2b9f67449976a037d893e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"message\\\":\\\": failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:28Z is after 2025-08-24T17:21:41Z]\\\\nI1008 14:23:28.455785 6002 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1008 14:23:28.455785 6002 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 14:23:28.455788 6002 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1008 14:23:28.455729 6002 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-zfrv8 after 0 failed attempt(s)\\\\nI1008 14:23:28.455791 6002 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1008 14:23:28.455793 6002 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1008 14:23:28.455795 6002 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-diagnostics/network-check-targ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57e9cbdb7f49099703c1b09ccbfdeae79efefcbadf4a5ddd5f5d3461bad4e1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:49Z\\\",\\\"message\\\":\\\"orkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1008 14:23:49.136225 6274 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.136588 6274 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.136858 6274 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.137282 6274 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 14:23:49.137333 6274 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 14:23:49.137399 6274 factory.go:656] Stopping watch factory\\\\nI1008 14:23:49.137419 6274 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 14:23:49.137429 6274 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 14:23:49.168490 6274 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1008 14:23:49.168518 6274 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1008 14:23:49.168570 6274 ovnkube.go:599] Stopped ovnkube\\\\nI1008 14:23:49.168596 6274 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 14:23:49.168689 6274 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:49Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.900455 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:49Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.911738 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5306007c-dc93-41eb-8623-19adb6234d92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebd1c917e1220c97333e5377ef7dcee55acdf0a9db894eb0ed5dc5705075ceff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dc0ebdb4e16b2f9422955ffcb905fb339b1693769f313282484c51cbbbf4f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7mdh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:49Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.923048 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:49Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.933533 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:49Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.943470 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:49Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.955668 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:49Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.966047 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:49Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.978310 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrmz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8abf38af-8df3-49f9-9817-b4740d2a8b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrmz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:49Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.985138 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.985165 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.985173 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.985186 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.985194 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:49Z","lastTransitionTime":"2025-10-08T14:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:49 crc kubenswrapper[4624]: I1008 14:23:49.988915 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a8e0a93-2abf-4287-b5b4-14d480ab7808\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://920d41c8a17e549dbc9e474897693799091e5686977403a61bfef3b5373963ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66582c16b1a735d597ac77df3780cd199953116b97835aed248c275ff3eea23c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2ce3e7ea0415656d449e2f2b4677e2ff8654984ff2417e29fd6a240938eefc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:49Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.002899 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\
\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:50Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.014948 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\
"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630954dc6b9710e09d5ba88bf292a65247c48ce5109d02f7dfb3c2f268b0cb24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 
14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:50Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.023691 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:50Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.087712 4624 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.087740 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.087749 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.087761 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.087770 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:50Z","lastTransitionTime":"2025-10-08T14:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.189967 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.189995 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.190002 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.190015 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.190024 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:50Z","lastTransitionTime":"2025-10-08T14:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.292067 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.292292 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.292384 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.292511 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.292608 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:50Z","lastTransitionTime":"2025-10-08T14:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.395318 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.395616 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.395735 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.395838 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.395926 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:50Z","lastTransitionTime":"2025-10-08T14:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.465158 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:50 crc kubenswrapper[4624]: E1008 14:23:50.465466 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.498020 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.498059 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.498072 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.498087 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.498098 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:50Z","lastTransitionTime":"2025-10-08T14:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.600444 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.600485 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.600496 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.600515 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.600525 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:50Z","lastTransitionTime":"2025-10-08T14:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.701787 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.701812 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.701820 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.701833 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.701843 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:50Z","lastTransitionTime":"2025-10-08T14:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.803902 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.803929 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.803938 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.803950 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.803958 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:50Z","lastTransitionTime":"2025-10-08T14:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.812818 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbsj6_aad1abb7-073f-4157-b39f-ddc71fbab31d/ovnkube-controller/2.log" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.815494 4624 scope.go:117] "RemoveContainer" containerID="d57e9cbdb7f49099703c1b09ccbfdeae79efefcbadf4a5ddd5f5d3461bad4e1d" Oct 08 14:23:50 crc kubenswrapper[4624]: E1008 14:23:50.815625 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.828505 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\
"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:50Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.845991 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57e9cbdb7f49099703c1b09ccbfdeae79efefcb
adf4a5ddd5f5d3461bad4e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57e9cbdb7f49099703c1b09ccbfdeae79efefcbadf4a5ddd5f5d3461bad4e1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:49Z\\\",\\\"message\\\":\\\"orkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1008 14:23:49.136225 6274 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.136588 6274 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.136858 6274 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.137282 6274 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 14:23:49.137333 6274 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 14:23:49.137399 6274 factory.go:656] Stopping watch factory\\\\nI1008 14:23:49.137419 6274 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 14:23:49.137429 6274 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 14:23:49.168490 6274 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1008 14:23:49.168518 6274 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1008 14:23:49.168570 6274 ovnkube.go:599] Stopped ovnkube\\\\nI1008 14:23:49.168596 6274 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 14:23:49.168689 6274 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:50Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.854573 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:50Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.863240 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5306007c-dc93-41eb-8623-19adb6234d92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebd1c917e1220c97333e5377ef7dcee55acdf0a9db894eb0ed5dc5705075ceff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dc0ebdb4e16b2f9422955ffcb905fb339b1693769f313282484c51cbbbf4f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7mdh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:50Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.876101 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ov
nkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:50Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.885674 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:50Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.894856 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:50Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.905807 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.906002 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.906108 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.906208 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.906303 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:50Z","lastTransitionTime":"2025-10-08T14:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.906296 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:50Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.916574 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:50Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.924946 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrmz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8abf38af-8df3-49f9-9817-b4740d2a8b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrmz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:50Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.936707 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a8e0a93-2abf-4287-b5b4-14d480ab7808\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://920d41c8a17e549dbc9e474897693799091e5686977403a61bfef3b5373963ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66582c16b1a735d597ac77df3780cd199953116b97835aed248c275ff3eea23c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2ce3e7ea0415656d449e2f2b4677e2ff8654984ff2417e29fd6a240938eefc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:50Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.956496 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:50Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.969245 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630954dc6b9710e09d5ba88bf292a65247c48ce5109d02f7dfb3c2f268b0cb24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:50Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.978711 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:50Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.987127 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:50Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:50 crc kubenswrapper[4624]: I1008 14:23:50.998416 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:50Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.008898 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.008933 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.008943 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.008955 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.008964 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:51Z","lastTransitionTime":"2025-10-08T14:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.010331 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:51Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.111127 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.111166 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.111182 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.111204 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.111216 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:51Z","lastTransitionTime":"2025-10-08T14:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.213765 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.213810 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.213822 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.213837 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.217285 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:51Z","lastTransitionTime":"2025-10-08T14:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.319373 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.319402 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.319412 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.319429 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.319440 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:51Z","lastTransitionTime":"2025-10-08T14:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.421449 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.421478 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.421486 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.421499 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.421507 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:51Z","lastTransitionTime":"2025-10-08T14:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.467708 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:51 crc kubenswrapper[4624]: E1008 14:23:51.467818 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.468006 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:51 crc kubenswrapper[4624]: E1008 14:23:51.468066 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.468200 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:51 crc kubenswrapper[4624]: E1008 14:23:51.468268 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.524182 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.524483 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.524588 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.524710 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.524815 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:51Z","lastTransitionTime":"2025-10-08T14:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.627331 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.627582 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.627678 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.627793 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.627869 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:51Z","lastTransitionTime":"2025-10-08T14:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.730725 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.730761 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.730772 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.730786 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.730795 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:51Z","lastTransitionTime":"2025-10-08T14:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.833179 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.833210 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.833218 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.833231 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.833241 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:51Z","lastTransitionTime":"2025-10-08T14:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.935807 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.935844 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.935854 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.935868 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:51 crc kubenswrapper[4624]: I1008 14:23:51.935879 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:51Z","lastTransitionTime":"2025-10-08T14:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.038128 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.039008 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.039096 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.039201 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.039296 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:52Z","lastTransitionTime":"2025-10-08T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.142278 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.142546 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.142707 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.142786 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.142859 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:52Z","lastTransitionTime":"2025-10-08T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.245308 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.245343 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.245352 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.245367 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.245380 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:52Z","lastTransitionTime":"2025-10-08T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.347550 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.347591 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.347602 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.347617 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.347627 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:52Z","lastTransitionTime":"2025-10-08T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.449847 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.449876 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.449884 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.449897 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.449906 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:52Z","lastTransitionTime":"2025-10-08T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.464709 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:52 crc kubenswrapper[4624]: E1008 14:23:52.464869 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.552385 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.552425 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.552435 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.552451 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.552463 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:52Z","lastTransitionTime":"2025-10-08T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.653447 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.653489 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.653502 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.653519 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.653533 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:52Z","lastTransitionTime":"2025-10-08T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
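Has your network provider started?"}

The repeated sandbox failures above (network-check-source, network-check-target, networking-console-plugin, network-metrics-daemon) share one root cause with the node's NotReady condition: the container runtime finds no CNI network configuration under /etc/kubernetes/cni/net.d/, the directory the network provider is expected to populate once it starts. A small Go sketch of the equivalent check, assuming it runs on the node; the path comes from the log, and the extension filter reflects the common CNI convention of *.conf, *.conflist and *.json files.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}
	found := false
	for _, e := range entries {
		// CNI config files are conventionally *.conf, *.conflist or *.json.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config present:", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file found; the network provider has not written its config yet")
	}
}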
Oct 08 14:23:52 crc kubenswrapper[4624]: E1008 14:23:52.664654 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:52Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.668165 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.668196 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
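event="NodeHasNoDiskPressure"

The node-status patch above fails against the same expired certificate as the pod-status patch at 14:23:51.010, only via the node.network-node-identity.openshift.io webhook instead of the pod one, so the kubelet logs "will retry" and makes several back-to-back attempts (14:23:52.664, then .679, .692 and .706 below). The condition={...} payload that setters.go prints on every iteration is plain JSON; the following hand-rolled decode uses only the field names visible in the log, not the actual k8s.io/api types.

package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors the fields of the condition={...} payload in the log.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:52Z","lastTransitionTime":"2025-10-08T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}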
event="NodeHasNoDiskPressure" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.668205 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.668217 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.668228 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:52Z","lastTransitionTime":"2025-10-08T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:52 crc kubenswrapper[4624]: E1008 14:23:52.679415 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:52Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.682519 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.682551 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.682560 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.682574 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.682583 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:52Z","lastTransitionTime":"2025-10-08T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:52 crc kubenswrapper[4624]: E1008 14:23:52.692380 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:52Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.695578 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.695624 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.695649 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.695667 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.695680 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:52Z","lastTransitionTime":"2025-10-08T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:52 crc kubenswrapper[4624]: E1008 14:23:52.706420 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:52Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.709701 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.709735 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
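
Every rejected patch attempt above fails for the same root cause: the serving certificate of the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-10-08. A quick way to confirm what that endpoint is presenting is to pull the certificate directly. The following stand-alone Go sketch is a hypothetical diagnostic, not part of OpenShift tooling; the endpoint is copied from the log line above:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// InsecureSkipVerify skips chain and expiry verification, which is exactly
	// what lets us read an expired certificate instead of failing the handshake.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook endpoint: %v", err)
	}
	defer conn.Close()

	// PeerCertificates[0] is the leaf (serving) certificate.
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate is expired, matching the kubelet error above")
	}
}

If notAfter printed by this sketch matches 2025-08-24T17:21:41Z, the webhook rejection is purely a certificate-lifetime problem, not a network one, and the node status patches will keep failing until the cluster's internal certificates are rotated.
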
event="NodeHasNoDiskPressure" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.709746 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.709761 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.709771 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:52Z","lastTransitionTime":"2025-10-08T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:52 crc kubenswrapper[4624]: E1008 14:23:52.721599 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:52Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:52 crc kubenswrapper[4624]: E1008 14:23:52.721764 4624 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.723323 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.723364 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.723374 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.723393 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.723407 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:52Z","lastTransitionTime":"2025-10-08T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.825160 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.825197 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.825206 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.825220 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.825229 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:52Z","lastTransitionTime":"2025-10-08T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.927421 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.927451 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.927459 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.927472 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:52 crc kubenswrapper[4624]: I1008 14:23:52.927481 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:52Z","lastTransitionTime":"2025-10-08T14:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.029087 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.029141 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.029189 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.029204 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.029213 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:53Z","lastTransitionTime":"2025-10-08T14:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.131209 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.131276 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.131289 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.131307 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.131318 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:53Z","lastTransitionTime":"2025-10-08T14:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.233622 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.233672 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.233683 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.233699 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.233709 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:53Z","lastTransitionTime":"2025-10-08T14:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.336234 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.336273 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.336285 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.336301 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.336312 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:53Z","lastTransitionTime":"2025-10-08T14:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.438959 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.439009 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.439019 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.439035 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.439047 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:53Z","lastTransitionTime":"2025-10-08T14:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.465573 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.465805 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.465666 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:53 crc kubenswrapper[4624]: E1008 14:23:53.465808 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:23:53 crc kubenswrapper[4624]: E1008 14:23:53.465975 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:23:53 crc kubenswrapper[4624]: E1008 14:23:53.466066 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.541175 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.541215 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.541225 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.541238 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.541247 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:53Z","lastTransitionTime":"2025-10-08T14:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.643468 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.643521 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.643530 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.643544 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.643557 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:53Z","lastTransitionTime":"2025-10-08T14:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.745676 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.745707 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.745716 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.745729 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.745741 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:53Z","lastTransitionTime":"2025-10-08T14:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.847736 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.847769 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.847778 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.847793 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.847805 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:53Z","lastTransitionTime":"2025-10-08T14:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.950067 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.950094 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.950102 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.950115 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:53 crc kubenswrapper[4624]: I1008 14:23:53.950125 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:53Z","lastTransitionTime":"2025-10-08T14:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.052330 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.052354 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.052361 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.052397 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.052407 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:54Z","lastTransitionTime":"2025-10-08T14:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.154142 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.154167 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.154177 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.154190 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.154198 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:54Z","lastTransitionTime":"2025-10-08T14:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.256506 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.256567 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.256578 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.256611 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.256621 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:54Z","lastTransitionTime":"2025-10-08T14:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.358829 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.358869 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.358881 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.358896 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.358906 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:54Z","lastTransitionTime":"2025-10-08T14:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.460902 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.460968 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.460979 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.460994 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.461005 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:54Z","lastTransitionTime":"2025-10-08T14:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.464869 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:54 crc kubenswrapper[4624]: E1008 14:23:54.465030 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.562771 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.562802 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.562809 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.562824 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.562831 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:54Z","lastTransitionTime":"2025-10-08T14:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.664755 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.664788 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.664797 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.664810 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.664818 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:54Z","lastTransitionTime":"2025-10-08T14:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.766661 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.766689 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.766699 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.766715 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.766724 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:54Z","lastTransitionTime":"2025-10-08T14:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.869608 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.869656 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.869665 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.869678 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.869690 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:54Z","lastTransitionTime":"2025-10-08T14:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.970970 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.971006 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.971016 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.971031 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:23:54 crc kubenswrapper[4624]: I1008 14:23:54.971041 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:54Z","lastTransitionTime":"2025-10-08T14:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.073460 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.073495 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.073503 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.073515 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.073523 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:55Z","lastTransitionTime":"2025-10-08T14:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.175511 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.175551 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.175562 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.175578 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.175588 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:55Z","lastTransitionTime":"2025-10-08T14:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.277452 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.277481 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.277489 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.277501 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.277509 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:55Z","lastTransitionTime":"2025-10-08T14:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.379029 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.379057 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.379065 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.379078 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.379085 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:55Z","lastTransitionTime":"2025-10-08T14:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.465575 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 14:23:55 crc kubenswrapper[4624]: E1008 14:23:55.465715 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.465793 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 14:23:55 crc kubenswrapper[4624]: E1008 14:23:55.465866 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.465961 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 14:23:55 crc kubenswrapper[4624]: E1008 14:23:55.466362 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.476519 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:55Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.481526 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.481576 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.481610 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.481627 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.481651 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:55Z","lastTransitionTime":"2025-10-08T14:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.486269 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:55Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.494029 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrmz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8abf38af-8df3-49f9-9817-b4740d2a8b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrmz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:55Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.504152 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a8e0a93-2abf-4287-b5b4-14d480ab7808\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://920d41c8a17e549dbc9e474897693799091e5686977403a61bfef3b5373963ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66582c16b1a735d597ac77df3780cd199953116b97835aed248c275ff3eea23c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2ce3e7ea0415656d449e2f2b4677e2ff8654984ff2417e29fd6a240938eefc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:55Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.513278 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:55Z is after 2025-08-24T17:21:41Z" Oct 08 
14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.524686 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630954dc6b9710e09d5ba88bf292a65247c48ce5109d02f7dfb3c2f268b0cb24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:55Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.535201 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:55Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.546996 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:55Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.557256 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:55Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.572219 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:55Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.582241 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:55Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.583876 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.583897 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.583905 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.583918 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.583927 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:55Z","lastTransitionTime":"2025-10-08T14:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.598407 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57e9cbdb7f49099703c1b09ccbfdeae79efefcb
adf4a5ddd5f5d3461bad4e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57e9cbdb7f49099703c1b09ccbfdeae79efefcbadf4a5ddd5f5d3461bad4e1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:49Z\\\",\\\"message\\\":\\\"orkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1008 14:23:49.136225 6274 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.136588 6274 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.136858 6274 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.137282 6274 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 14:23:49.137333 6274 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 14:23:49.137399 6274 factory.go:656] Stopping watch factory\\\\nI1008 14:23:49.137419 6274 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 14:23:49.137429 6274 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 14:23:49.168490 6274 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1008 14:23:49.168518 6274 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1008 14:23:49.168570 6274 ovnkube.go:599] Stopped ovnkube\\\\nI1008 14:23:49.168596 6274 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 14:23:49.168689 6274 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:55Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.607122 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:55Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.616768 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5306007c-dc93-41eb-8623-19adb6234d92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebd1c917e1220c97333e5377ef7dcee55acdf0a9db894eb0ed5dc5705075ceff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dc0ebdb4e16b2f9422955ffcb905fb339b1693769f313282484c51cbbbf4f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7mdh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:55Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.626551 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ov
nkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:55Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.637289 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:55Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.649505 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:23:55Z is after 2025-08-24T17:21:41Z" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.686156 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.686195 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.686203 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.686218 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.686227 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:55Z","lastTransitionTime":"2025-10-08T14:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.787869 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.787905 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.787915 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.787930 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.787945 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:55Z","lastTransitionTime":"2025-10-08T14:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.889797 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.889834 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.889844 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.889859 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.889869 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:55Z","lastTransitionTime":"2025-10-08T14:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.992020 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.992056 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.992066 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.992084 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:55 crc kubenswrapper[4624]: I1008 14:23:55.992094 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:55Z","lastTransitionTime":"2025-10-08T14:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.093834 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.093865 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.093874 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.093888 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.093896 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:56Z","lastTransitionTime":"2025-10-08T14:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.196084 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.196118 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.196129 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.196147 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.196158 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:56Z","lastTransitionTime":"2025-10-08T14:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.298722 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.298985 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.299083 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.299202 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.299371 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:56Z","lastTransitionTime":"2025-10-08T14:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.403609 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.403672 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.403683 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.403699 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.403712 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:56Z","lastTransitionTime":"2025-10-08T14:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.465288 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:56 crc kubenswrapper[4624]: E1008 14:23:56.465431 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.506992 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.507032 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.507041 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.507055 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.507065 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:56Z","lastTransitionTime":"2025-10-08T14:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.609267 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.609307 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.609318 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.609333 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.609343 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:56Z","lastTransitionTime":"2025-10-08T14:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.712208 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.712274 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.712290 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.712316 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.712334 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:56Z","lastTransitionTime":"2025-10-08T14:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.815241 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.815283 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.815294 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.815308 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.815319 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:56Z","lastTransitionTime":"2025-10-08T14:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.918479 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.918535 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.918549 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.918581 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:56 crc kubenswrapper[4624]: I1008 14:23:56.918608 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:56Z","lastTransitionTime":"2025-10-08T14:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.023352 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.023405 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.023427 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.023448 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.023461 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:57Z","lastTransitionTime":"2025-10-08T14:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.126779 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.126833 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.126847 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.126868 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.126882 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:57Z","lastTransitionTime":"2025-10-08T14:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.229942 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.229984 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.229994 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.230007 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.230016 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:57Z","lastTransitionTime":"2025-10-08T14:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.331601 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.331655 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.331668 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.331683 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.331693 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:57Z","lastTransitionTime":"2025-10-08T14:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.434052 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.434085 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.434094 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.434108 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.434117 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:57Z","lastTransitionTime":"2025-10-08T14:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.464724 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:57 crc kubenswrapper[4624]: E1008 14:23:57.464859 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.464742 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.464896 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:57 crc kubenswrapper[4624]: E1008 14:23:57.464944 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:23:57 crc kubenswrapper[4624]: E1008 14:23:57.465030 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.536701 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.536725 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.536733 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.536745 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.536753 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:57Z","lastTransitionTime":"2025-10-08T14:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.638699 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.638763 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.638785 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.638813 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.638834 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:57Z","lastTransitionTime":"2025-10-08T14:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.741213 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.741243 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.741251 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.741264 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.741272 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:57Z","lastTransitionTime":"2025-10-08T14:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.843094 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.843126 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.843135 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.843150 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.843160 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:57Z","lastTransitionTime":"2025-10-08T14:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.946226 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.946280 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.946295 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.946322 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:57 crc kubenswrapper[4624]: I1008 14:23:57.946340 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:57Z","lastTransitionTime":"2025-10-08T14:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.049765 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.049805 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.049816 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.049835 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.049848 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:58Z","lastTransitionTime":"2025-10-08T14:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.152054 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.152088 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.152096 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.152109 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.152118 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:58Z","lastTransitionTime":"2025-10-08T14:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.254721 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.254760 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.254772 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.254787 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.254797 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:58Z","lastTransitionTime":"2025-10-08T14:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.357580 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.357670 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.357682 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.357699 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.357710 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:58Z","lastTransitionTime":"2025-10-08T14:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.459938 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.459965 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.459974 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.459985 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.459993 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:58Z","lastTransitionTime":"2025-10-08T14:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.464594 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:23:58 crc kubenswrapper[4624]: E1008 14:23:58.464693 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.562607 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.562661 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.562670 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.562683 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.562691 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:58Z","lastTransitionTime":"2025-10-08T14:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.665328 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.665359 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.665368 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.665380 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.665388 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:58Z","lastTransitionTime":"2025-10-08T14:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.767205 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.767236 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.767272 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.767290 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.767301 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:58Z","lastTransitionTime":"2025-10-08T14:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.869176 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.869231 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.869242 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.869257 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.869268 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:58Z","lastTransitionTime":"2025-10-08T14:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.972093 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.972132 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.972141 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.972157 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:58 crc kubenswrapper[4624]: I1008 14:23:58.972166 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:58Z","lastTransitionTime":"2025-10-08T14:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.074082 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.074147 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.074157 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.074170 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.074179 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:59Z","lastTransitionTime":"2025-10-08T14:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.176583 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.176620 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.176653 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.176668 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.176677 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:59Z","lastTransitionTime":"2025-10-08T14:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.278292 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.278337 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.278348 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.278363 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.278375 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:59Z","lastTransitionTime":"2025-10-08T14:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.381302 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.381357 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.381371 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.381387 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.381398 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:59Z","lastTransitionTime":"2025-10-08T14:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.464862 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.464878 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.464937 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:23:59 crc kubenswrapper[4624]: E1008 14:23:59.465068 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:23:59 crc kubenswrapper[4624]: E1008 14:23:59.465353 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:23:59 crc kubenswrapper[4624]: E1008 14:23:59.465562 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.483655 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.483696 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.483709 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.483729 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.483741 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:59Z","lastTransitionTime":"2025-10-08T14:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.586770 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.586813 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.586824 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.586840 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.586854 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:59Z","lastTransitionTime":"2025-10-08T14:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.689110 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.689156 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.689168 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.689186 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.689198 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:59Z","lastTransitionTime":"2025-10-08T14:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.791691 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.791726 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.791736 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.791752 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.791763 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:59Z","lastTransitionTime":"2025-10-08T14:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.894287 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.894335 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.894348 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.894368 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.894378 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:59Z","lastTransitionTime":"2025-10-08T14:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.996354 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.996397 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.996409 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.996425 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:23:59 crc kubenswrapper[4624]: I1008 14:23:59.996436 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:23:59Z","lastTransitionTime":"2025-10-08T14:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.098539 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.098580 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.098589 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.098605 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.098623 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:00Z","lastTransitionTime":"2025-10-08T14:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.200557 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.200604 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.200619 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.200652 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.200666 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:00Z","lastTransitionTime":"2025-10-08T14:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.302814 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.302845 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.302854 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.302868 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.302878 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:00Z","lastTransitionTime":"2025-10-08T14:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.405357 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.405384 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.405393 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.405406 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.405417 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:00Z","lastTransitionTime":"2025-10-08T14:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.465135 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:24:00 crc kubenswrapper[4624]: E1008 14:24:00.465250 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.507899 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.507928 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.507936 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.507948 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.507982 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:00Z","lastTransitionTime":"2025-10-08T14:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.609907 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.609953 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.609968 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.609985 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.609997 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:00Z","lastTransitionTime":"2025-10-08T14:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.712513 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.712544 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.712552 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.712588 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.712598 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:00Z","lastTransitionTime":"2025-10-08T14:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.814791 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.814852 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.814863 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.814881 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.814891 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:00Z","lastTransitionTime":"2025-10-08T14:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.917193 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.917225 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.917236 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.917253 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:00 crc kubenswrapper[4624]: I1008 14:24:00.917266 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:00Z","lastTransitionTime":"2025-10-08T14:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.018899 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.018937 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.018946 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.018961 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.018971 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:01Z","lastTransitionTime":"2025-10-08T14:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.121265 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.121300 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.121309 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.121325 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.121334 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:01Z","lastTransitionTime":"2025-10-08T14:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.223437 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.223466 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.223473 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.223486 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.223495 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:01Z","lastTransitionTime":"2025-10-08T14:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.325759 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.325799 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.325809 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.325822 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.325832 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:01Z","lastTransitionTime":"2025-10-08T14:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.428262 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.428289 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.428298 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.428311 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.428320 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:01Z","lastTransitionTime":"2025-10-08T14:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.466803 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:24:01 crc kubenswrapper[4624]: E1008 14:24:01.466924 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.466805 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.466977 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:24:01 crc kubenswrapper[4624]: E1008 14:24:01.467022 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:24:01 crc kubenswrapper[4624]: E1008 14:24:01.467083 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.530676 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.530707 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.530717 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.530748 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.530758 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:01Z","lastTransitionTime":"2025-10-08T14:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.633532 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.633558 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.633568 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.633580 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.633590 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:01Z","lastTransitionTime":"2025-10-08T14:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.735445 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.735471 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.735479 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.735491 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.735499 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:01Z","lastTransitionTime":"2025-10-08T14:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.837169 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.837200 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.837211 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.837225 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.837235 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:01Z","lastTransitionTime":"2025-10-08T14:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.940955 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.940994 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.941004 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.941021 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:01 crc kubenswrapper[4624]: I1008 14:24:01.941031 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:01Z","lastTransitionTime":"2025-10-08T14:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.043570 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.043601 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.043609 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.043622 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.043649 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:02Z","lastTransitionTime":"2025-10-08T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.137374 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs\") pod \"network-metrics-daemon-qrmz6\" (UID: \"8abf38af-8df3-49f9-9817-b4740d2a8b4a\") " pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:24:02 crc kubenswrapper[4624]: E1008 14:24:02.137523 4624 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 14:24:02 crc kubenswrapper[4624]: E1008 14:24:02.137688 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs podName:8abf38af-8df3-49f9-9817-b4740d2a8b4a nodeName:}" failed. No retries permitted until 2025-10-08 14:24:34.137668767 +0000 UTC m=+99.288603854 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs") pod "network-metrics-daemon-qrmz6" (UID: "8abf38af-8df3-49f9-9817-b4740d2a8b4a") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.146228 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.146257 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.146267 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.146284 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.146294 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:02Z","lastTransitionTime":"2025-10-08T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.248884 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.248921 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.248933 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.248950 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.248960 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:02Z","lastTransitionTime":"2025-10-08T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
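A second, distinct failure is interleaved here. The MountVolume error is not a CNI problem: "object \"openshift-multus\"/\"metrics-daemon-secret\" not registered" generally indicates the kubelet has not yet synced that Secret into its local cache, which is common while the node is still NotReady. Note the backoff arithmetic in the entry itself: the retry is scheduled for 14:24:34, i.e. durationBeforeRetry 32s after the failure at m=+99.29s of kubelet uptime. A quick check that the Secret exists on the API side (an illustrative command, not part of the captured journal):

    # If this returns the secret, the kubelet cache simply has not caught up yet
    oc -n openshift-multus get secret metrics-daemon-secret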
Has your network provider started?"} Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.350583 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.350611 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.350621 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.350651 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.350662 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:02Z","lastTransitionTime":"2025-10-08T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.453303 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.453330 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.453338 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.453350 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.453359 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:02Z","lastTransitionTime":"2025-10-08T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.465585 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:24:02 crc kubenswrapper[4624]: E1008 14:24:02.465738 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.555334 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.555373 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.555383 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.555397 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.555409 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:02Z","lastTransitionTime":"2025-10-08T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.657857 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.657888 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.657897 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.657911 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.657922 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:02Z","lastTransitionTime":"2025-10-08T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.759595 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.759624 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.759651 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.759664 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.759673 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:02Z","lastTransitionTime":"2025-10-08T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.861532 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.861570 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.861581 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.861596 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.861606 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:02Z","lastTransitionTime":"2025-10-08T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.883110 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.883160 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.883172 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.883186 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.883195 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:02Z","lastTransitionTime":"2025-10-08T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:02 crc kubenswrapper[4624]: E1008 14:24:02.894916 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:02Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.897539 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.897573 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.897585 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.897600 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.897610 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:02Z","lastTransitionTime":"2025-10-08T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:02 crc kubenswrapper[4624]: E1008 14:24:02.909106 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:02Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.911817 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.911851 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.911865 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.911880 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.911889 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:02Z","lastTransitionTime":"2025-10-08T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:02 crc kubenswrapper[4624]: E1008 14:24:02.921747 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:02Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.924289 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.924322 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.924333 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.924346 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.924354 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:02Z","lastTransitionTime":"2025-10-08T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:02 crc kubenswrapper[4624]: E1008 14:24:02.935850 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:02Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.938973 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.939001 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.939013 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.939026 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.939035 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:02Z","lastTransitionTime":"2025-10-08T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:02 crc kubenswrapper[4624]: E1008 14:24:02.950259 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:02Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:02 crc kubenswrapper[4624]: E1008 14:24:02.950422 4624 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.963251 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.963284 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.963294 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.963307 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:02 crc kubenswrapper[4624]: I1008 14:24:02.963316 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:02Z","lastTransitionTime":"2025-10-08T14:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.066985 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.067022 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.067033 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.067050 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.067062 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:03Z","lastTransitionTime":"2025-10-08T14:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.168649 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.168690 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.168699 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.168716 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.168726 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:03Z","lastTransitionTime":"2025-10-08T14:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.270954 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.270989 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.270998 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.271012 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.271021 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:03Z","lastTransitionTime":"2025-10-08T14:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.373528 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.373563 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.373572 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.373586 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.373595 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:03Z","lastTransitionTime":"2025-10-08T14:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.465472 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.465543 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:24:03 crc kubenswrapper[4624]: E1008 14:24:03.465616 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.465471 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:24:03 crc kubenswrapper[4624]: E1008 14:24:03.465683 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:24:03 crc kubenswrapper[4624]: E1008 14:24:03.465767 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.475009 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.475036 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.475044 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.475054 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.475063 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:03Z","lastTransitionTime":"2025-10-08T14:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.577563 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.577622 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.577655 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.577675 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.577688 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:03Z","lastTransitionTime":"2025-10-08T14:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.679785 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.679830 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.679838 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.679853 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.679865 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:03Z","lastTransitionTime":"2025-10-08T14:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.781799 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.781826 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.781834 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.781849 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.781857 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:03Z","lastTransitionTime":"2025-10-08T14:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.884201 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.884239 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.884273 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.884291 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.884301 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:03Z","lastTransitionTime":"2025-10-08T14:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.986766 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.986808 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.986819 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.986834 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:03 crc kubenswrapper[4624]: I1008 14:24:03.986845 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:03Z","lastTransitionTime":"2025-10-08T14:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.088627 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.088685 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.088696 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.088713 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.088724 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:04Z","lastTransitionTime":"2025-10-08T14:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.190708 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.190744 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.190756 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.190770 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.190782 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:04Z","lastTransitionTime":"2025-10-08T14:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.293477 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.293512 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.293523 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.293542 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.293555 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:04Z","lastTransitionTime":"2025-10-08T14:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.396102 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.396133 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.396143 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.396158 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.396170 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:04Z","lastTransitionTime":"2025-10-08T14:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.466068 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:24:04 crc kubenswrapper[4624]: E1008 14:24:04.466184 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.498070 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.498101 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.498110 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.498122 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.498131 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:04Z","lastTransitionTime":"2025-10-08T14:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.600968 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.601010 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.601022 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.601040 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.601053 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:04Z","lastTransitionTime":"2025-10-08T14:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.703892 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.703936 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.703948 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.703972 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.703986 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:04Z","lastTransitionTime":"2025-10-08T14:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.806082 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.806116 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.806131 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.806146 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.806158 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:04Z","lastTransitionTime":"2025-10-08T14:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.856544 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-47hzf_48aee8dd-6063-4d3c-b65a-f37ce1ccdb82/kube-multus/0.log" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.856579 4624 generic.go:334] "Generic (PLEG): container finished" podID="48aee8dd-6063-4d3c-b65a-f37ce1ccdb82" containerID="c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b" exitCode=1 Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.856603 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-47hzf" event={"ID":"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82","Type":"ContainerDied","Data":"c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b"} Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.857072 4624 scope.go:117] "RemoveContainer" containerID="c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.872206 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPa
th\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:04Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.883853 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{
\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630954dc6b9710e09d5ba88bf292a65247c48ce5109d02f7dfb3c2f268b0cb24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:04Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.894184 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:04Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.904924 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:04Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.910074 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.910100 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.910108 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.910122 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.910131 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:04Z","lastTransitionTime":"2025-10-08T14:24:04Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.917975 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:04Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.933938 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:04Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.949437 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:24:04Z\\\",\\\"message\\\":\\\"2025-10-08T14:23:18+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8c8c904-008d-47bb-b65b-6e2a0b3a6d8c\\\\n2025-10-08T14:23:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8c8c904-008d-47bb-b65b-6e2a0b3a6d8c to /host/opt/cni/bin/\\\\n2025-10-08T14:23:19Z [verbose] multus-daemon started\\\\n2025-10-08T14:23:19Z [verbose] Readiness Indicator file check\\\\n2025-10-08T14:24:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:04Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.973268 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57e9cbdb7f49099703c1b09ccbfdeae79efefcbadf4a5ddd5f5d3461bad4e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57e9cbdb7f49099703c1b09ccbfdeae79efefcbadf4a5ddd5f5d3461bad4e1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:49Z\\\",\\\"message\\\":\\\"orkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1008 14:23:49.136225 6274 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.136588 6274 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.136858 6274 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.137282 6274 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 14:23:49.137333 6274 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 14:23:49.137399 6274 factory.go:656] Stopping watch factory\\\\nI1008 14:23:49.137419 6274 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 14:23:49.137429 6274 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 14:23:49.168490 6274 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1008 14:23:49.168518 6274 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1008 14:23:49.168570 6274 ovnkube.go:599] Stopped ovnkube\\\\nI1008 14:23:49.168596 6274 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 14:23:49.168689 6274 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:04Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.984591 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:04Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:04 crc kubenswrapper[4624]: I1008 14:24:04.996188 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5306007c-dc93-41eb-8623-19adb6234d92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebd1c917e1220c97333e5377ef7dcee55acdf0a9db894eb0ed5dc5705075ceff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dc0ebdb4e16b2f9422955ffcb905fb339b1693769f313282484c51cbbbf4f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7mdh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:04Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.007772 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ov
nkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.012031 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.012053 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.012060 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.012072 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.012080 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:05Z","lastTransitionTime":"2025-10-08T14:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.018833 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.029522 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.041875 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.053064 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.062964 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrmz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8abf38af-8df3-49f9-9817-b4740d2a8b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrmz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.072653 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a8e0a93-2abf-4287-b5b4-14d480ab7808\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://920d41c8a17e549dbc9e474897693799091e5686977403a61bfef3b5373963ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66582c16b1a735d597ac77df3780cd199953116b97835aed248c275ff3eea23c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2ce3e7ea0415656d449e2f2b4677e2ff8654984ff2417e29fd6a240938eefc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.114505 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.114551 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.114562 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.114579 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.114590 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:05Z","lastTransitionTime":"2025-10-08T14:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.216925 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.216957 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.216967 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.216982 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.216992 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:05Z","lastTransitionTime":"2025-10-08T14:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.319608 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.319658 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.319667 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.319682 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.319692 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:05Z","lastTransitionTime":"2025-10-08T14:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.422517 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.422545 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.422554 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.422568 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.422579 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:05Z","lastTransitionTime":"2025-10-08T14:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.465077 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.465094 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.465465 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:24:05 crc kubenswrapper[4624]: E1008 14:24:05.465549 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.465722 4624 scope.go:117] "RemoveContainer" containerID="d57e9cbdb7f49099703c1b09ccbfdeae79efefcbadf4a5ddd5f5d3461bad4e1d" Oct 08 14:24:05 crc kubenswrapper[4624]: E1008 14:24:05.465744 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:24:05 crc kubenswrapper[4624]: E1008 14:24:05.465860 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" Oct 08 14:24:05 crc kubenswrapper[4624]: E1008 14:24:05.465890 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.475229 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.484347 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5306007c-dc93-41eb-8623-19adb6234d92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebd1c917e1220c97333e5377ef7dcee55acdf0a9db894eb0ed5dc5705075ceff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dc0ebdb4e16b2f9422955ffcb905fb339b1693769f313282484c51cbbbf4f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7mdh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 
14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.497547 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.510281 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.522998 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:24:04Z\\\",\\\"message\\\":\\\"2025-10-08T14:23:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8c8c904-008d-47bb-b65b-6e2a0b3a6d8c\\\\n2025-10-08T14:23:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8c8c904-008d-47bb-b65b-6e2a0b3a6d8c to /host/opt/cni/bin/\\\\n2025-10-08T14:23:19Z [verbose] multus-daemon started\\\\n2025-10-08T14:23:19Z [verbose] Readiness Indicator file check\\\\n2025-10-08T14:24:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.524390 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.524430 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.524444 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.524460 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.524469 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:05Z","lastTransitionTime":"2025-10-08T14:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.545280 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57e9cbdb7f49099703c1b09ccbfdeae79efefcb
adf4a5ddd5f5d3461bad4e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57e9cbdb7f49099703c1b09ccbfdeae79efefcbadf4a5ddd5f5d3461bad4e1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:49Z\\\",\\\"message\\\":\\\"orkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1008 14:23:49.136225 6274 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.136588 6274 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.136858 6274 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.137282 6274 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 14:23:49.137333 6274 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 14:23:49.137399 6274 factory.go:656] Stopping watch factory\\\\nI1008 14:23:49.137419 6274 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 14:23:49.137429 6274 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 14:23:49.168490 6274 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1008 14:23:49.168518 6274 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1008 14:23:49.168570 6274 ovnkube.go:599] Stopped ovnkube\\\\nI1008 14:23:49.168596 6274 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 14:23:49.168689 6274 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.559909 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.571331 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrmz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8abf38af-8df3-49f9-9817-b4740d2a8b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrmz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.582719 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a8e0a93-2abf-4287-b5b4-14d480ab7808\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://920d41c8a17e549dbc9e474897693799091e5686977403a61bfef3b5373963ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66582c16b1a735d597ac77df3780cd199953116b97835aed248c275ff3eea23c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2ce3e7ea0415656d449e2f2b4677e2ff8654984ff2417e29fd6a240938eefc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.593114 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 
14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.606057 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.617461 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630954dc6b9710e09d5ba88bf292a65247c48ce5109d02f7dfb3c2f268b0cb24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.627226 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.627729 4624 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.627756 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.627767 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.627783 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.627793 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:05Z","lastTransitionTime":"2025-10-08T14:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.639958 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.652370 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.661565 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.671830 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.729701 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.729734 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.729745 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.729760 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.729772 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:05Z","lastTransitionTime":"2025-10-08T14:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.831715 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.831745 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.831755 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.831770 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.831780 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:05Z","lastTransitionTime":"2025-10-08T14:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.861177 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-47hzf_48aee8dd-6063-4d3c-b65a-f37ce1ccdb82/kube-multus/0.log" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.861230 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-47hzf" event={"ID":"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82","Type":"ContainerStarted","Data":"1f3de1c5bf27d90b3b9f4560a59ec4a6153df4d625d85bda278779415ae96343"} Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.874256 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630954dc6b9710e09d5ba88bf292a65247c48ce5109d02f7dfb3c2f268b0cb24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.885030 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.899453 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.913222 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.923006 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.933844 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.935040 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.935196 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.935212 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.935224 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.935234 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:05Z","lastTransitionTime":"2025-10-08T14:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.944114 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5306007c-dc93-41eb-8623-19adb6234d92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebd1c917e1220c97333e5377ef7dcee55acdf0a9db894eb0ed5dc5705075ceff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dc0ebdb4e16b2f9422955ffcb905fb339b1693769f313282484c51cbbbf4f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7mdh5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.956314 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 
2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.967867 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.978656 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f3de1c5bf27d90b3b9f4560a59ec4a6153df4d625d85bda278779415ae96343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:24:04Z\\\",\\\"message\\\":\\\"2025-10-08T14:23:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8c8c904-008d-47bb-b65b-6e2a0b3a6d8c\\\\n2025-10-08T14:23:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8c8c904-008d-47bb-b65b-6e2a0b3a6d8c to /host/opt/cni/bin/\\\\n2025-10-08T14:23:19Z [verbose] multus-daemon started\\\\n2025-10-08T14:23:19Z [verbose] Readiness Indicator file check\\\\n2025-10-08T14:24:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:05 crc kubenswrapper[4624]: I1008 14:24:05.995362 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57e9cbdb7f49099703c1b09ccbfdeae79efefcbadf4a5ddd5f5d3461bad4e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57e9cbdb7f49099703c1b09ccbfdeae79efefcbadf4a5ddd5f5d3461bad4e1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:49Z\\\",\\\"message\\\":\\\"orkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1008 14:23:49.136225 6274 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.136588 6274 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.136858 6274 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.137282 6274 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 14:23:49.137333 6274 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 14:23:49.137399 6274 factory.go:656] Stopping watch factory\\\\nI1008 14:23:49.137419 6274 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 14:23:49.137429 6274 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 14:23:49.168490 6274 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1008 14:23:49.168518 6274 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1008 14:23:49.168570 6274 ovnkube.go:599] Stopped ovnkube\\\\nI1008 14:23:49.168596 6274 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 14:23:49.168689 6274 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:05Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.004799 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:06Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.035748 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrmz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8abf38af-8df3-49f9-9817-b4740d2a8b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-qrmz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:06Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.037910 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.037952 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.037966 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.037987 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.038233 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:06Z","lastTransitionTime":"2025-10-08T14:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.058752 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a8e0a93-2abf-4287-b5b4-14d480ab7808\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://920d41c8a17e549dbc9e474897693799091e5686977403a61bfef3b5373963ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66582c16b1a735d597ac77df3780cd199953116b97835aed248c275ff3eea23c\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2ce3e7ea0415656d449e2f2b4677e2ff8654984ff2417e29fd6a240938eefc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:06Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.076765 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:06Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.092995 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:06Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.102285 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:06Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.139862 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.139900 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.139914 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.139929 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.139941 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:06Z","lastTransitionTime":"2025-10-08T14:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.241726 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.241767 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.241775 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.241790 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.241801 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:06Z","lastTransitionTime":"2025-10-08T14:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.343713 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.343978 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.344101 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.344194 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.344286 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:06Z","lastTransitionTime":"2025-10-08T14:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.446659 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.446697 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.446706 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.446721 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.446733 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:06Z","lastTransitionTime":"2025-10-08T14:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.465080 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:24:06 crc kubenswrapper[4624]: E1008 14:24:06.465212 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.548696 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.549223 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.549417 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.549573 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.549754 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:06Z","lastTransitionTime":"2025-10-08T14:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.652061 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.652340 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.652422 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.652515 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.652591 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:06Z","lastTransitionTime":"2025-10-08T14:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.754913 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.755163 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.755282 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.755379 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.755458 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:06Z","lastTransitionTime":"2025-10-08T14:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.858065 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.858273 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.858376 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.858450 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.858565 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:06Z","lastTransitionTime":"2025-10-08T14:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.960659 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.960693 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.960703 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.960720 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:06 crc kubenswrapper[4624]: I1008 14:24:06.960731 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:06Z","lastTransitionTime":"2025-10-08T14:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.063125 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.063175 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.063184 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.063196 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.063204 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:07Z","lastTransitionTime":"2025-10-08T14:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.165507 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.165541 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.165552 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.165568 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.165578 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:07Z","lastTransitionTime":"2025-10-08T14:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.268118 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.268160 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.268170 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.268187 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.268199 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:07Z","lastTransitionTime":"2025-10-08T14:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.370897 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.371417 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.371515 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.371601 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.371711 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:07Z","lastTransitionTime":"2025-10-08T14:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.464972 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.465009 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.464985 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:24:07 crc kubenswrapper[4624]: E1008 14:24:07.465197 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:24:07 crc kubenswrapper[4624]: E1008 14:24:07.465926 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:24:07 crc kubenswrapper[4624]: E1008 14:24:07.466165 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.474740 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.474797 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.474806 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.474820 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.474830 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:07Z","lastTransitionTime":"2025-10-08T14:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.577355 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.577382 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.577391 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.577403 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.577411 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:07Z","lastTransitionTime":"2025-10-08T14:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.679765 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.679987 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.680096 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.680210 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.680318 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:07Z","lastTransitionTime":"2025-10-08T14:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.782138 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.782188 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.782197 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.782209 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.782217 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:07Z","lastTransitionTime":"2025-10-08T14:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.884253 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.884294 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.884304 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.884318 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.884329 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:07Z","lastTransitionTime":"2025-10-08T14:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.986428 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.986460 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.986468 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.986482 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:07 crc kubenswrapper[4624]: I1008 14:24:07.986491 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:07Z","lastTransitionTime":"2025-10-08T14:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.088562 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.088604 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.088612 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.088625 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.088650 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:08Z","lastTransitionTime":"2025-10-08T14:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.190702 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.190732 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.190741 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.190758 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.190768 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:08Z","lastTransitionTime":"2025-10-08T14:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.293199 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.293261 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.293272 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.293288 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.293298 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:08Z","lastTransitionTime":"2025-10-08T14:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.396550 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.396596 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.396616 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.396651 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.396665 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:08Z","lastTransitionTime":"2025-10-08T14:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.465189 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:24:08 crc kubenswrapper[4624]: E1008 14:24:08.465395 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.499160 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.499191 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.499199 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.499212 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.499220 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:08Z","lastTransitionTime":"2025-10-08T14:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.602046 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.602084 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.602096 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.602113 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.602127 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:08Z","lastTransitionTime":"2025-10-08T14:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.704417 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.704447 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.704456 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.704469 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.704477 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:08Z","lastTransitionTime":"2025-10-08T14:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.806650 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.806681 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.806693 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.806715 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.806736 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:08Z","lastTransitionTime":"2025-10-08T14:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.909260 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.909296 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.909308 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.909322 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:08 crc kubenswrapper[4624]: I1008 14:24:08.909333 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:08Z","lastTransitionTime":"2025-10-08T14:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.011458 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.011502 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.011516 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.011532 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.011540 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:09Z","lastTransitionTime":"2025-10-08T14:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.113743 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.113784 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.113794 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.113808 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.113817 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:09Z","lastTransitionTime":"2025-10-08T14:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.215933 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.215959 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.215967 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.215979 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.215987 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:09Z","lastTransitionTime":"2025-10-08T14:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.318706 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.318737 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.318744 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.318758 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.318767 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:09Z","lastTransitionTime":"2025-10-08T14:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.421253 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.421323 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.421333 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.421345 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.421355 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:09Z","lastTransitionTime":"2025-10-08T14:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.465032 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.465093 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.465130 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 14:24:09 crc kubenswrapper[4624]: E1008 14:24:09.465164 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 08 14:24:09 crc kubenswrapper[4624]: E1008 14:24:09.465289 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 08 14:24:09 crc kubenswrapper[4624]: E1008 14:24:09.465352 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.523251 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.523279 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.523286 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.523299 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.523307 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:09Z","lastTransitionTime":"2025-10-08T14:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.625562 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.625594 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.625602 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.625615 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.625626 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:09Z","lastTransitionTime":"2025-10-08T14:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.728150 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.728204 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.728220 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.728236 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.728248 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:09Z","lastTransitionTime":"2025-10-08T14:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.831118 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.831149 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.831179 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.831194 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.831205 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:09Z","lastTransitionTime":"2025-10-08T14:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.932915 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.932953 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.932964 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.932979 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:09 crc kubenswrapper[4624]: I1008 14:24:09.932990 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:09Z","lastTransitionTime":"2025-10-08T14:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.035231 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.035265 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.035275 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.035291 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.035301 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:10Z","lastTransitionTime":"2025-10-08T14:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.138472 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.138515 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.138527 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.138546 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.138564 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:10Z","lastTransitionTime":"2025-10-08T14:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.241876 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.241938 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.241951 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.241969 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.241985 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:10Z","lastTransitionTime":"2025-10-08T14:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.348197 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.348234 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.348243 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.348257 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.348267 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:10Z","lastTransitionTime":"2025-10-08T14:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.450867 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.450905 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.450917 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.450930 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.450940 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:10Z","lastTransitionTime":"2025-10-08T14:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.465205 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6"
Oct 08 14:24:10 crc kubenswrapper[4624]: E1008 14:24:10.465320 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.552715 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.552742 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.552750 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.552763 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.552771 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:10Z","lastTransitionTime":"2025-10-08T14:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.655782 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.655815 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.655826 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.655841 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.655852 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:10Z","lastTransitionTime":"2025-10-08T14:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.758399 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.758439 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.758562 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.758584 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.758593 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:10Z","lastTransitionTime":"2025-10-08T14:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.860845 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.860888 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.860901 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.860917 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.860928 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:10Z","lastTransitionTime":"2025-10-08T14:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.962626 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.962670 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.962680 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.962779 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:10 crc kubenswrapper[4624]: I1008 14:24:10.962794 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:10Z","lastTransitionTime":"2025-10-08T14:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.065883 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.065939 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.065949 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.065966 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.066002 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:11Z","lastTransitionTime":"2025-10-08T14:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.169067 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.169101 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.169113 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.169130 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.169143 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:11Z","lastTransitionTime":"2025-10-08T14:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.271681 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.271715 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.271724 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.271739 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.271749 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:11Z","lastTransitionTime":"2025-10-08T14:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.373893 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.373931 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.373940 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.373957 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.373970 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:11Z","lastTransitionTime":"2025-10-08T14:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.465464 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 14:24:11 crc kubenswrapper[4624]: E1008 14:24:11.465672 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.465507 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 14:24:11 crc kubenswrapper[4624]: E1008 14:24:11.465749 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.465473 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 14:24:11 crc kubenswrapper[4624]: E1008 14:24:11.465800 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.475605 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.475654 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.475662 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.475675 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.475684 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:11Z","lastTransitionTime":"2025-10-08T14:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.577846 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.577888 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.577899 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.577916 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.577927 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:11Z","lastTransitionTime":"2025-10-08T14:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.680172 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.680203 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.680212 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.680226 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.680235 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:11Z","lastTransitionTime":"2025-10-08T14:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.782220 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.782252 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.782260 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.782274 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.782283 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:11Z","lastTransitionTime":"2025-10-08T14:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.884201 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.884231 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.884240 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.884252 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.884261 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:11Z","lastTransitionTime":"2025-10-08T14:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.986414 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.986453 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.986464 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.986482 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:11 crc kubenswrapper[4624]: I1008 14:24:11.986494 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:11Z","lastTransitionTime":"2025-10-08T14:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.088739 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.088782 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.088793 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.088807 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.088818 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:12Z","lastTransitionTime":"2025-10-08T14:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.191000 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.191033 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.191042 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.191055 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.191065 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:12Z","lastTransitionTime":"2025-10-08T14:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.293372 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.293406 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.293415 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.293430 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.293440 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:12Z","lastTransitionTime":"2025-10-08T14:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.395611 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.395701 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.395719 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.395760 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.395771 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:12Z","lastTransitionTime":"2025-10-08T14:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.464924 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6"
Oct 08 14:24:12 crc kubenswrapper[4624]: E1008 14:24:12.465077 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.497707 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.497735 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.497743 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.497756 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.497770 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:12Z","lastTransitionTime":"2025-10-08T14:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.599744 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.599778 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.599786 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.599798 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.599806 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:12Z","lastTransitionTime":"2025-10-08T14:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.702695 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.702734 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.702742 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.702755 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.702763 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:12Z","lastTransitionTime":"2025-10-08T14:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.804834 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.804876 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.804889 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.804904 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.804920 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:12Z","lastTransitionTime":"2025-10-08T14:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.907444 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.907473 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.907483 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.907498 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:12 crc kubenswrapper[4624]: I1008 14:24:12.907510 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:12Z","lastTransitionTime":"2025-10-08T14:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.010312 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.010365 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.010378 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.010394 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.010403 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:13Z","lastTransitionTime":"2025-10-08T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.116477 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.116523 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.116535 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.116553 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.116564 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:13Z","lastTransitionTime":"2025-10-08T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.218671 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.218707 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.218729 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.218745 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.218755 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:13Z","lastTransitionTime":"2025-10-08T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.320944 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.320981 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.320990 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.321004 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.321014 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:13Z","lastTransitionTime":"2025-10-08T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.347476 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.347516 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.347524 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.347540 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.347551 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:13Z","lastTransitionTime":"2025-10-08T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:13 crc kubenswrapper[4624]: E1008 14:24:13.360222 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:13Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.363688 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.363727 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.363739 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.363755 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.363766 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:13Z","lastTransitionTime":"2025-10-08T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:13 crc kubenswrapper[4624]: E1008 14:24:13.374488 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:13Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.377720 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.377752 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.377761 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.377774 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.377783 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:13Z","lastTransitionTime":"2025-10-08T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:13 crc kubenswrapper[4624]: E1008 14:24:13.389065 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:13Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.391964 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.391999 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.392007 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.392022 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.392031 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:13Z","lastTransitionTime":"2025-10-08T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:13 crc kubenswrapper[4624]: E1008 14:24:13.402667 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:13Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.405539 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.405563 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.405571 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.405586 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.405596 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:13Z","lastTransitionTime":"2025-10-08T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:13 crc kubenswrapper[4624]: E1008 14:24:13.416414 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:13Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:13 crc kubenswrapper[4624]: E1008 14:24:13.416528 4624 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.423265 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.423299 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.423310 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.423324 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.423334 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:13Z","lastTransitionTime":"2025-10-08T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.465131 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:24:13 crc kubenswrapper[4624]: E1008 14:24:13.465235 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.465391 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:24:13 crc kubenswrapper[4624]: E1008 14:24:13.465437 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.465689 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:24:13 crc kubenswrapper[4624]: E1008 14:24:13.465749 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.524755 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.524793 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.524802 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.524816 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.524826 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:13Z","lastTransitionTime":"2025-10-08T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.626771 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.626823 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.626832 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.626844 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.626852 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:13Z","lastTransitionTime":"2025-10-08T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.728991 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.729023 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.729032 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.729045 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.729054 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:13Z","lastTransitionTime":"2025-10-08T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.831591 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.831619 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.831627 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.831721 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.831732 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:13Z","lastTransitionTime":"2025-10-08T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.933897 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.933932 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.933940 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.933952 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:13 crc kubenswrapper[4624]: I1008 14:24:13.933960 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:13Z","lastTransitionTime":"2025-10-08T14:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.036099 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.036144 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.036157 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.036174 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.036186 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:14Z","lastTransitionTime":"2025-10-08T14:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.138191 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.138213 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.138221 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.138235 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.138244 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:14Z","lastTransitionTime":"2025-10-08T14:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.240471 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.240501 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.240509 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.240521 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.240530 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:14Z","lastTransitionTime":"2025-10-08T14:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.343192 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.343257 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.343270 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.343286 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.343318 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:14Z","lastTransitionTime":"2025-10-08T14:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.445277 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.445507 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.445517 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.445532 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.445542 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:14Z","lastTransitionTime":"2025-10-08T14:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.464696 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:24:14 crc kubenswrapper[4624]: E1008 14:24:14.464854 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.547205 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.547236 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.547244 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.547257 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.547267 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:14Z","lastTransitionTime":"2025-10-08T14:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.650085 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.650114 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.650124 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.650139 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.650150 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:14Z","lastTransitionTime":"2025-10-08T14:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.751960 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.751989 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.752000 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.752015 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.752025 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:14Z","lastTransitionTime":"2025-10-08T14:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.857292 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.857341 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.857353 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.857369 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.857387 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:14Z","lastTransitionTime":"2025-10-08T14:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.960384 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.960425 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.960434 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.960450 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:14 crc kubenswrapper[4624]: I1008 14:24:14.960459 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:14Z","lastTransitionTime":"2025-10-08T14:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.062566 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.062606 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.062617 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.062695 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.062713 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:15Z","lastTransitionTime":"2025-10-08T14:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.164389 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.164416 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.164424 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.164436 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.164444 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:15Z","lastTransitionTime":"2025-10-08T14:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.266662 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.266697 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.266706 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.266720 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.266730 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:15Z","lastTransitionTime":"2025-10-08T14:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.369051 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.369087 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.369104 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.369307 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.369324 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:15Z","lastTransitionTime":"2025-10-08T14:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.465162 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:24:15 crc kubenswrapper[4624]: E1008 14:24:15.465288 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.465402 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.465460 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:24:15 crc kubenswrapper[4624]: E1008 14:24:15.465516 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:24:15 crc kubenswrapper[4624]: E1008 14:24:15.465705 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.470823 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.470861 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.470870 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.470907 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.470928 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:15Z","lastTransitionTime":"2025-10-08T14:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.475928 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-08T14:24:15Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.487593 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:15Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.497962 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:15Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.507014 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrmz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8abf38af-8df3-49f9-9817-b4740d2a8b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrmz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:15Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.516333 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a8e0a93-2abf-4287-b5b4-14d480ab7808\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://920d41c8a17e549dbc9e474897693799091e5686977403a61bfef3b5373963ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66582c16b1a735d597ac77df3780cd199953116b97835aed248c275ff3eea23c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2ce3e7ea0415656d449e2f2b4677e2ff8654984ff2417e29fd6a240938eefc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:15Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.529069 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:15Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.543905 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630954dc6b9710e09d5ba88bf292a65247c48ce5109d02f7dfb3c2f268b0cb24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:15Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.554745 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:15Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.567039 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:15Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.573928 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.573978 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.573991 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.574008 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.574020 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:15Z","lastTransitionTime":"2025-10-08T14:24:15Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.582854 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:15Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.594766 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:15Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.606465 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f3de1c5bf27d90b3b9f4560a59ec4a6153df4d625d85bda278779415ae96343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:24:04Z\\\",\\\"message\\\":\\\"2025-10-08T14:23:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8c8c904-008d-47bb-b65b-6e2a0b3a6d8c\\\\n2025-10-08T14:23:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8c8c904-008d-47bb-b65b-6e2a0b3a6d8c to /host/opt/cni/bin/\\\\n2025-10-08T14:23:19Z [verbose] multus-daemon started\\\\n2025-10-08T14:23:19Z [verbose] Readiness 
Indicator file check\\\\n2025-10-08T14:24:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:15Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.623588 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57e9cbdb7f49099703c1b09ccbfdeae79efefcbadf4a5ddd5f5d3461bad4e1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57e9cbdb7f49099703c1b09ccbfdeae79efefcbadf4a5ddd5f5d3461bad4e1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:49Z\\\",\\\"message\\\":\\\"orkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1008 14:23:49.136225 6274 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.136588 6274 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.136858 6274 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.137282 6274 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 14:23:49.137333 6274 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 14:23:49.137399 6274 factory.go:656] Stopping watch factory\\\\nI1008 14:23:49.137419 6274 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 14:23:49.137429 6274 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 14:23:49.168490 6274 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1008 14:23:49.168518 6274 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1008 14:23:49.168570 6274 ovnkube.go:599] Stopped ovnkube\\\\nI1008 14:23:49.168596 6274 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 14:23:49.168689 6274 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:15Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.633942 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:15Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.643479 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5306007c-dc93-41eb-8623-19adb6234d92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebd1c917e1220c97333e5377ef7dcee55acdf0a9db894eb0ed5dc5705075ceff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dc0ebdb4e16b2f9422955ffcb905fb339b1693769f313282484c51cbbbf4f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7mdh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:15Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.658683 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ov
nkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:15Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.669980 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:15Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.676271 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.676379 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.676608 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.676787 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.677000 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:15Z","lastTransitionTime":"2025-10-08T14:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.779239 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.779457 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.779545 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.779674 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.779742 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:15Z","lastTransitionTime":"2025-10-08T14:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.881993 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.882035 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.882047 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.882063 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.882074 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:15Z","lastTransitionTime":"2025-10-08T14:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.984800 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.984836 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.984847 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.984862 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:15 crc kubenswrapper[4624]: I1008 14:24:15.984872 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:15Z","lastTransitionTime":"2025-10-08T14:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.087318 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.087730 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.087851 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.087984 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.088115 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:16Z","lastTransitionTime":"2025-10-08T14:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.190603 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.190873 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.190941 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.191022 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.191092 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:16Z","lastTransitionTime":"2025-10-08T14:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.293693 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.293971 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.294184 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.294318 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.294424 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:16Z","lastTransitionTime":"2025-10-08T14:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.398277 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.398793 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.398895 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.398981 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.399063 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:16Z","lastTransitionTime":"2025-10-08T14:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.464967 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:24:16 crc kubenswrapper[4624]: E1008 14:24:16.465337 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.501115 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.501180 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.501192 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.501204 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.501213 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:16Z","lastTransitionTime":"2025-10-08T14:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.603509 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.603784 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.603877 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.603942 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.604014 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:16Z","lastTransitionTime":"2025-10-08T14:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.707032 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.707057 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.707066 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.707078 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.707087 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:16Z","lastTransitionTime":"2025-10-08T14:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.809727 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.809756 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.809765 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.809779 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.809787 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:16Z","lastTransitionTime":"2025-10-08T14:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.912408 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.912451 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.912467 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.912483 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:16 crc kubenswrapper[4624]: I1008 14:24:16.912493 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:16Z","lastTransitionTime":"2025-10-08T14:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.014595 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.014648 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.014657 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.014671 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.014679 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:17Z","lastTransitionTime":"2025-10-08T14:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.117111 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.117185 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.117194 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.117210 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.117220 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:17Z","lastTransitionTime":"2025-10-08T14:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.220142 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.220194 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.220206 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.220222 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.220233 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:17Z","lastTransitionTime":"2025-10-08T14:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.322139 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.322173 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.322184 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.322216 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.322227 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:17Z","lastTransitionTime":"2025-10-08T14:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.424746 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.424822 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.424836 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.424849 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.424858 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:17Z","lastTransitionTime":"2025-10-08T14:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.465400 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.465465 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.465400 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:24:17 crc kubenswrapper[4624]: E1008 14:24:17.465534 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:24:17 crc kubenswrapper[4624]: E1008 14:24:17.465619 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:24:17 crc kubenswrapper[4624]: E1008 14:24:17.465765 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.527058 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.527088 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.527098 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.527116 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.527126 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:17Z","lastTransitionTime":"2025-10-08T14:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.629748 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.629812 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.629829 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.629848 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.629859 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:17Z","lastTransitionTime":"2025-10-08T14:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.731701 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.731735 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.731745 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.731760 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.731772 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:17Z","lastTransitionTime":"2025-10-08T14:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.833684 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.833710 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.833719 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.833733 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.833741 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:17Z","lastTransitionTime":"2025-10-08T14:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.935599 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.935971 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.935984 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.936002 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:17 crc kubenswrapper[4624]: I1008 14:24:17.936014 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:17Z","lastTransitionTime":"2025-10-08T14:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.038747 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.038779 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.038787 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.038799 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.038808 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:18Z","lastTransitionTime":"2025-10-08T14:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.141763 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.141805 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.141814 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.141829 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.141840 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:18Z","lastTransitionTime":"2025-10-08T14:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.244224 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.244282 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.244294 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.244311 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.244322 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:18Z","lastTransitionTime":"2025-10-08T14:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.346532 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.346581 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.346592 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.346606 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.346616 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:18Z","lastTransitionTime":"2025-10-08T14:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.449353 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.449384 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.449396 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.449412 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.449422 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:18Z","lastTransitionTime":"2025-10-08T14:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.465076 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:24:18 crc kubenswrapper[4624]: E1008 14:24:18.465182 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.551810 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.551849 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.551859 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.551876 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.551886 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:18Z","lastTransitionTime":"2025-10-08T14:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.654452 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.654500 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.654511 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.654526 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.654535 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:18Z","lastTransitionTime":"2025-10-08T14:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.756484 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.756524 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.756534 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.756549 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.756561 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:18Z","lastTransitionTime":"2025-10-08T14:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.859055 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.859086 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.859095 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.859130 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.859140 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:18Z","lastTransitionTime":"2025-10-08T14:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.961363 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.961395 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.961404 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.961417 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:18 crc kubenswrapper[4624]: I1008 14:24:18.961425 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:18Z","lastTransitionTime":"2025-10-08T14:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.064621 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.064679 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.064688 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.064709 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.064720 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:19Z","lastTransitionTime":"2025-10-08T14:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.166691 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.166769 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.166781 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.166797 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.166808 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:19Z","lastTransitionTime":"2025-10-08T14:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.203963 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.204135 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.204173 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:24:19 crc kubenswrapper[4624]: E1008 14:24:19.204233 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:23.204202502 +0000 UTC m=+148.355137579 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:24:19 crc kubenswrapper[4624]: E1008 14:24:19.204241 4624 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 14:24:19 crc kubenswrapper[4624]: E1008 14:24:19.204273 4624 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 14:24:19 crc kubenswrapper[4624]: E1008 14:24:19.204330 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 14:25:23.204313595 +0000 UTC m=+148.355248772 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 14:24:19 crc kubenswrapper[4624]: E1008 14:24:19.204354 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 14:25:23.204345126 +0000 UTC m=+148.355280293 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.268979 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.269019 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.269029 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.269045 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.269056 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:19Z","lastTransitionTime":"2025-10-08T14:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.304969 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.305036 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:24:19 crc kubenswrapper[4624]: E1008 14:24:19.305172 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 14:24:19 crc kubenswrapper[4624]: E1008 14:24:19.305190 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 14:24:19 crc kubenswrapper[4624]: E1008 14:24:19.305237 4624 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:24:19 crc kubenswrapper[4624]: E1008 14:24:19.305239 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 14:24:19 crc kubenswrapper[4624]: E1008 14:24:19.305270 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 14:24:19 crc kubenswrapper[4624]: E1008 14:24:19.305281 4624 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:24:19 crc kubenswrapper[4624]: E1008 14:24:19.305287 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 14:25:23.305273315 +0000 UTC m=+148.456208392 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:24:19 crc kubenswrapper[4624]: E1008 14:24:19.305331 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 14:25:23.305316436 +0000 UTC m=+148.456251513 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.371618 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.371670 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.371681 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.371697 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.371708 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:19Z","lastTransitionTime":"2025-10-08T14:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.465179 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.465249 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.465259 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:24:19 crc kubenswrapper[4624]: E1008 14:24:19.465326 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:24:19 crc kubenswrapper[4624]: E1008 14:24:19.465410 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:24:19 crc kubenswrapper[4624]: E1008 14:24:19.465599 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.474164 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.474201 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.474220 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.474237 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.474248 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:19Z","lastTransitionTime":"2025-10-08T14:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.577137 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.577178 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.577191 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.577209 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.577219 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:19Z","lastTransitionTime":"2025-10-08T14:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.679375 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.679585 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.679601 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.679618 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.679629 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:19Z","lastTransitionTime":"2025-10-08T14:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.781361 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.781434 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.781446 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.781463 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.781474 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:19Z","lastTransitionTime":"2025-10-08T14:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.883987 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.884016 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.884024 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.884039 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.884049 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:19Z","lastTransitionTime":"2025-10-08T14:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.986553 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.986600 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.986613 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.986628 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:19 crc kubenswrapper[4624]: I1008 14:24:19.986653 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:19Z","lastTransitionTime":"2025-10-08T14:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.089309 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.089448 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.089512 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.089572 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.089655 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:20Z","lastTransitionTime":"2025-10-08T14:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.191742 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.191775 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.191785 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.191798 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.191808 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:20Z","lastTransitionTime":"2025-10-08T14:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.294476 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.294520 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.294531 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.294548 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.294558 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:20Z","lastTransitionTime":"2025-10-08T14:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.397331 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.397376 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.397387 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.397403 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.397415 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:20Z","lastTransitionTime":"2025-10-08T14:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.465613 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.466369 4624 scope.go:117] "RemoveContainer" containerID="d57e9cbdb7f49099703c1b09ccbfdeae79efefcbadf4a5ddd5f5d3461bad4e1d" Oct 08 14:24:20 crc kubenswrapper[4624]: E1008 14:24:20.466465 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.498959 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.499198 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.499212 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.499368 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.499403 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:20Z","lastTransitionTime":"2025-10-08T14:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.602630 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.602905 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.602913 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.602926 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.602933 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:20Z","lastTransitionTime":"2025-10-08T14:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.705103 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.705140 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.705150 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.705164 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.705173 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:20Z","lastTransitionTime":"2025-10-08T14:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.807207 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.807243 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.807258 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.807274 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.807288 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:20Z","lastTransitionTime":"2025-10-08T14:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.903140 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbsj6_aad1abb7-073f-4157-b39f-ddc71fbab31d/ovnkube-controller/2.log" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.905400 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerStarted","Data":"1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c"} Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.906418 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.912623 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.912676 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.912687 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.912703 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.912718 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:20Z","lastTransitionTime":"2025-10-08T14:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.919218 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630954dc6b9710e09d5ba88bf292a65247c48ce5109d02f7dfb3c2f268b0cb24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.929644 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.943739 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.958549 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.970004 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.981161 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:20 crc kubenswrapper[4624]: I1008 14:24:20.991816 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:20Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.002713 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.015395 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.015450 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.015463 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.015478 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.015488 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:21Z","lastTransitionTime":"2025-10-08T14:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.025756 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f3de1c5bf27d90b3b9f4560a59ec4a6153df4d625d85bda278779415ae96343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:24:04Z\\\",\\\"message\\\":\\\"2025-10-08T14:23:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8c8c904-008d-47bb-b65b-6e2a0b3a6d8c\\\\n2025-10-08T14:23:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8c8c904-008d-47bb-b65b-6e2a0b3a6d8c to /host/opt/cni/bin/\\\\n2025-10-08T14:23:19Z [verbose] multus-daemon started\\\\n2025-10-08T14:23:19Z [verbose] Readiness Indicator file check\\\\n2025-10-08T14:24:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.042715 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57e9cbdb7f49099703c1b09ccbfdeae79efefcbadf4a5ddd5f5d3461bad4e1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:49Z\\\",\\\"message\\\":\\\"orkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1008 14:23:49.136225 6274 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.136588 6274 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.136858 6274 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.137282 6274 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 14:23:49.137333 6274 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 14:23:49.137399 6274 factory.go:656] Stopping watch factory\\\\nI1008 14:23:49.137419 6274 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 14:23:49.137429 6274 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 14:23:49.168490 6274 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1008 14:23:49.168518 6274 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1008 14:23:49.168570 6274 ovnkube.go:599] Stopped ovnkube\\\\nI1008 14:23:49.168596 6274 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 14:23:49.168689 6274 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.052866 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.065701 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5306007c-dc93-41eb-8623-19adb6234d92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebd1c917e1220c97333e5377ef7dcee55acdf0a9db894eb0ed5dc5705075ceff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dc0ebdb4e16b2f9422955ffcb905fb339b1693769f313282484c51cbbbf4f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7mdh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:21Z is after 2025-08-24T17:21:41Z" Oct 08 
14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.080879 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a8e0a93-2abf-4287-b5b4-14d480ab7808\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://920d41c8a17e549dbc9e474897693799091e5686977403a61bfef3b5373963ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66582c16b1a735d597ac77df3780cd199953116b97835aed248c275ff3eea23c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2ce3e7ea0415656d449e2f2b4677e2ff8654984ff2417e29fd6a240938eefc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.096016 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.110113 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.118009 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.118043 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.118052 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.118067 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.118078 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:21Z","lastTransitionTime":"2025-10-08T14:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.125665 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.138841 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrmz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8abf38af-8df3-49f9-9817-b4740d2a8b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrmz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.219975 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.220006 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.220023 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.220040 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.220050 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:21Z","lastTransitionTime":"2025-10-08T14:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.321802 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.321829 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.321839 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.321853 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.321862 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:21Z","lastTransitionTime":"2025-10-08T14:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.424490 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.424832 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.424851 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.424868 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.424880 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:21Z","lastTransitionTime":"2025-10-08T14:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.465105 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.465194 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:24:21 crc kubenswrapper[4624]: E1008 14:24:21.465263 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.465401 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:24:21 crc kubenswrapper[4624]: E1008 14:24:21.465474 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:24:21 crc kubenswrapper[4624]: E1008 14:24:21.465606 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.475021 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.526717 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.526753 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.526763 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.526778 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.526788 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:21Z","lastTransitionTime":"2025-10-08T14:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.629213 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.629249 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.629259 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.629274 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.629284 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:21Z","lastTransitionTime":"2025-10-08T14:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.731364 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.731402 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.731410 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.731424 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.731434 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:21Z","lastTransitionTime":"2025-10-08T14:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.833865 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.833897 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.833926 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.833941 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.833951 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:21Z","lastTransitionTime":"2025-10-08T14:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.909900 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbsj6_aad1abb7-073f-4157-b39f-ddc71fbab31d/ovnkube-controller/3.log" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.910900 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbsj6_aad1abb7-073f-4157-b39f-ddc71fbab31d/ovnkube-controller/2.log" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.913853 4624 generic.go:334] "Generic (PLEG): container finished" podID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerID="1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c" exitCode=1 Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.913941 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerDied","Data":"1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c"} Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.913995 4624 scope.go:117] "RemoveContainer" containerID="d57e9cbdb7f49099703c1b09ccbfdeae79efefcbadf4a5ddd5f5d3461bad4e1d" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.914652 4624 scope.go:117] "RemoveContainer" containerID="1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c" Oct 08 14:24:21 crc kubenswrapper[4624]: E1008 14:24:21.914805 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.926951 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630954dc6b9710e09d5ba88bf292a65247c48ce5109d02f7dfb3c2f268b0cb24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.935836 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.935902 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.935911 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.935926 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.935935 4624 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:21Z","lastTransitionTime":"2025-10-08T14:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.938219 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.953075 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverri
de-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.964681 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.974057 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.986485 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:21Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:21 crc kubenswrapper[4624]: I1008 14:24:21.996763 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5306007c-dc93-41eb-8623-19adb6234d92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebd1c917e1220c97333e5377ef7dcee55acdf0a9db894eb0ed5dc5705075ceff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dc0ebdb4e16b2f9422955ffcb905fb339b1693769f313282484c51cbbbf4f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7mdh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:21Z is after 2025-08-24T17:21:41Z" Oct 08 
14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.006911 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.016914 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.028764 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f3de1c5bf27d90b3b9f4560a59ec4a6153df4d625d85bda278779415ae96343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:24:04Z\\\",\\\"message\\\":\\\"2025-10-08T14:23:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8c8c904-008d-47bb-b65b-6e2a0b3a6d8c\\\\n2025-10-08T14:23:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8c8c904-008d-47bb-b65b-6e2a0b3a6d8c to /host/opt/cni/bin/\\\\n2025-10-08T14:23:19Z [verbose] multus-daemon started\\\\n2025-10-08T14:23:19Z [verbose] Readiness Indicator file check\\\\n2025-10-08T14:24:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.037998 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.038033 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.038044 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.038059 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.038070 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:22Z","lastTransitionTime":"2025-10-08T14:24:22Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.045696 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
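
The NodeNotReady condition and the earlier kube-multus restart share one cause: ovnkube-controller keeps exiting before it can write the default network's CNI config, so the readiness indicator file multus polls for (10-ovn-kubernetes.conf) never appears and the kubelet keeps reporting "no CNI configuration file". A small sketch of that kind of poll, assuming the container mount layout shown in the multus log above; the 45-second budget mirrors the 14:23:19 to 14:24:04 window in that entry, and this is an illustration of the check, not multus's actual code:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Path as seen inside the multus container per the log above; on the
    	// host it corresponds to /run/multus/cni/net.d/10-ovn-kubernetes.conf.
    	const indicator = "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"

    	deadline := time.Now().Add(45 * time.Second)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(indicator); err == nil {
    			fmt.Println("readiness indicator present; default network is up")
    			return
    		}
    		time.Sleep(time.Second) // re-check until the file shows up
    	}
    	fmt.Println("timed out waiting for the readiness indicator file")
    }
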
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57e9cbdb7f49099703c1b09ccbfdeae79efefcbadf4a5ddd5f5d3461bad4e1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:23:49Z\\\",\\\"message\\\":\\\"orkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1008 14:23:49.136225 6274 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.136588 6274 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.136858 6274 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 14:23:49.137282 6274 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1008 14:23:49.137333 6274 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 14:23:49.137399 6274 factory.go:656] Stopping watch factory\\\\nI1008 14:23:49.137419 6274 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 14:23:49.137429 6274 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 14:23:49.168490 6274 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1008 14:23:49.168518 6274 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1008 14:23:49.168570 6274 ovnkube.go:599] Stopped ovnkube\\\\nI1008 14:23:49.168596 6274 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 14:23:49.168689 6274 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:24:21Z\\\",\\\"message\\\":\\\"[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.246\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1008 14:24:21.186757 6709 services_controller.go:444] Built service openshift-network-console/networking-console-plugin LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF1008 14:24:21.186758 6709 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:21Z is after 2025-08-24T17:21:41Z]\\\\nI1008 
14:24:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.054430 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.063267 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrmz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8abf38af-8df3-49f9-9817-b4740d2a8b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-qrmz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.072511 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eced2d2c-83c6-4d15-b03e-ddacf0e34e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4c296167c0ab7db52a78baf21298a70a2f9c3dfc146bdecc4bcc809abac7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27753b9440b751e4763b3f99292d88307f7794740ff8c352e86b9cac4d21d3ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27753b9440b751e4763b3f99292d88307f7794740ff8c352e86b9cac4d21d3ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.082024 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a8e0a93-2abf-4287-b5b4-14d480ab7808\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://920d41c8a17e549dbc9e474897693799091e5686977403a61bfef3b5373963ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66582c16b1a735d597ac77df3780cd199953116b97835aed248c275ff3eea23c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2ce3e7ea0415656d449e2f2b4677e2ff8654984ff2417e29fd6a240938eefc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.091373 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.101323 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.112026 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.139758 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.139788 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.139797 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.139814 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.139825 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:22Z","lastTransitionTime":"2025-10-08T14:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.241961 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.241992 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.242002 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.242017 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.242028 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:22Z","lastTransitionTime":"2025-10-08T14:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.344669 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.344704 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.344713 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.344727 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.344735 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:22Z","lastTransitionTime":"2025-10-08T14:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.447256 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.447304 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.447313 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.447327 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.447337 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:22Z","lastTransitionTime":"2025-10-08T14:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.465177 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:24:22 crc kubenswrapper[4624]: E1008 14:24:22.465483 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.550286 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.550324 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.550336 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.550353 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.550365 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:22Z","lastTransitionTime":"2025-10-08T14:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.652992 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.653031 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.653041 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.653057 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.653068 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:22Z","lastTransitionTime":"2025-10-08T14:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.755518 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.755555 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.755563 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.755578 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.755589 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:22Z","lastTransitionTime":"2025-10-08T14:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.857678 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.857711 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.857721 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.857736 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.857747 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:22Z","lastTransitionTime":"2025-10-08T14:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.918845 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbsj6_aad1abb7-073f-4157-b39f-ddc71fbab31d/ovnkube-controller/3.log" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.922035 4624 scope.go:117] "RemoveContainer" containerID="1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c" Oct 08 14:24:22 crc kubenswrapper[4624]: E1008 14:24:22.922171 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.934509 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4
fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.947050 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0a
e3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630954dc6b9710e09d5ba88bf292a65247c48ce5109d02f7dfb3c2f268b0cb24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.956610 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.959776 4624 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.959814 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.959825 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.959842 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.959852 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:22Z","lastTransitionTime":"2025-10-08T14:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.965222 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.975150 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.985780 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:22 crc kubenswrapper[4624]: I1008 14:24:22.996724 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f3de1c5bf27d90b3b9f4560a59ec4a6153df4d625d85bda278779415ae96343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:24:04Z\\\",\\\"message\\\":\\\"2025-10-08T14:23:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8c8c904-008d-47bb-b65b-6e2a0b3a6d8c\\\\n2025-10-08T14:23:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8c8c904-008d-47bb-b65b-6e2a0b3a6d8c to /host/opt/cni/bin/\\\\n2025-10-08T14:23:19Z [verbose] multus-daemon started\\\\n2025-10-08T14:23:19Z [verbose] Readiness 
Indicator file check\\\\n2025-10-08T14:24:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:22Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.011424 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:24:21Z\\\",\\\"message\\\":\\\"[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.246\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1008 14:24:21.186757 6709 services_controller.go:444] Built service openshift-network-console/networking-console-plugin LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF1008 14:24:21.186758 6709 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:21Z is after 2025-08-24T17:21:41Z]\\\\nI1008 14:24:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:24:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.020404 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.031328 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5306007c-dc93-41eb-8623-19adb6234d92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebd1c917e1220c97333e5377ef7dcee55acdf0a9db894eb0ed5dc5705075ceff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dc0ebdb4e16b2f9422955ffcb905fb339b1693769f313282484c51cbbbf4f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7mdh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.041485 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ov
nkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.052417 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.061888 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.061922 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.061933 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.061948 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.061968 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:23Z","lastTransitionTime":"2025-10-08T14:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.063971 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.075593 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.085383 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.094210 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrmz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8abf38af-8df3-49f9-9817-b4740d2a8b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrmz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.103262 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eced2d2c-83c6-4d15-b03e-ddacf0e34e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4c296167c0ab7db52a78baf21298a70a2f9c3dfc146bdecc4bcc809abac7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27753b9440b751e4763b3f99292d88307f7794740ff8c352e86b9cac4d21d3ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27753b9440b751e4763b3f99292d88307f7794740ff8c352e86b9cac4d21d3ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.112836 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a8e0a93-2abf-4287-b5b4-14d480ab7808\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://920d41c8a17e549dbc9e474897693799091e5686977403a61bfef3b5373963ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66582c16b1a735d597ac77df3780cd199953116b97835aed248c275ff3eea23c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2ce3e7ea0415656d449e2f2b4677e2ff8654984ff2417e29fd6a240938eefc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:23Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.163800 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.163838 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.163849 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.163863 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.163873 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:23Z","lastTransitionTime":"2025-10-08T14:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.266272 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.266326 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.266338 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.266355 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.266367 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:23Z","lastTransitionTime":"2025-10-08T14:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.368344 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.368394 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.368404 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.368416 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.368425 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:23Z","lastTransitionTime":"2025-10-08T14:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.465505 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.465731 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.465713 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:24:23 crc kubenswrapper[4624]: E1008 14:24:23.465831 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:24:23 crc kubenswrapper[4624]: E1008 14:24:23.466038 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:24:23 crc kubenswrapper[4624]: E1008 14:24:23.466125 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.469812 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.469846 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.469855 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.469871 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.469884 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:23Z","lastTransitionTime":"2025-10-08T14:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.481248 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.571783 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.571867 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.571879 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.571895 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.571904 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:23Z","lastTransitionTime":"2025-10-08T14:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.673867 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.673903 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.673954 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.673971 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.673989 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:23Z","lastTransitionTime":"2025-10-08T14:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.711267 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.711304 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.711312 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.711324 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.711334 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:23Z","lastTransitionTime":"2025-10-08T14:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:23 crc kubenswrapper[4624]: E1008 14:24:23.721908 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:23Z is after 
2025-08-24T17:21:41Z" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.725077 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.725105 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.725113 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.725125 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.725134 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:23Z","lastTransitionTime":"2025-10-08T14:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:23 crc kubenswrapper[4624]: E1008 14:24:23.734920 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:23Z is after 
2025-08-24T17:21:41Z" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.737762 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.737784 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.737791 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.737804 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.737813 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:23Z","lastTransitionTime":"2025-10-08T14:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:23 crc kubenswrapper[4624]: E1008 14:24:23.748836 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:23Z is after 
2025-08-24T17:21:41Z" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.751737 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.751762 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.751770 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.751812 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.751823 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:23Z","lastTransitionTime":"2025-10-08T14:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:23 crc kubenswrapper[4624]: E1008 14:24:23.762014 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:23Z is after 
2025-08-24T17:21:41Z" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.764982 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.765021 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.765033 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.765048 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.765059 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:23Z","lastTransitionTime":"2025-10-08T14:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:23 crc kubenswrapper[4624]: E1008 14:24:23.775174 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e72be8b6-18a0-41a6-a9ba-9d43530841e9\\\",\\\"systemUUID\\\":\\\"d04b3def-39e5-42af-93c0-ddcd07e8aaf4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:23Z is after 
2025-08-24T17:21:41Z" Oct 08 14:24:23 crc kubenswrapper[4624]: E1008 14:24:23.775297 4624 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.776736 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.776780 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.776791 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.776807 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.776818 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:23Z","lastTransitionTime":"2025-10-08T14:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.879422 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.879455 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.879464 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.879479 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.879489 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:23Z","lastTransitionTime":"2025-10-08T14:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.981250 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.981299 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.981308 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.981320 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:23 crc kubenswrapper[4624]: I1008 14:24:23.981328 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:23Z","lastTransitionTime":"2025-10-08T14:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.083625 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.083675 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.083687 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.083701 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.083713 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:24Z","lastTransitionTime":"2025-10-08T14:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.185372 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.185409 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.185420 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.185438 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.185452 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:24Z","lastTransitionTime":"2025-10-08T14:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.287137 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.287173 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.287187 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.287207 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.287221 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:24Z","lastTransitionTime":"2025-10-08T14:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.388982 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.389022 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.389031 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.389044 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.389054 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:24Z","lastTransitionTime":"2025-10-08T14:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.465252 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:24:24 crc kubenswrapper[4624]: E1008 14:24:24.465406 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.492096 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.492124 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.492132 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.492145 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.492155 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:24Z","lastTransitionTime":"2025-10-08T14:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.594204 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.594232 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.594242 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.594258 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.594268 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:24Z","lastTransitionTime":"2025-10-08T14:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.696260 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.696297 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.696308 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.696324 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.696334 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:24Z","lastTransitionTime":"2025-10-08T14:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.798152 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.798215 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.798227 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.798244 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.798257 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:24Z","lastTransitionTime":"2025-10-08T14:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.900459 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.900516 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.900526 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.900541 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:24 crc kubenswrapper[4624]: I1008 14:24:24.900552 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:24Z","lastTransitionTime":"2025-10-08T14:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.003320 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.003359 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.003369 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.003386 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.003397 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:25Z","lastTransitionTime":"2025-10-08T14:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.107034 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.107069 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.107077 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.107092 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.107103 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:25Z","lastTransitionTime":"2025-10-08T14:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.209734 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.209789 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.209801 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.209822 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.209834 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:25Z","lastTransitionTime":"2025-10-08T14:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.312160 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.312234 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.312243 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.312256 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.312266 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:25Z","lastTransitionTime":"2025-10-08T14:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.419491 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.419540 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.419550 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.419566 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.419577 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:25Z","lastTransitionTime":"2025-10-08T14:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.465569 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.465579 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:24:25 crc kubenswrapper[4624]: E1008 14:24:25.465812 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.466012 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:24:25 crc kubenswrapper[4624]: E1008 14:24:25.466176 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:24:25 crc kubenswrapper[4624]: E1008 14:24:25.466205 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.477974 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71a24404-f993-478c-93b9-f507cf039957\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73dee617b94ebd47f94a884e6fb3c255e6a026f113dcdbc44f7b8a2b037fad4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3ab7aa62d6ca9aa9bccb478a225c4014ebecf35b8ef003208a6b912ed40e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c959cb466b75459bd579377fc672f2e48b174b66d404b26981ae9b0ae3bc56d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630954dc6b9710e09d5ba88bf292a65247c48ce5109d02f7dfb3c2f268b0cb24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://469a7809b0bde1488cb692f3d79ee986bbaa48fecc55a53bf8ad447b2306cf49\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"file observer\\\\nW1008 14:23:15.143459 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1008 14:23:15.143583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1008 14:23:15.144777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-883724404/tls.crt::/tmp/serving-cert-883724404/tls.key\\\\\\\"\\\\nI1008 14:23:15.355952 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1008 14:23:15.358270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1008 14:23:15.358289 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1008 14:23:15.358306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1008 14:23:15.358311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1008 14:23:15.365830 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1008 14:23:15.365861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1008 14:23:15.365872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1008 14:23:15.365877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 14:23:15.365888 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 14:23:15.365900 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 14:23:15.366132 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1008 14:23:15.367138 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0d17bace2ac21cadd2807fac0324cf6f174c3d36a5830dd8ea60227fee61728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0f0e3b1c8edf6528c990926d312554df549bb40af1a25ecd981758a045bce5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.487532 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a106d69-d531-4ee4-a9ed-505988ebd24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48a71b24d242be36712b166fed6977b2bacfe7f5c9cfbbb15208b3cea7086d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fptm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zfrv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.500110 4624 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e2555ab-0f5c-452a-a4c3-273f6a327e06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c193ec98e9cd35441e6b7398d8312e0d71d0d5c391379d106cf43b5e0fe4e3df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75655e6ff6967c01ea00bd582c54564981c4df158ae5e6cfe082299f984ba001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263b69a04a917e1f5e7ca1416c5c443ce222e4bfa0b748646c1ecce54ee5cdad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d7047d03af108925944de130ff1f928e595bb83eb9fc5ab06df1de79c2b5bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f4fa919c5ce977d31924b76360336115ddf5feda2e61af14d8c2d1b83ed8685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca4df7bf9c3e2b4103fe4a3f9338b2dfa1abeec7352e4394334789176bcac29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732c8a169b7d9c468aec6595b8a2a887aa48c93471354cf94afef1e65e161d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nr5w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gfq4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.517813 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9868db5c-38f8-4e7f-9e78-94b651d6388c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b958a067b9c705e07c70ac2747480305a5933d9d824a59390da41d9488b2e23c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ac387fbfba05fb1a980dfae189ec2b056670f47c1da3f1427577da56d82b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://332151d1c124ea8567578026dab3d484f0866133cba6537a923a2fb8ef6a07e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb5d516aecdb6ccb8628ddfd085c1040d4a298f
1675df47e2582fb5139996e20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eed0232a0273f5326cd922fc000a06527761fc723442190c0e5acfc40bfb3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://843e099fe1ed03b431523e327013b3a50cb666b200623a9f03eb20df5a250136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://843e099fe1ed03b431523e327013b3a50cb666b200623a9f03eb20df5a250136\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7013313bd37d8a90c3cdcff6bbcc5c476c4ec68773670309a76c758f8adfbe36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7013313bd37d8a90c3cdcff6bbcc5c476c4ec68773670309a76c758f8adfbe36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://76289094a837baf714d3d99145d3ba4b0ddf22eee3aee73a0f0cbf8edbe91c3d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76289094a837baf714d3d99145d3ba4b0ddf22eee3aee73a0f0cbf8edbe91c3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.525738 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.525774 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.525784 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.525801 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.525811 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:25Z","lastTransitionTime":"2025-10-08T14:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.529431 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0425607d-daff-477f-a631-ea4382a9b232\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabb3f25374f6007567d9a60ab4d0822bc794d35f041224acfc052bacd8afafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://152099ed4ce638fdc2456017561a374fbf3df1a53a395dd5cd281157232009e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49d1e025890389350288f68982d6bb620d03414e08fe7afc64a8a23e6292d9a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2989ee58ce1b6cd813547468308af7b0a6cab8d7572a88cc771608a8f17b5654\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.537665 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q5jzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"571dc074-be0f-40e1-92cd-96c4d94d6359\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c47ce737c478d2a7c7a9c96f0b8964cbed1d56de9b4e256a114a6b13a3a431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rppg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q5jzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.547300 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.558031 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fd6413bc075f0bc21849d120bf73e154eaa3baeda9c3250c93bdab7fa46f49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca31e21e7422642d571491223a4f899b8d27eba51469a170eb7eea5fe2439b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.572803 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.584865 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-47hzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f3de1c5bf27d90b3b9f4560a59ec4a6153df4d625d85bda278779415ae96343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:24:04Z\\\",\\\"message\\\":\\\"2025-10-08T14:23:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8c8c904-008d-47bb-b65b-6e2a0b3a6d8c\\\\n2025-10-08T14:23:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8c8c904-008d-47bb-b65b-6e2a0b3a6d8c to /host/opt/cni/bin/\\\\n2025-10-08T14:23:19Z [verbose] multus-daemon started\\\\n2025-10-08T14:23:19Z [verbose] Readiness Indicator file check\\\\n2025-10-08T14:24:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xllt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-47hzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.600812 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aad1abb7-073f-4157-b39f-ddc71fbab31d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T14:24:21Z\\\",\\\"message\\\":\\\"[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.246\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1008 14:24:21.186757 6709 services_controller.go:444] Built service openshift-network-console/networking-console-plugin LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF1008 14:24:21.186758 6709 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:21Z is after 2025-08-24T17:21:41Z]\\\\nI1008 14:24:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T14:24:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:23:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kpbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbsj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.610221 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trdbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9151abb1-b674-4a91-8b8b-00b1cdbb5bf1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1cced66e37abd0b6e4cd930827784f90d008508b2f7d7202f058c880c028de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc5l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trdbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.619963 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5306007c-dc93-41eb-8623-19adb6234d92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebd1c917e1220c97333e5377ef7dcee55acdf0a9db894eb0ed5dc5705075ceff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dc0ebdb4e16b2f9422955ffcb905fb339b1693769f313282484c51cbbbf4f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gjdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7mdh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.627645 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.627697 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.627707 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.627721 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.627730 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:25Z","lastTransitionTime":"2025-10-08T14:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.629115 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eced2d2c-83c6-4d15-b03e-ddacf0e34e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4c296167c0ab7db52a78baf21298a70a2f9c3dfc146bdecc4bcc809abac7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27753b9440b751e4763b3f99292d88307f7794740ff8c352e86b9cac4d21d3ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27753b9440b751e4763b3f99292d88307f7794740ff8c352e86b9cac4d21d3ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.638780 4624 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a8e0a93-2abf-4287-b5b4-14d480ab7808\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://920d41c8a17e549dbc9e474897693799091e5686977403a61bfef3b5373963ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66582c16b1a735d597ac77df3780cd199953116b97835aed248c275ff3eea23c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2ce3e7ea0415656d449e2f2b4677e2ff8654984ff2417e29fd6a240938eefc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18f20293d2a34ae4bcd502ed247a5976d4b1ac16b3c9447368b63fb4ef04634d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T14:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T14:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:22:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.649011 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c20f9c9157ee7ba5d9cf8aaf7d3d3fff49b37d1e18fa85176259b5406bb97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.659601 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5027d09c066d2bba277215eb8f87652782db0032627ddb3657559e60c7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T14:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.669949 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.679447 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qrmz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8abf38af-8df3-49f9-9817-b4740d2a8b4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T14:23:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqr9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T14:23:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qrmz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T14:24:25Z is after 2025-08-24T17:21:41Z" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.730131 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.730454 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.730546 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.730633 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:25 crc kubenswrapper[4624]: I1008 14:24:25.730742 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:25Z","lastTransitionTime":"2025-10-08T14:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:26 crc kubenswrapper[4624]: I1008 14:24:26.448460 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:26 crc kubenswrapper[4624]: I1008 14:24:26.448506 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:26 crc kubenswrapper[4624]: I1008 14:24:26.448518 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:26 crc kubenswrapper[4624]: I1008 14:24:26.448534 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:26 crc kubenswrapper[4624]: I1008 14:24:26.448547 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:26Z","lastTransitionTime":"2025-10-08T14:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:26 crc kubenswrapper[4624]: I1008 14:24:26.465622 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:24:26 crc kubenswrapper[4624]: E1008 14:24:26.465801 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:24:26 crc kubenswrapper[4624]: I1008 14:24:26.551000 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:26 crc kubenswrapper[4624]: I1008 14:24:26.551029 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:26 crc kubenswrapper[4624]: I1008 14:24:26.551040 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:26 crc kubenswrapper[4624]: I1008 14:24:26.551056 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:26 crc kubenswrapper[4624]: I1008 14:24:26.551068 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:26Z","lastTransitionTime":"2025-10-08T14:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.272536 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.272573 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.272583 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.272599 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.272610 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:27Z","lastTransitionTime":"2025-10-08T14:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.374701 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.374733 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.374741 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.374754 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.374765 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:27Z","lastTransitionTime":"2025-10-08T14:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.465508 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.465687 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:24:27 crc kubenswrapper[4624]: E1008 14:24:27.465788 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.465686 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:24:27 crc kubenswrapper[4624]: E1008 14:24:27.466105 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:24:27 crc kubenswrapper[4624]: E1008 14:24:27.466245 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.476552 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.476576 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.476584 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.476595 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.476603 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:27Z","lastTransitionTime":"2025-10-08T14:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.579024 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.579059 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.579068 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.579082 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:27 crc kubenswrapper[4624]: I1008 14:24:27.579092 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:27Z","lastTransitionTime":"2025-10-08T14:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:28 crc kubenswrapper[4624]: I1008 14:24:28.294495 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:28 crc kubenswrapper[4624]: I1008 14:24:28.294725 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:28 crc kubenswrapper[4624]: I1008 14:24:28.294790 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:28 crc kubenswrapper[4624]: I1008 14:24:28.294881 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:28 crc kubenswrapper[4624]: I1008 14:24:28.294946 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:28Z","lastTransitionTime":"2025-10-08T14:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:28 crc kubenswrapper[4624]: I1008 14:24:28.396547 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:28 crc kubenswrapper[4624]: I1008 14:24:28.396615 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:28 crc kubenswrapper[4624]: I1008 14:24:28.396630 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:28 crc kubenswrapper[4624]: I1008 14:24:28.396683 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:28 crc kubenswrapper[4624]: I1008 14:24:28.396698 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:28Z","lastTransitionTime":"2025-10-08T14:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:28 crc kubenswrapper[4624]: I1008 14:24:28.465233 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:24:28 crc kubenswrapper[4624]: E1008 14:24:28.465380 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:24:28 crc kubenswrapper[4624]: I1008 14:24:28.499030 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:28 crc kubenswrapper[4624]: I1008 14:24:28.499322 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:28 crc kubenswrapper[4624]: I1008 14:24:28.499447 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:28 crc kubenswrapper[4624]: I1008 14:24:28.499575 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:28 crc kubenswrapper[4624]: I1008 14:24:28.499740 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:28Z","lastTransitionTime":"2025-10-08T14:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:28 crc kubenswrapper[4624]: I1008 14:24:28.601748 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:28 crc kubenswrapper[4624]: I1008 14:24:28.602182 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:28 crc kubenswrapper[4624]: I1008 14:24:28.602301 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:28 crc kubenswrapper[4624]: I1008 14:24:28.602415 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:28 crc kubenswrapper[4624]: I1008 14:24:28.602532 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:28Z","lastTransitionTime":"2025-10-08T14:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.322718 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.322755 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.322764 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.322777 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.322786 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:29Z","lastTransitionTime":"2025-10-08T14:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.425149 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.425195 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.425205 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.425220 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.425229 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:29Z","lastTransitionTime":"2025-10-08T14:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.465026 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.465028 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:24:29 crc kubenswrapper[4624]: E1008 14:24:29.465183 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:24:29 crc kubenswrapper[4624]: E1008 14:24:29.465276 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.465036 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:24:29 crc kubenswrapper[4624]: E1008 14:24:29.465383 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.527931 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.527968 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.527980 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.527995 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.528008 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:29Z","lastTransitionTime":"2025-10-08T14:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.630417 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.630459 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.630473 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.630496 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.630510 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:29Z","lastTransitionTime":"2025-10-08T14:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.732459 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.732487 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.732494 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.732507 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.732515 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:29Z","lastTransitionTime":"2025-10-08T14:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.835047 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.835097 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.835109 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.835126 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.835138 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:29Z","lastTransitionTime":"2025-10-08T14:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.936963 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.937024 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.937037 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.937079 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:29 crc kubenswrapper[4624]: I1008 14:24:29.937090 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:29Z","lastTransitionTime":"2025-10-08T14:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.039092 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.039123 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.039132 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.039164 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.039173 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:30Z","lastTransitionTime":"2025-10-08T14:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.141967 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.141998 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.142007 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.142020 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.142028 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:30Z","lastTransitionTime":"2025-10-08T14:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.244888 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.244920 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.244930 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.244948 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.244959 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:30Z","lastTransitionTime":"2025-10-08T14:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.347928 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.347964 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.347974 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.347991 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.348002 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:30Z","lastTransitionTime":"2025-10-08T14:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.450028 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.450060 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.450069 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.450082 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.450091 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:30Z","lastTransitionTime":"2025-10-08T14:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.465464 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:24:30 crc kubenswrapper[4624]: E1008 14:24:30.465604 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.551947 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.552001 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.552011 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.552026 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.552035 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:30Z","lastTransitionTime":"2025-10-08T14:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.655408 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.655447 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.655456 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.655470 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.655481 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:30Z","lastTransitionTime":"2025-10-08T14:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.757911 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.757952 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.757962 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.757976 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.757985 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:30Z","lastTransitionTime":"2025-10-08T14:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.859716 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.859791 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.859802 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.859816 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.859828 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:30Z","lastTransitionTime":"2025-10-08T14:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.962070 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.962388 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.962408 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.962427 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:30 crc kubenswrapper[4624]: I1008 14:24:30.962439 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:30Z","lastTransitionTime":"2025-10-08T14:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.064804 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.064841 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.064849 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.064862 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.064872 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:31Z","lastTransitionTime":"2025-10-08T14:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.166685 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.166725 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.166734 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.166749 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.166767 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:31Z","lastTransitionTime":"2025-10-08T14:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.269160 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.269197 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.269206 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.269220 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.269229 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:31Z","lastTransitionTime":"2025-10-08T14:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.370886 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.370918 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.370926 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.370939 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.370950 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:31Z","lastTransitionTime":"2025-10-08T14:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.464839 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.464891 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.464890 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:24:31 crc kubenswrapper[4624]: E1008 14:24:31.465055 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:24:31 crc kubenswrapper[4624]: E1008 14:24:31.465097 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:24:31 crc kubenswrapper[4624]: E1008 14:24:31.465178 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.472671 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.472721 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.472729 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.472739 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.472747 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:31Z","lastTransitionTime":"2025-10-08T14:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.575378 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.575434 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.575445 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.575461 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.575474 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:31Z","lastTransitionTime":"2025-10-08T14:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.680656 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.680773 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.680784 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.680806 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.680822 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:31Z","lastTransitionTime":"2025-10-08T14:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.784132 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.784210 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.784225 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.784269 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.784284 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:31Z","lastTransitionTime":"2025-10-08T14:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.887398 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.887443 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.887457 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.887473 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.887486 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:31Z","lastTransitionTime":"2025-10-08T14:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.989568 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.989608 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.989619 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.989647 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:31 crc kubenswrapper[4624]: I1008 14:24:31.989657 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:31Z","lastTransitionTime":"2025-10-08T14:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.091514 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.091545 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.091553 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.091564 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.091572 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:32Z","lastTransitionTime":"2025-10-08T14:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.194059 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.194116 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.194126 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.194141 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.194153 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:32Z","lastTransitionTime":"2025-10-08T14:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.296508 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.296587 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.296599 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.296614 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.296623 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:32Z","lastTransitionTime":"2025-10-08T14:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.399031 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.399064 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.399105 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.399117 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.399126 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:32Z","lastTransitionTime":"2025-10-08T14:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.465136 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:24:32 crc kubenswrapper[4624]: E1008 14:24:32.465323 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.501829 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.501867 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.501875 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.501889 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.501898 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:32Z","lastTransitionTime":"2025-10-08T14:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.604407 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.604442 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.604450 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.604462 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.604471 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:32Z","lastTransitionTime":"2025-10-08T14:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.706600 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.706688 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.706700 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.706715 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.706723 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:32Z","lastTransitionTime":"2025-10-08T14:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.809174 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.809226 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.809239 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.809257 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.809267 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:32Z","lastTransitionTime":"2025-10-08T14:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.911445 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.911484 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.911493 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.911512 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:32 crc kubenswrapper[4624]: I1008 14:24:32.911524 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:32Z","lastTransitionTime":"2025-10-08T14:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.014874 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.014900 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.014908 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.014929 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.014938 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:33Z","lastTransitionTime":"2025-10-08T14:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.118033 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.118067 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.118081 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.118096 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.118105 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:33Z","lastTransitionTime":"2025-10-08T14:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.220828 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.220854 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.220865 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.220879 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.220893 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:33Z","lastTransitionTime":"2025-10-08T14:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.323253 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.323291 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.323311 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.323330 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.323343 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:33Z","lastTransitionTime":"2025-10-08T14:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.425922 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.426219 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.426314 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.426399 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.426479 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:33Z","lastTransitionTime":"2025-10-08T14:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.465498 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.465556 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.465746 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:24:33 crc kubenswrapper[4624]: E1008 14:24:33.465873 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:24:33 crc kubenswrapper[4624]: E1008 14:24:33.465951 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:24:33 crc kubenswrapper[4624]: E1008 14:24:33.466052 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.528914 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.528945 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.528954 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.528968 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.528978 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:33Z","lastTransitionTime":"2025-10-08T14:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.631080 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.631114 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.631124 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.631139 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.631149 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:33Z","lastTransitionTime":"2025-10-08T14:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.733974 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.734018 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.734030 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.734050 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.734060 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:33Z","lastTransitionTime":"2025-10-08T14:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.836018 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.836551 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.836685 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.836860 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.836945 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:33Z","lastTransitionTime":"2025-10-08T14:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.939430 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.939487 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.939501 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.939519 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:33 crc kubenswrapper[4624]: I1008 14:24:33.939530 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:33Z","lastTransitionTime":"2025-10-08T14:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.041807 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.042133 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.042337 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.042417 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.042481 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:34Z","lastTransitionTime":"2025-10-08T14:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.103919 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.104019 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.104040 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.104073 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.104129 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T14:24:34Z","lastTransitionTime":"2025-10-08T14:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.144531 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs\") pod \"network-metrics-daemon-qrmz6\" (UID: \"8abf38af-8df3-49f9-9817-b4740d2a8b4a\") " pod="openshift-multus/network-metrics-daemon-qrmz6"
Oct 08 14:24:34 crc kubenswrapper[4624]: E1008 14:24:34.144738 4624 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 08 14:24:34 crc kubenswrapper[4624]: E1008 14:24:34.144777 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs podName:8abf38af-8df3-49f9-9817-b4740d2a8b4a nodeName:}" failed. No retries permitted until 2025-10-08 14:25:38.144763861 +0000 UTC m=+163.295698938 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs") pod "network-metrics-daemon-qrmz6" (UID: "8abf38af-8df3-49f9-9817-b4740d2a8b4a") : object "openshift-multus"/"metrics-daemon-secret" not registered
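The durationBeforeRetry 1m4s in the nestedpendingoperations.go:348 entry is consistent with a per-operation doubling backoff: 500ms, 1s, 2s, and so on up to 64s, capped near two minutes. Those constants match kubelet's exponential backoff for failed volume operations, but treat them here as assumptions rather than facts this log states. A sketch of the schedule:

    // backoff.go - reproduce the retry delays behind "durationBeforeRetry 1m4s".
    // Assumed constants: 500ms base, doubling, ~2m cap.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond
        maxDelay := 2*time.Minute + 2*time.Second
        for failure := 1; failure <= 10; failure++ {
            // The 8th consecutive failure prints 1m4s, matching the entry above.
            fmt.Printf("failure %2d -> next retry in %s\n", failure, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }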
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.146030 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-776kc"]
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.147662 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-776kc"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.150152 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.152329 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.152681 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.152899 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.162350 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-q5jzf" podStartSLOduration=79.162333323 podStartE2EDuration="1m19.162333323s" podCreationTimestamp="2025-10-08 14:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:24:34.162293692 +0000 UTC m=+99.313228769" watchObservedRunningTime="2025-10-08 14:24:34.162333323 +0000 UTC m=+99.313268400"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.199541 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=11.199522801 podStartE2EDuration="11.199522801s" podCreationTimestamp="2025-10-08 14:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:24:34.198943346 +0000 UTC m=+99.349878483" watchObservedRunningTime="2025-10-08 14:24:34.199522801 +0000 UTC m=+99.350457878"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.217743 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=74.217721619 podStartE2EDuration="1m14.217721619s" podCreationTimestamp="2025-10-08 14:23:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:24:34.216171909 +0000 UTC m=+99.367107006" watchObservedRunningTime="2025-10-08 14:24:34.217721619 +0000 UTC m=+99.368656696"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.245075 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cb69d534-ab87-4f09-9113-f407020c6656-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-776kc\" (UID: \"cb69d534-ab87-4f09-9113-f407020c6656\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-776kc"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.245161 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cb69d534-ab87-4f09-9113-f407020c6656-service-ca\") pod \"cluster-version-operator-5c965bbfc6-776kc\" (UID: \"cb69d534-ab87-4f09-9113-f407020c6656\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-776kc"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.245194 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb69d534-ab87-4f09-9113-f407020c6656-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-776kc\" (UID: \"cb69d534-ab87-4f09-9113-f407020c6656\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-776kc"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.245222 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb69d534-ab87-4f09-9113-f407020c6656-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-776kc\" (UID: \"cb69d534-ab87-4f09-9113-f407020c6656\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-776kc"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.245243 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cb69d534-ab87-4f09-9113-f407020c6656-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-776kc\" (UID: \"cb69d534-ab87-4f09-9113-f407020c6656\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-776kc"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.257451 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-47hzf" podStartSLOduration=78.257429972 podStartE2EDuration="1m18.257429972s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:24:34.230409406 +0000 UTC m=+99.381344483" watchObservedRunningTime="2025-10-08 14:24:34.257429972 +0000 UTC m=+99.408365049"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.287453 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-trdbp" podStartSLOduration=78.287434494 podStartE2EDuration="1m18.287434494s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:24:34.275966329 +0000 UTC m=+99.426901406" watchObservedRunningTime="2025-10-08 14:24:34.287434494 +0000 UTC m=+99.438369571"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.302354 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7mdh5" podStartSLOduration=78.302330668 podStartE2EDuration="1m18.302330668s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:24:34.287829585 +0000 UTC m=+99.438764662" watchObservedRunningTime="2025-10-08 14:24:34.302330668 +0000 UTC m=+99.453265745"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.345932 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cb69d534-ab87-4f09-9113-f407020c6656-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-776kc\" (UID: \"cb69d534-ab87-4f09-9113-f407020c6656\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-776kc"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.346013 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cb69d534-ab87-4f09-9113-f407020c6656-service-ca\") pod \"cluster-version-operator-5c965bbfc6-776kc\" (UID: \"cb69d534-ab87-4f09-9113-f407020c6656\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-776kc"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.346042 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb69d534-ab87-4f09-9113-f407020c6656-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-776kc\" (UID: \"cb69d534-ab87-4f09-9113-f407020c6656\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-776kc"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.346077 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb69d534-ab87-4f09-9113-f407020c6656-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-776kc\" (UID: \"cb69d534-ab87-4f09-9113-f407020c6656\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-776kc"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.346101 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cb69d534-ab87-4f09-9113-f407020c6656-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-776kc\" (UID: \"cb69d534-ab87-4f09-9113-f407020c6656\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-776kc"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.346178 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cb69d534-ab87-4f09-9113-f407020c6656-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-776kc\" (UID: \"cb69d534-ab87-4f09-9113-f407020c6656\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-776kc"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.346222 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cb69d534-ab87-4f09-9113-f407020c6656-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-776kc\" (UID: \"cb69d534-ab87-4f09-9113-f407020c6656\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-776kc"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.347111 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cb69d534-ab87-4f09-9113-f407020c6656-service-ca\") pod \"cluster-version-operator-5c965bbfc6-776kc\" (UID: \"cb69d534-ab87-4f09-9113-f407020c6656\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-776kc"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.352792 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb69d534-ab87-4f09-9113-f407020c6656-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-776kc\" (UID: \"cb69d534-ab87-4f09-9113-f407020c6656\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-776kc"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.371964 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb69d534-ab87-4f09-9113-f407020c6656-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-776kc\" (UID: \"cb69d534-ab87-4f09-9113-f407020c6656\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-776kc"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.407071 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=13.407051755 podStartE2EDuration="13.407051755s" podCreationTimestamp="2025-10-08 14:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:24:34.39403419 +0000 UTC m=+99.544969277" watchObservedRunningTime="2025-10-08 14:24:34.407051755 +0000 UTC m=+99.557986832"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.425117 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=49.42509999 podStartE2EDuration="49.42509999s" podCreationTimestamp="2025-10-08 14:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:24:34.407005183 +0000 UTC m=+99.557940260" watchObservedRunningTime="2025-10-08 14:24:34.42509999 +0000 UTC m=+99.576035067"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.425424 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-gfq4z" podStartSLOduration=78.425421298 podStartE2EDuration="1m18.425421298s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:24:34.423769645 +0000 UTC m=+99.574704722" watchObservedRunningTime="2025-10-08 14:24:34.425421298 +0000 UTC m=+99.576356365"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.442243 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.44222567 podStartE2EDuration="1m19.44222567s" podCreationTimestamp="2025-10-08 14:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:24:34.441507492 +0000 UTC m=+99.592442569" watchObservedRunningTime="2025-10-08 14:24:34.44222567 +0000 UTC m=+99.593160747"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.451909 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podStartSLOduration=78.451888149 podStartE2EDuration="1m18.451888149s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:24:34.451174961 +0000 UTC m=+99.602110048" watchObservedRunningTime="2025-10-08 14:24:34.451888149 +0000 UTC m=+99.602823226"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.463718 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-776kc"
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.466167 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6"
Oct 08 14:24:34 crc kubenswrapper[4624]: E1008 14:24:34.466408 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a"
Oct 08 14:24:34 crc kubenswrapper[4624]: W1008 14:24:34.480217 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb69d534_ab87_4f09_9113_f407020c6656.slice/crio-83c0940b27c39362e7a44f9f6460cee7f7419fa71a9e8cc71f30b3a1e0649bd4 WatchSource:0}: Error finding container 83c0940b27c39362e7a44f9f6460cee7f7419fa71a9e8cc71f30b3a1e0649bd4: Status 404 returned error can't find the container with id 83c0940b27c39362e7a44f9f6460cee7f7419fa71a9e8cc71f30b3a1e0649bd4
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.953934 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-776kc" event={"ID":"cb69d534-ab87-4f09-9113-f407020c6656","Type":"ContainerStarted","Data":"5d031f49d566c92838d7d155f358fac91bc051f6af3517f8dbb8f7ef86d01b95"}
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.953982 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-776kc" event={"ID":"cb69d534-ab87-4f09-9113-f407020c6656","Type":"ContainerStarted","Data":"83c0940b27c39362e7a44f9f6460cee7f7419fa71a9e8cc71f30b3a1e0649bd4"}
Oct 08 14:24:34 crc kubenswrapper[4624]: I1008 14:24:34.967931 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-776kc" podStartSLOduration=78.967904357 podStartE2EDuration="1m18.967904357s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:24:34.967340433 +0000 UTC m=+100.118275510" watchObservedRunningTime="2025-10-08 14:24:34.967904357 +0000 UTC m=+100.118839434"
Oct 08 14:24:35 crc kubenswrapper[4624]: I1008 14:24:35.464956 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 14:24:35 crc kubenswrapper[4624]: I1008 14:24:35.464958 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 14:24:35 crc kubenswrapper[4624]: I1008 14:24:35.465965 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
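The "SyncLoop (PLEG): event for pod" entries come from the kubelet's pod lifecycle event generator, which periodically relists containers and converts state differences into ContainerStarted and ContainerDied events. A toy relist over the two cluster-version-operator container IDs above (hypothetical data structures and truncated IDs, not kubelet's actual implementation):

    // pleg.go - toy relist in the spirit of the Generic PLEG: diff two
    // snapshots of running containers and emit Started/Died events.
    package main

    import "fmt"

    func relist(old, cur map[string]bool) {
        for id := range cur {
            if !old[id] {
                fmt.Printf("event ContainerStarted %s\n", id)
            }
        }
        for id := range old {
            if !cur[id] {
                fmt.Printf("event ContainerDied %s\n", id)
            }
        }
    }

    func main() {
        // IDs truncated from the log above: the pod sandbox, then the CVO container.
        before := map[string]bool{"83c0940b": true}
        after := map[string]bool{"83c0940b": true, "5d031f49": true}
        relist(before, after) // prints: event ContainerStarted 5d031f49
    }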
Oct 08 14:24:35 crc kubenswrapper[4624]: E1008 14:24:35.465953 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 08 14:24:35 crc kubenswrapper[4624]: E1008 14:24:35.466083 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 08 14:24:35 crc kubenswrapper[4624]: E1008 14:24:35.466159 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 08 14:24:36 crc kubenswrapper[4624]: I1008 14:24:36.464719 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6"
Oct 08 14:24:36 crc kubenswrapper[4624]: E1008 14:24:36.464898 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a"
Oct 08 14:24:37 crc kubenswrapper[4624]: I1008 14:24:37.465550 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 14:24:37 crc kubenswrapper[4624]: E1008 14:24:37.465681 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 08 14:24:37 crc kubenswrapper[4624]: I1008 14:24:37.465564 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 14:24:37 crc kubenswrapper[4624]: E1008 14:24:37.465789 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 08 14:24:37 crc kubenswrapper[4624]: I1008 14:24:37.465789 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 14:24:37 crc kubenswrapper[4624]: E1008 14:24:37.465862 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 08 14:24:38 crc kubenswrapper[4624]: I1008 14:24:38.464919 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6"
Oct 08 14:24:38 crc kubenswrapper[4624]: E1008 14:24:38.465469 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a"
Oct 08 14:24:38 crc kubenswrapper[4624]: I1008 14:24:38.466187 4624 scope.go:117] "RemoveContainer" containerID="1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c"
Oct 08 14:24:38 crc kubenswrapper[4624]: E1008 14:24:38.466378 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d"
Oct 08 14:24:39 crc kubenswrapper[4624]: I1008 14:24:39.465503 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 14:24:39 crc kubenswrapper[4624]: I1008 14:24:39.465542 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 14:24:39 crc kubenswrapper[4624]: I1008 14:24:39.465503 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 14:24:39 crc kubenswrapper[4624]: E1008 14:24:39.465623 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 08 14:24:39 crc kubenswrapper[4624]: E1008 14:24:39.465765 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 08 14:24:39 crc kubenswrapper[4624]: E1008 14:24:39.465827 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 08 14:24:40 crc kubenswrapper[4624]: I1008 14:24:40.465690 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6"
Oct 08 14:24:40 crc kubenswrapper[4624]: E1008 14:24:40.466409 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a"
Oct 08 14:24:41 crc kubenswrapper[4624]: I1008 14:24:41.464856 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 14:24:41 crc kubenswrapper[4624]: I1008 14:24:41.464946 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 14:24:41 crc kubenswrapper[4624]: E1008 14:24:41.464989 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 08 14:24:41 crc kubenswrapper[4624]: I1008 14:24:41.464856 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 14:24:41 crc kubenswrapper[4624]: E1008 14:24:41.465079 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 08 14:24:41 crc kubenswrapper[4624]: E1008 14:24:41.465179 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 08 14:24:42 crc kubenswrapper[4624]: I1008 14:24:42.464822 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6"
Oct 08 14:24:42 crc kubenswrapper[4624]: E1008 14:24:42.464993 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a"
Oct 08 14:24:43 crc kubenswrapper[4624]: I1008 14:24:43.465221 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 14:24:43 crc kubenswrapper[4624]: I1008 14:24:43.465267 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 14:24:43 crc kubenswrapper[4624]: I1008 14:24:43.465227 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 14:24:43 crc kubenswrapper[4624]: E1008 14:24:43.465366 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 08 14:24:43 crc kubenswrapper[4624]: E1008 14:24:43.465430 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 08 14:24:43 crc kubenswrapper[4624]: E1008 14:24:43.465492 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 08 14:24:44 crc kubenswrapper[4624]: I1008 14:24:44.465139 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6"
Oct 08 14:24:44 crc kubenswrapper[4624]: E1008 14:24:44.465336 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a"
Oct 08 14:24:45 crc kubenswrapper[4624]: I1008 14:24:45.464708 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 14:24:45 crc kubenswrapper[4624]: I1008 14:24:45.464783 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 14:24:45 crc kubenswrapper[4624]: E1008 14:24:45.465745 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 08 14:24:45 crc kubenswrapper[4624]: E1008 14:24:45.466235 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 08 14:24:45 crc kubenswrapper[4624]: I1008 14:24:45.466353 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 14:24:45 crc kubenswrapper[4624]: E1008 14:24:45.466483 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 08 14:24:46 crc kubenswrapper[4624]: I1008 14:24:46.465572 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6"
Oct 08 14:24:46 crc kubenswrapper[4624]: E1008 14:24:46.465729 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a"
Oct 08 14:24:47 crc kubenswrapper[4624]: I1008 14:24:47.465842 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 14:24:47 crc kubenswrapper[4624]: I1008 14:24:47.465947 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 14:24:47 crc kubenswrapper[4624]: I1008 14:24:47.465868 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 14:24:47 crc kubenswrapper[4624]: E1008 14:24:47.466075 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 08 14:24:47 crc kubenswrapper[4624]: E1008 14:24:47.466226 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 08 14:24:47 crc kubenswrapper[4624]: E1008 14:24:47.466370 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 08 14:24:48 crc kubenswrapper[4624]: I1008 14:24:48.465033 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6"
Oct 08 14:24:48 crc kubenswrapper[4624]: E1008 14:24:48.465287 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a"
Oct 08 14:24:49 crc kubenswrapper[4624]: I1008 14:24:49.465675 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 14:24:49 crc kubenswrapper[4624]: I1008 14:24:49.465826 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 14:24:49 crc kubenswrapper[4624]: I1008 14:24:49.465872 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 14:24:49 crc kubenswrapper[4624]: E1008 14:24:49.465936 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:24:49 crc kubenswrapper[4624]: E1008 14:24:49.465974 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:24:49 crc kubenswrapper[4624]: E1008 14:24:49.466072 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:24:50 crc kubenswrapper[4624]: I1008 14:24:50.465566 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:24:50 crc kubenswrapper[4624]: E1008 14:24:50.465925 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:24:50 crc kubenswrapper[4624]: I1008 14:24:50.466167 4624 scope.go:117] "RemoveContainer" containerID="1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c" Oct 08 14:24:50 crc kubenswrapper[4624]: E1008 14:24:50.466300 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jbsj6_openshift-ovn-kubernetes(aad1abb7-073f-4157-b39f-ddc71fbab31d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" Oct 08 14:24:50 crc kubenswrapper[4624]: I1008 14:24:50.997484 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-47hzf_48aee8dd-6063-4d3c-b65a-f37ce1ccdb82/kube-multus/1.log" Oct 08 14:24:50 crc kubenswrapper[4624]: I1008 14:24:50.998139 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-47hzf_48aee8dd-6063-4d3c-b65a-f37ce1ccdb82/kube-multus/0.log" Oct 08 14:24:50 crc kubenswrapper[4624]: I1008 14:24:50.998169 4624 generic.go:334] "Generic (PLEG): container finished" podID="48aee8dd-6063-4d3c-b65a-f37ce1ccdb82" containerID="1f3de1c5bf27d90b3b9f4560a59ec4a6153df4d625d85bda278779415ae96343" exitCode=1 Oct 08 14:24:50 crc kubenswrapper[4624]: I1008 14:24:50.998197 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-47hzf" event={"ID":"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82","Type":"ContainerDied","Data":"1f3de1c5bf27d90b3b9f4560a59ec4a6153df4d625d85bda278779415ae96343"} Oct 08 14:24:50 crc kubenswrapper[4624]: I1008 14:24:50.998230 4624 scope.go:117] "RemoveContainer" containerID="c2fe3272c927481605c3ea1b415614ba55256ae9ed4d2d4143eb35eb3648b56b" Oct 08 14:24:50 crc 
kubenswrapper[4624]: I1008 14:24:50.998558 4624 scope.go:117] "RemoveContainer" containerID="1f3de1c5bf27d90b3b9f4560a59ec4a6153df4d625d85bda278779415ae96343" Oct 08 14:24:50 crc kubenswrapper[4624]: E1008 14:24:50.998761 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-47hzf_openshift-multus(48aee8dd-6063-4d3c-b65a-f37ce1ccdb82)\"" pod="openshift-multus/multus-47hzf" podUID="48aee8dd-6063-4d3c-b65a-f37ce1ccdb82" Oct 08 14:24:51 crc kubenswrapper[4624]: I1008 14:24:51.465143 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:24:51 crc kubenswrapper[4624]: I1008 14:24:51.465252 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:24:51 crc kubenswrapper[4624]: E1008 14:24:51.465298 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:24:51 crc kubenswrapper[4624]: E1008 14:24:51.465399 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:24:51 crc kubenswrapper[4624]: I1008 14:24:51.465450 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:24:51 crc kubenswrapper[4624]: E1008 14:24:51.465611 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:24:52 crc kubenswrapper[4624]: I1008 14:24:52.002554 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-47hzf_48aee8dd-6063-4d3c-b65a-f37ce1ccdb82/kube-multus/1.log" Oct 08 14:24:52 crc kubenswrapper[4624]: I1008 14:24:52.465587 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:24:52 crc kubenswrapper[4624]: E1008 14:24:52.465779 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
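kube-multus is at "back-off 10s" (its first restart) while ovnkube-controller is already at "back-off 40s" (its third). CrashLoopBackOff delays double per restart from 10s and cap at 5m; those are the commonly documented kubelet defaults, taken here as an assumption that the two logged values happen to fit. A sketch of the schedule:

    // crashloop.go - the restart delays implied by "back-off 10s" / "back-off 40s".
    // Assumed constants: 10s base, doubling per restart, 5m cap.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 10 * time.Second
        for restart := 1; restart <= 6; restart++ {
            // restart 1 prints 10s (kube-multus); restart 3 prints 40s (ovnkube-controller).
            fmt.Printf("restart %d -> back-off %s\n", restart, delay)
            delay *= 2
            if delay > 5*time.Minute {
                delay = 5 * time.Minute
            }
        }
    }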
pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:24:53 crc kubenswrapper[4624]: I1008 14:24:53.465064 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:24:53 crc kubenswrapper[4624]: E1008 14:24:53.465182 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:24:53 crc kubenswrapper[4624]: I1008 14:24:53.465067 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:24:53 crc kubenswrapper[4624]: I1008 14:24:53.465018 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:24:53 crc kubenswrapper[4624]: E1008 14:24:53.465346 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:24:53 crc kubenswrapper[4624]: E1008 14:24:53.465388 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:24:54 crc kubenswrapper[4624]: I1008 14:24:54.464683 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:24:54 crc kubenswrapper[4624]: E1008 14:24:54.465063 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:24:55 crc kubenswrapper[4624]: I1008 14:24:55.464969 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:24:55 crc kubenswrapper[4624]: E1008 14:24:55.466487 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:24:55 crc kubenswrapper[4624]: I1008 14:24:55.466513 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:24:55 crc kubenswrapper[4624]: I1008 14:24:55.466531 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:24:55 crc kubenswrapper[4624]: E1008 14:24:55.466797 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:24:55 crc kubenswrapper[4624]: E1008 14:24:55.466897 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:24:55 crc kubenswrapper[4624]: E1008 14:24:55.485126 4624 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Oct 08 14:24:55 crc kubenswrapper[4624]: E1008 14:24:55.565660 4624 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 08 14:24:56 crc kubenswrapper[4624]: I1008 14:24:56.465346 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:24:56 crc kubenswrapper[4624]: E1008 14:24:56.465576 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:24:57 crc kubenswrapper[4624]: I1008 14:24:57.465163 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:24:57 crc kubenswrapper[4624]: I1008 14:24:57.465219 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:24:57 crc kubenswrapper[4624]: E1008 14:24:57.465434 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:24:57 crc kubenswrapper[4624]: E1008 14:24:57.465701 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:24:57 crc kubenswrapper[4624]: I1008 14:24:57.465997 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:24:57 crc kubenswrapper[4624]: E1008 14:24:57.466170 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:24:58 crc kubenswrapper[4624]: I1008 14:24:58.465608 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:24:58 crc kubenswrapper[4624]: E1008 14:24:58.465789 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:24:59 crc kubenswrapper[4624]: I1008 14:24:59.465213 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:24:59 crc kubenswrapper[4624]: I1008 14:24:59.465223 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:24:59 crc kubenswrapper[4624]: E1008 14:24:59.465349 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:24:59 crc kubenswrapper[4624]: E1008 14:24:59.465399 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:24:59 crc kubenswrapper[4624]: I1008 14:24:59.465236 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:24:59 crc kubenswrapper[4624]: E1008 14:24:59.465554 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:25:00 crc kubenswrapper[4624]: I1008 14:25:00.465700 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:25:00 crc kubenswrapper[4624]: E1008 14:25:00.465901 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:25:00 crc kubenswrapper[4624]: E1008 14:25:00.566602 4624 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 08 14:25:01 crc kubenswrapper[4624]: I1008 14:25:01.465330 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:25:01 crc kubenswrapper[4624]: E1008 14:25:01.465468 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:25:01 crc kubenswrapper[4624]: I1008 14:25:01.465496 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:25:01 crc kubenswrapper[4624]: I1008 14:25:01.465780 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:25:01 crc kubenswrapper[4624]: E1008 14:25:01.465887 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:25:01 crc kubenswrapper[4624]: E1008 14:25:01.465916 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:25:01 crc kubenswrapper[4624]: I1008 14:25:01.466179 4624 scope.go:117] "RemoveContainer" containerID="1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c" Oct 08 14:25:02 crc kubenswrapper[4624]: I1008 14:25:02.030721 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbsj6_aad1abb7-073f-4157-b39f-ddc71fbab31d/ovnkube-controller/3.log" Oct 08 14:25:02 crc kubenswrapper[4624]: I1008 14:25:02.033065 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerStarted","Data":"acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415"} Oct 08 14:25:02 crc kubenswrapper[4624]: I1008 14:25:02.033436 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:25:02 crc kubenswrapper[4624]: I1008 14:25:02.465290 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:25:02 crc kubenswrapper[4624]: E1008 14:25:02.465522 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:25:02 crc kubenswrapper[4624]: I1008 14:25:02.525560 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" podStartSLOduration=106.525538753 podStartE2EDuration="1m46.525538753s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:02.065497168 +0000 UTC m=+127.216432265" watchObservedRunningTime="2025-10-08 14:25:02.525538753 +0000 UTC m=+127.676473840" Oct 08 14:25:02 crc kubenswrapper[4624]: I1008 14:25:02.527074 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qrmz6"] Oct 08 14:25:03 crc kubenswrapper[4624]: I1008 14:25:03.036932 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:25:03 crc kubenswrapper[4624]: E1008 14:25:03.037069 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:25:03 crc kubenswrapper[4624]: I1008 14:25:03.465565 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:25:03 crc kubenswrapper[4624]: I1008 14:25:03.465701 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:25:03 crc kubenswrapper[4624]: E1008 14:25:03.465705 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:25:03 crc kubenswrapper[4624]: I1008 14:25:03.465744 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:25:03 crc kubenswrapper[4624]: E1008 14:25:03.465937 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:25:03 crc kubenswrapper[4624]: E1008 14:25:03.465788 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:25:04 crc kubenswrapper[4624]: I1008 14:25:04.465541 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:25:04 crc kubenswrapper[4624]: E1008 14:25:04.466044 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:25:05 crc kubenswrapper[4624]: I1008 14:25:05.465377 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:25:05 crc kubenswrapper[4624]: E1008 14:25:05.466649 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:25:05 crc kubenswrapper[4624]: I1008 14:25:05.466692 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:25:05 crc kubenswrapper[4624]: I1008 14:25:05.466792 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:25:05 crc kubenswrapper[4624]: E1008 14:25:05.466893 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:25:05 crc kubenswrapper[4624]: E1008 14:25:05.467116 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:25:05 crc kubenswrapper[4624]: I1008 14:25:05.467218 4624 scope.go:117] "RemoveContainer" containerID="1f3de1c5bf27d90b3b9f4560a59ec4a6153df4d625d85bda278779415ae96343" Oct 08 14:25:05 crc kubenswrapper[4624]: E1008 14:25:05.567128 4624 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 08 14:25:06 crc kubenswrapper[4624]: I1008 14:25:06.047720 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-47hzf_48aee8dd-6063-4d3c-b65a-f37ce1ccdb82/kube-multus/1.log" Oct 08 14:25:06 crc kubenswrapper[4624]: I1008 14:25:06.047770 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-47hzf" event={"ID":"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82","Type":"ContainerStarted","Data":"f14fc924f93f3772d58a3e79005fffb23253439f492c41baf67b02ee9d67f50b"} Oct 08 14:25:06 crc kubenswrapper[4624]: I1008 14:25:06.464704 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:25:06 crc kubenswrapper[4624]: E1008 14:25:06.465145 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:25:07 crc kubenswrapper[4624]: I1008 14:25:07.465202 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:25:07 crc kubenswrapper[4624]: I1008 14:25:07.465274 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:25:07 crc kubenswrapper[4624]: E1008 14:25:07.465334 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:25:07 crc kubenswrapper[4624]: I1008 14:25:07.465383 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:25:07 crc kubenswrapper[4624]: E1008 14:25:07.465520 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:25:07 crc kubenswrapper[4624]: E1008 14:25:07.465673 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:25:08 crc kubenswrapper[4624]: I1008 14:25:08.465705 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:25:08 crc kubenswrapper[4624]: E1008 14:25:08.465899 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:25:09 crc kubenswrapper[4624]: I1008 14:25:09.465729 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:25:09 crc kubenswrapper[4624]: I1008 14:25:09.465729 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:25:09 crc kubenswrapper[4624]: E1008 14:25:09.465897 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 14:25:09 crc kubenswrapper[4624]: E1008 14:25:09.466037 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 14:25:09 crc kubenswrapper[4624]: I1008 14:25:09.466509 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:25:09 crc kubenswrapper[4624]: E1008 14:25:09.466716 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 14:25:10 crc kubenswrapper[4624]: I1008 14:25:10.465533 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:25:10 crc kubenswrapper[4624]: E1008 14:25:10.465776 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qrmz6" podUID="8abf38af-8df3-49f9-9817-b4740d2a8b4a" Oct 08 14:25:11 crc kubenswrapper[4624]: I1008 14:25:11.464935 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:25:11 crc kubenswrapper[4624]: I1008 14:25:11.464966 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 14:25:11 crc kubenswrapper[4624]: I1008 14:25:11.464951 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 14:25:11 crc kubenswrapper[4624]: I1008 14:25:11.467019 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 08 14:25:11 crc kubenswrapper[4624]: I1008 14:25:11.467393 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Oct 08 14:25:11 crc kubenswrapper[4624]: I1008 14:25:11.470898 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 08 14:25:11 crc kubenswrapper[4624]: I1008 14:25:11.473409 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Oct 08 14:25:12 crc kubenswrapper[4624]: I1008 14:25:12.465430 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:25:12 crc kubenswrapper[4624]: I1008 14:25:12.467835 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 08 14:25:12 crc kubenswrapper[4624]: I1008 14:25:12.471339 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.854750 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.886421 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bbffn"] Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.887199 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bbffn" Oct 08 14:25:14 crc kubenswrapper[4624]: W1008 14:25:14.892341 4624 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Oct 08 14:25:14 crc kubenswrapper[4624]: E1008 14:25:14.892583 4624 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.892773 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.892852 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.893103 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.893320 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.893104 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.893650 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-d66t2"] Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.894233 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-d66t2" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.895155 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cxb9h"] Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.895467 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp"] Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.895910 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.896272 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cxb9h" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.897343 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc"] Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.897745 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-kl87s"] Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.898088 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-kl87s" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.898143 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4k9mv"] Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.899145 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4k9mv" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.899281 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.899861 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.900198 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.900379 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.900491 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.906661 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-rgn2q"] Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.907300 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gkmv8"] Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.908021 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gkmv8" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.907514 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.910656 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9526z"] Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.911066 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9526z" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.923087 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.923577 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.923544 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.954201 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx2s6\" (UniqueName: \"kubernetes.io/projected/0f1c0e12-944e-4afe-a75d-fb0f25d3ac29-kube-api-access-wx2s6\") pod \"authentication-operator-69f744f599-d66t2\" (UID: \"0f1c0e12-944e-4afe-a75d-fb0f25d3ac29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d66t2" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.954565 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/100b758b-a285-49a0-a5ec-0b565dce5e1a-images\") pod \"machine-api-operator-5694c8668f-bbffn\" (UID: \"100b758b-a285-49a0-a5ec-0b565dce5e1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bbffn" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.954716 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/100b758b-a285-49a0-a5ec-0b565dce5e1a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bbffn\" (UID: \"100b758b-a285-49a0-a5ec-0b565dce5e1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bbffn" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.954795 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt6xj\" (UniqueName: \"kubernetes.io/projected/bb07e9fd-b48d-4a87-8be6-01b98e889cff-kube-api-access-wt6xj\") pod \"openshift-controller-manager-operator-756b6f6bc6-cxb9h\" (UID: \"bb07e9fd-b48d-4a87-8be6-01b98e889cff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cxb9h" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.954873 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9lhd\" (UniqueName: \"kubernetes.io/projected/60fc8c95-75b8-4032-b407-c0b21022da37-kube-api-access-s9lhd\") pod \"downloads-7954f5f757-kl87s\" (UID: \"60fc8c95-75b8-4032-b407-c0b21022da37\") " 
pod="openshift-console/downloads-7954f5f757-kl87s" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.954948 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94jx9\" (UniqueName: \"kubernetes.io/projected/c6bcc8a7-80c7-4b6b-8b40-db870b0ea3f0-kube-api-access-94jx9\") pod \"cluster-samples-operator-665b6dd947-4k9mv\" (UID: \"c6bcc8a7-80c7-4b6b-8b40-db870b0ea3f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4k9mv" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.955026 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f8eefe6-4147-465d-adda-f0ddf9530abb-config\") pod \"console-operator-58897d9998-gkmv8\" (UID: \"9f8eefe6-4147-465d-adda-f0ddf9530abb\") " pod="openshift-console-operator/console-operator-58897d9998-gkmv8" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.955093 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/82676f42-aabb-4cee-b836-790a48dd9a2e-console-serving-cert\") pod \"console-f9d7485db-rgn2q\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.955170 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nswfx\" (UniqueName: \"kubernetes.io/projected/100b758b-a285-49a0-a5ec-0b565dce5e1a-kube-api-access-nswfx\") pod \"machine-api-operator-5694c8668f-bbffn\" (UID: \"100b758b-a285-49a0-a5ec-0b565dce5e1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bbffn" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.955246 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/100b758b-a285-49a0-a5ec-0b565dce5e1a-config\") pod \"machine-api-operator-5694c8668f-bbffn\" (UID: \"100b758b-a285-49a0-a5ec-0b565dce5e1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bbffn" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.955320 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7dkf\" (UniqueName: \"kubernetes.io/projected/42ae4a09-81ca-465e-85e7-38a09497805c-kube-api-access-h7dkf\") pod \"openshift-config-operator-7777fb866f-ddrvp\" (UID: \"42ae4a09-81ca-465e-85e7-38a09497805c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.955385 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42ae4a09-81ca-465e-85e7-38a09497805c-serving-cert\") pod \"openshift-config-operator-7777fb866f-ddrvp\" (UID: \"42ae4a09-81ca-465e-85e7-38a09497805c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.955453 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38e66eda-8d84-40e4-9749-0f7759ddbd0d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9526z\" (UID: \"38e66eda-8d84-40e4-9749-0f7759ddbd0d\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9526z" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.955520 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f1c0e12-944e-4afe-a75d-fb0f25d3ac29-config\") pod \"authentication-operator-69f744f599-d66t2\" (UID: \"0f1c0e12-944e-4afe-a75d-fb0f25d3ac29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d66t2" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.955596 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/42ae4a09-81ca-465e-85e7-38a09497805c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ddrvp\" (UID: \"42ae4a09-81ca-465e-85e7-38a09497805c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.955739 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f1c0e12-944e-4afe-a75d-fb0f25d3ac29-service-ca-bundle\") pod \"authentication-operator-69f744f599-d66t2\" (UID: \"0f1c0e12-944e-4afe-a75d-fb0f25d3ac29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d66t2" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.955814 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-service-ca\") pod \"console-f9d7485db-rgn2q\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.955887 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f1c0e12-944e-4afe-a75d-fb0f25d3ac29-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-d66t2\" (UID: \"0f1c0e12-944e-4afe-a75d-fb0f25d3ac29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d66t2" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.955966 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtf92\" (UniqueName: \"kubernetes.io/projected/82676f42-aabb-4cee-b836-790a48dd9a2e-kube-api-access-qtf92\") pod \"console-f9d7485db-rgn2q\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.956031 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb07e9fd-b48d-4a87-8be6-01b98e889cff-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cxb9h\" (UID: \"bb07e9fd-b48d-4a87-8be6-01b98e889cff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cxb9h" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.956099 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38e66eda-8d84-40e4-9749-0f7759ddbd0d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9526z\" (UID: 
\"38e66eda-8d84-40e4-9749-0f7759ddbd0d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9526z" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.956170 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48945eb6-75ee-4ed0-bc04-f83b5888d85d-client-ca\") pod \"route-controller-manager-6576b87f9c-gnrmc\" (UID: \"48945eb6-75ee-4ed0-bc04-f83b5888d85d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.956250 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-console-config\") pod \"console-f9d7485db-rgn2q\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.956321 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f8eefe6-4147-465d-adda-f0ddf9530abb-trusted-ca\") pod \"console-operator-58897d9998-gkmv8\" (UID: \"9f8eefe6-4147-465d-adda-f0ddf9530abb\") " pod="openshift-console-operator/console-operator-58897d9998-gkmv8" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.956400 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-trusted-ca-bundle\") pod \"console-f9d7485db-rgn2q\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.956487 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f8eefe6-4147-465d-adda-f0ddf9530abb-serving-cert\") pod \"console-operator-58897d9998-gkmv8\" (UID: \"9f8eefe6-4147-465d-adda-f0ddf9530abb\") " pod="openshift-console-operator/console-operator-58897d9998-gkmv8" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.956568 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-oauth-serving-cert\") pod \"console-f9d7485db-rgn2q\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.956657 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb07e9fd-b48d-4a87-8be6-01b98e889cff-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cxb9h\" (UID: \"bb07e9fd-b48d-4a87-8be6-01b98e889cff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cxb9h" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.956741 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhrng\" (UniqueName: \"kubernetes.io/projected/38e66eda-8d84-40e4-9749-0f7759ddbd0d-kube-api-access-vhrng\") pod \"openshift-apiserver-operator-796bbdcf4f-9526z\" (UID: \"38e66eda-8d84-40e4-9749-0f7759ddbd0d\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9526z" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.956817 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48945eb6-75ee-4ed0-bc04-f83b5888d85d-config\") pod \"route-controller-manager-6576b87f9c-gnrmc\" (UID: \"48945eb6-75ee-4ed0-bc04-f83b5888d85d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.956897 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48945eb6-75ee-4ed0-bc04-f83b5888d85d-serving-cert\") pod \"route-controller-manager-6576b87f9c-gnrmc\" (UID: \"48945eb6-75ee-4ed0-bc04-f83b5888d85d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.956968 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hr6m\" (UniqueName: \"kubernetes.io/projected/48945eb6-75ee-4ed0-bc04-f83b5888d85d-kube-api-access-6hr6m\") pod \"route-controller-manager-6576b87f9c-gnrmc\" (UID: \"48945eb6-75ee-4ed0-bc04-f83b5888d85d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.957044 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f1c0e12-944e-4afe-a75d-fb0f25d3ac29-serving-cert\") pod \"authentication-operator-69f744f599-d66t2\" (UID: \"0f1c0e12-944e-4afe-a75d-fb0f25d3ac29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d66t2" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.957108 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw9jx\" (UniqueName: \"kubernetes.io/projected/9f8eefe6-4147-465d-adda-f0ddf9530abb-kube-api-access-nw9jx\") pod \"console-operator-58897d9998-gkmv8\" (UID: \"9f8eefe6-4147-465d-adda-f0ddf9530abb\") " pod="openshift-console-operator/console-operator-58897d9998-gkmv8" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.957178 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c6bcc8a7-80c7-4b6b-8b40-db870b0ea3f0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4k9mv\" (UID: \"c6bcc8a7-80c7-4b6b-8b40-db870b0ea3f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4k9mv" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.957248 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/82676f42-aabb-4cee-b836-790a48dd9a2e-console-oauth-config\") pod \"console-f9d7485db-rgn2q\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.967045 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.967276 4624 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.967116 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.967788 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.967963 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.968178 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.968321 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.968378 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.968745 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.968831 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.968765 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.969218 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.969571 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.969835 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.970184 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b8zl8"] Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.970754 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.979501 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.987581 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.989894 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.990388 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.992840 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.993211 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.993380 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.993452 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.996967 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.996981 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 08 14:25:14 crc kubenswrapper[4624]: I1008 14:25:14.997026 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.052853 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.053162 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.053308 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.052914 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.053506 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.052958 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.052982 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.053023 4624 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.053074 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.054214 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.055275 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.060036 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ljq2l"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.055589 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.055711 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.069905 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f1c0e12-944e-4afe-a75d-fb0f25d3ac29-service-ca-bundle\") pod \"authentication-operator-69f744f599-d66t2\" (UID: \"0f1c0e12-944e-4afe-a75d-fb0f25d3ac29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d66t2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.072662 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-service-ca\") pod \"console-f9d7485db-rgn2q\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.072809 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.072936 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f1c0e12-944e-4afe-a75d-fb0f25d3ac29-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-d66t2\" (UID: \"0f1c0e12-944e-4afe-a75d-fb0f25d3ac29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d66t2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.073030 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.073134 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.073278 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtf92\" (UniqueName: \"kubernetes.io/projected/82676f42-aabb-4cee-b836-790a48dd9a2e-kube-api-access-qtf92\") pod \"console-f9d7485db-rgn2q\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.074327 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48945eb6-75ee-4ed0-bc04-f83b5888d85d-client-ca\") pod \"route-controller-manager-6576b87f9c-gnrmc\" (UID: \"48945eb6-75ee-4ed0-bc04-f83b5888d85d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.074490 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-console-config\") pod \"console-f9d7485db-rgn2q\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.074664 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb07e9fd-b48d-4a87-8be6-01b98e889cff-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cxb9h\" (UID: \"bb07e9fd-b48d-4a87-8be6-01b98e889cff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cxb9h" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.074780 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.074198 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-service-ca\") pod \"console-f9d7485db-rgn2q\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.074988 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38e66eda-8d84-40e4-9749-0f7759ddbd0d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9526z\" (UID: \"38e66eda-8d84-40e4-9749-0f7759ddbd0d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9526z" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.061511 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.075423 4624 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-7fps5"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.070524 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f1c0e12-944e-4afe-a75d-fb0f25d3ac29-service-ca-bundle\") pod \"authentication-operator-69f744f599-d66t2\" (UID: \"0f1c0e12-944e-4afe-a75d-fb0f25d3ac29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d66t2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.077300 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f8eefe6-4147-465d-adda-f0ddf9530abb-serving-cert\") pod \"console-operator-58897d9998-gkmv8\" (UID: \"9f8eefe6-4147-465d-adda-f0ddf9530abb\") " pod="openshift-console-operator/console-operator-58897d9998-gkmv8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.077336 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f8eefe6-4147-465d-adda-f0ddf9530abb-trusted-ca\") pod \"console-operator-58897d9998-gkmv8\" (UID: \"9f8eefe6-4147-465d-adda-f0ddf9530abb\") " pod="openshift-console-operator/console-operator-58897d9998-gkmv8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.077916 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb07e9fd-b48d-4a87-8be6-01b98e889cff-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cxb9h\" (UID: \"bb07e9fd-b48d-4a87-8be6-01b98e889cff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cxb9h" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.079888 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.087863 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.077356 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-trusted-ca-bundle\") pod \"console-f9d7485db-rgn2q\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.088776 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-oauth-serving-cert\") pod \"console-f9d7485db-rgn2q\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.088802 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb07e9fd-b48d-4a87-8be6-01b98e889cff-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cxb9h\" (UID: \"bb07e9fd-b48d-4a87-8be6-01b98e889cff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cxb9h" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.088828 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhrng\" (UniqueName: \"kubernetes.io/projected/38e66eda-8d84-40e4-9749-0f7759ddbd0d-kube-api-access-vhrng\") pod \"openshift-apiserver-operator-796bbdcf4f-9526z\" (UID: \"38e66eda-8d84-40e4-9749-0f7759ddbd0d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9526z" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.088851 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48945eb6-75ee-4ed0-bc04-f83b5888d85d-config\") pod \"route-controller-manager-6576b87f9c-gnrmc\" (UID: \"48945eb6-75ee-4ed0-bc04-f83b5888d85d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.090442 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-oauth-serving-cert\") pod \"console-f9d7485db-rgn2q\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.090543 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-console-config\") pod \"console-f9d7485db-rgn2q\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.090561 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-audit-policies\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.077517 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/38e66eda-8d84-40e4-9749-0f7759ddbd0d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9526z\" (UID: \"38e66eda-8d84-40e4-9749-0f7759ddbd0d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9526z" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.091301 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7fps5" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.091981 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48945eb6-75ee-4ed0-bc04-f83b5888d85d-config\") pod \"route-controller-manager-6576b87f9c-gnrmc\" (UID: \"48945eb6-75ee-4ed0-bc04-f83b5888d85d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.092197 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rtl4z"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.092477 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qqw9b"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.092762 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xbrc2"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.093064 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.094056 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rtl4z" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.094244 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.094442 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48945eb6-75ee-4ed0-bc04-f83b5888d85d-serving-cert\") pod \"route-controller-manager-6576b87f9c-gnrmc\" (UID: \"48945eb6-75ee-4ed0-bc04-f83b5888d85d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.094472 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hr6m\" (UniqueName: \"kubernetes.io/projected/48945eb6-75ee-4ed0-bc04-f83b5888d85d-kube-api-access-6hr6m\") pod \"route-controller-manager-6576b87f9c-gnrmc\" (UID: \"48945eb6-75ee-4ed0-bc04-f83b5888d85d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.094256 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.094529 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw9jx\" (UniqueName: \"kubernetes.io/projected/9f8eefe6-4147-465d-adda-f0ddf9530abb-kube-api-access-nw9jx\") pod \"console-operator-58897d9998-gkmv8\" (UID: \"9f8eefe6-4147-465d-adda-f0ddf9530abb\") " pod="openshift-console-operator/console-operator-58897d9998-gkmv8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.094551 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.095400 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48945eb6-75ee-4ed0-bc04-f83b5888d85d-client-ca\") pod \"route-controller-manager-6576b87f9c-gnrmc\" (UID: \"48945eb6-75ee-4ed0-bc04-f83b5888d85d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.098224 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f1c0e12-944e-4afe-a75d-fb0f25d3ac29-serving-cert\") pod \"authentication-operator-69f744f599-d66t2\" (UID: \"0f1c0e12-944e-4afe-a75d-fb0f25d3ac29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d66t2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.098309 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.098351 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c6bcc8a7-80c7-4b6b-8b40-db870b0ea3f0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4k9mv\" (UID: \"c6bcc8a7-80c7-4b6b-8b40-db870b0ea3f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4k9mv" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.108467 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f8eefe6-4147-465d-adda-f0ddf9530abb-serving-cert\") pod \"console-operator-58897d9998-gkmv8\" (UID: \"9f8eefe6-4147-465d-adda-f0ddf9530abb\") " pod="openshift-console-operator/console-operator-58897d9998-gkmv8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.108627 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/82676f42-aabb-4cee-b836-790a48dd9a2e-console-oauth-config\") pod \"console-f9d7485db-rgn2q\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " 
pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.109710 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.110149 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.110291 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.110400 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.118545 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.119081 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-w755s"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.119392 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f1c0e12-944e-4afe-a75d-fb0f25d3ac29-serving-cert\") pod \"authentication-operator-69f744f599-d66t2\" (UID: \"0f1c0e12-944e-4afe-a75d-fb0f25d3ac29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d66t2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.119530 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx2s6\" (UniqueName: \"kubernetes.io/projected/0f1c0e12-944e-4afe-a75d-fb0f25d3ac29-kube-api-access-wx2s6\") pod \"authentication-operator-69f744f599-d66t2\" (UID: \"0f1c0e12-944e-4afe-a75d-fb0f25d3ac29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d66t2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.119562 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/100b758b-a285-49a0-a5ec-0b565dce5e1a-images\") pod \"machine-api-operator-5694c8668f-bbffn\" (UID: \"100b758b-a285-49a0-a5ec-0b565dce5e1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bbffn" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.119593 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w755s" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.119875 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.119600 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt6xj\" (UniqueName: \"kubernetes.io/projected/bb07e9fd-b48d-4a87-8be6-01b98e889cff-kube-api-access-wt6xj\") pod \"openshift-controller-manager-operator-756b6f6bc6-cxb9h\" (UID: \"bb07e9fd-b48d-4a87-8be6-01b98e889cff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cxb9h" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.120137 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/100b758b-a285-49a0-a5ec-0b565dce5e1a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bbffn\" (UID: \"100b758b-a285-49a0-a5ec-0b565dce5e1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bbffn" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.120171 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94jx9\" (UniqueName: \"kubernetes.io/projected/c6bcc8a7-80c7-4b6b-8b40-db870b0ea3f0-kube-api-access-94jx9\") pod \"cluster-samples-operator-665b6dd947-4k9mv\" (UID: \"c6bcc8a7-80c7-4b6b-8b40-db870b0ea3f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4k9mv" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.120202 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9lhd\" (UniqueName: \"kubernetes.io/projected/60fc8c95-75b8-4032-b407-c0b21022da37-kube-api-access-s9lhd\") pod \"downloads-7954f5f757-kl87s\" (UID: \"60fc8c95-75b8-4032-b407-c0b21022da37\") " pod="openshift-console/downloads-7954f5f757-kl87s" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.120229 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vggrv\" (UniqueName: \"kubernetes.io/projected/03313d2c-4c05-4a55-a595-cf633b935c29-kube-api-access-vggrv\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.120261 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.120291 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f8eefe6-4147-465d-adda-f0ddf9530abb-config\") pod \"console-operator-58897d9998-gkmv8\" (UID: \"9f8eefe6-4147-465d-adda-f0ddf9530abb\") " pod="openshift-console-operator/console-operator-58897d9998-gkmv8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.120315 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/82676f42-aabb-4cee-b836-790a48dd9a2e-console-serving-cert\") pod \"console-f9d7485db-rgn2q\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " 
pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.120338 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nswfx\" (UniqueName: \"kubernetes.io/projected/100b758b-a285-49a0-a5ec-0b565dce5e1a-kube-api-access-nswfx\") pod \"machine-api-operator-5694c8668f-bbffn\" (UID: \"100b758b-a285-49a0-a5ec-0b565dce5e1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bbffn" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.120363 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03313d2c-4c05-4a55-a595-cf633b935c29-audit-dir\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.120391 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/100b758b-a285-49a0-a5ec-0b565dce5e1a-images\") pod \"machine-api-operator-5694c8668f-bbffn\" (UID: \"100b758b-a285-49a0-a5ec-0b565dce5e1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bbffn" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.121582 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f8eefe6-4147-465d-adda-f0ddf9530abb-config\") pod \"console-operator-58897d9998-gkmv8\" (UID: \"9f8eefe6-4147-465d-adda-f0ddf9530abb\") " pod="openshift-console-operator/console-operator-58897d9998-gkmv8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.122516 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48945eb6-75ee-4ed0-bc04-f83b5888d85d-serving-cert\") pod \"route-controller-manager-6576b87f9c-gnrmc\" (UID: \"48945eb6-75ee-4ed0-bc04-f83b5888d85d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.124300 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c6bcc8a7-80c7-4b6b-8b40-db870b0ea3f0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4k9mv\" (UID: \"c6bcc8a7-80c7-4b6b-8b40-db870b0ea3f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4k9mv" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.130328 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/82676f42-aabb-4cee-b836-790a48dd9a2e-console-oauth-config\") pod \"console-f9d7485db-rgn2q\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.130719 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.130812 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.130873 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/100b758b-a285-49a0-a5ec-0b565dce5e1a-config\") pod \"machine-api-operator-5694c8668f-bbffn\" (UID: \"100b758b-a285-49a0-a5ec-0b565dce5e1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bbffn" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.130910 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7dkf\" (UniqueName: \"kubernetes.io/projected/42ae4a09-81ca-465e-85e7-38a09497805c-kube-api-access-h7dkf\") pod \"openshift-config-operator-7777fb866f-ddrvp\" (UID: \"42ae4a09-81ca-465e-85e7-38a09497805c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.130939 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.130985 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42ae4a09-81ca-465e-85e7-38a09497805c-serving-cert\") pod \"openshift-config-operator-7777fb866f-ddrvp\" (UID: \"42ae4a09-81ca-465e-85e7-38a09497805c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.131013 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.131048 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38e66eda-8d84-40e4-9749-0f7759ddbd0d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9526z\" (UID: \"38e66eda-8d84-40e4-9749-0f7759ddbd0d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9526z" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.131075 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f1c0e12-944e-4afe-a75d-fb0f25d3ac29-config\") pod \"authentication-operator-69f744f599-d66t2\" (UID: \"0f1c0e12-944e-4afe-a75d-fb0f25d3ac29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d66t2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.131106 4624 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.131120 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/42ae4a09-81ca-465e-85e7-38a09497805c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ddrvp\" (UID: \"42ae4a09-81ca-465e-85e7-38a09497805c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.131600 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/42ae4a09-81ca-465e-85e7-38a09497805c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ddrvp\" (UID: \"42ae4a09-81ca-465e-85e7-38a09497805c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.138317 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/100b758b-a285-49a0-a5ec-0b565dce5e1a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bbffn\" (UID: \"100b758b-a285-49a0-a5ec-0b565dce5e1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bbffn" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.138497 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.138338 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb07e9fd-b48d-4a87-8be6-01b98e889cff-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cxb9h\" (UID: \"bb07e9fd-b48d-4a87-8be6-01b98e889cff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cxb9h" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.138918 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.139438 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.139688 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f1c0e12-944e-4afe-a75d-fb0f25d3ac29-config\") pod \"authentication-operator-69f744f599-d66t2\" (UID: \"0f1c0e12-944e-4afe-a75d-fb0f25d3ac29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d66t2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.139785 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.140607 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.144488 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/82676f42-aabb-4cee-b836-790a48dd9a2e-console-serving-cert\") pod \"console-f9d7485db-rgn2q\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.146586 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42ae4a09-81ca-465e-85e7-38a09497805c-serving-cert\") pod \"openshift-config-operator-7777fb866f-ddrvp\" (UID: \"42ae4a09-81ca-465e-85e7-38a09497805c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.148241 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38e66eda-8d84-40e4-9749-0f7759ddbd0d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9526z\" (UID: \"38e66eda-8d84-40e4-9749-0f7759ddbd0d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9526z" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.155764 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-n48bp"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.156372 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n48bp" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.161220 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.161249 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.161732 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.161861 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.161942 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.161984 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.162077 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.162118 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.161859 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.162189 4624 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.162262 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.162388 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.162553 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.162624 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.162675 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.162717 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.162780 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.162081 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.162959 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.163036 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.163082 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.163131 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.163506 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.163596 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.163042 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.162960 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.163711 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.163747 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.163775 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.163856 4624 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.163862 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.163933 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.164055 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.164055 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.164716 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.165591 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qr5w8"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.167164 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.170236 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.170967 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.171094 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.171198 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.171304 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.171413 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.172394 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.172690 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-trusted-ca-bundle\") pod \"console-f9d7485db-rgn2q\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.174961 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.196092 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 
14:25:15.197154 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtf92\" (UniqueName: \"kubernetes.io/projected/82676f42-aabb-4cee-b836-790a48dd9a2e-kube-api-access-qtf92\") pod \"console-f9d7485db-rgn2q\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.197820 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hr6m\" (UniqueName: \"kubernetes.io/projected/48945eb6-75ee-4ed0-bc04-f83b5888d85d-kube-api-access-6hr6m\") pod \"route-controller-manager-6576b87f9c-gnrmc\" (UID: \"48945eb6-75ee-4ed0-bc04-f83b5888d85d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.201132 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4jmhk"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.201736 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x6fbb"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.202036 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.203812 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.209897 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f8eefe6-4147-465d-adda-f0ddf9530abb-trusted-ca\") pod \"console-operator-58897d9998-gkmv8\" (UID: \"9f8eefe6-4147-465d-adda-f0ddf9530abb\") " pod="openshift-console-operator/console-operator-58897d9998-gkmv8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.225131 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f1c0e12-944e-4afe-a75d-fb0f25d3ac29-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-d66t2\" (UID: \"0f1c0e12-944e-4afe-a75d-fb0f25d3ac29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d66t2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.225910 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4jmhk" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.225962 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-x6fbb" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.227660 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.227705 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-97pqq"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.227789 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.228272 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nzqkt"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.228673 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nzqkt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.228842 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.228920 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-97pqq" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.229076 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4rhxp"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.229769 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4rhxp" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.231759 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebee26eb-b32e-459d-b4d2-7a36a325f08b-client-ca\") pod \"controller-manager-879f6c89f-qqw9b\" (UID: \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.231870 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vggrv\" (UniqueName: \"kubernetes.io/projected/03313d2c-4c05-4a55-a595-cf633b935c29-kube-api-access-vggrv\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.232039 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a9627e3-628f-42ee-b6c7-e203097ad785-proxy-tls\") pod \"machine-config-operator-74547568cd-n48bp\" (UID: \"6a9627e3-628f-42ee-b6c7-e203097ad785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n48bp" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.232069 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3c901fd1-c23c-47f0-b44f-102a1965abd6-node-pullsecrets\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc 
kubenswrapper[4624]: I1008 14:25:15.232157 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.233397 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.232188 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2ca17841-28c4-4a05-995c-6324cea0ffbd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rtl4z\" (UID: \"2ca17841-28c4-4a05-995c-6324cea0ffbd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rtl4z" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.233483 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpdlr\" (UniqueName: \"kubernetes.io/projected/809bf602-1dba-4d74-8f71-18add0de807a-kube-api-access-gpdlr\") pod \"packageserver-d55dfcdfc-5q99v\" (UID: \"809bf602-1dba-4d74-8f71-18add0de807a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.233553 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/809bf602-1dba-4d74-8f71-18add0de807a-tmpfs\") pod \"packageserver-d55dfcdfc-5q99v\" (UID: \"809bf602-1dba-4d74-8f71-18add0de807a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.233617 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03313d2c-4c05-4a55-a595-cf633b935c29-audit-dir\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.233687 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/809bf602-1dba-4d74-8f71-18add0de807a-apiservice-cert\") pod \"packageserver-d55dfcdfc-5q99v\" (UID: \"809bf602-1dba-4d74-8f71-18add0de807a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.233755 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03313d2c-4c05-4a55-a595-cf633b935c29-audit-dir\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.233806 4624 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.233862 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.234446 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.234446 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.234862 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/df49d2ff-640d-4cf1-812c-8b7275df6292-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.234906 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.234933 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a9627e3-628f-42ee-b6c7-e203097ad785-auth-proxy-config\") pod \"machine-config-operator-74547568cd-n48bp\" (UID: \"6a9627e3-628f-42ee-b6c7-e203097ad785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n48bp" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.234971 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.235525 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.235572 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/3c901fd1-c23c-47f0-b44f-102a1965abd6-etcd-serving-ca\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.235607 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p44rn\" (UniqueName: \"kubernetes.io/projected/c95a7fea-e442-4f6f-b6df-0b8f62c5a13f-kube-api-access-p44rn\") pod \"machine-config-controller-84d6567774-w755s\" (UID: \"c95a7fea-e442-4f6f-b6df-0b8f62c5a13f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w755s" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.235651 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e63c9d66-5727-4750-a48d-f55d5d6358a1-etcd-ca\") pod \"etcd-operator-b45778765-xbrc2\" (UID: \"e63c9d66-5727-4750-a48d-f55d5d6358a1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.235674 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df0cea7a-1f29-4af5-b8f1-2e1c8873610c-secret-volume\") pod \"collect-profiles-29332215-bxfll\" (UID: \"df0cea7a-1f29-4af5-b8f1-2e1c8873610c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.235696 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/df49d2ff-640d-4cf1-812c-8b7275df6292-audit-dir\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.235732 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.235752 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c901fd1-c23c-47f0-b44f-102a1965abd6-config\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.235801 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3c901fd1-c23c-47f0-b44f-102a1965abd6-audit\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.235828 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.235853 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/df49d2ff-640d-4cf1-812c-8b7275df6292-etcd-client\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.235876 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m48r\" (UniqueName: \"kubernetes.io/projected/2ca17841-28c4-4a05-995c-6324cea0ffbd-kube-api-access-6m48r\") pod \"cluster-image-registry-operator-dc59b4c8b-rtl4z\" (UID: \"2ca17841-28c4-4a05-995c-6324cea0ffbd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rtl4z" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.235896 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.235915 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvhw2\" (UniqueName: \"kubernetes.io/projected/6a9627e3-628f-42ee-b6c7-e203097ad785-kube-api-access-tvhw2\") pod \"machine-config-operator-74547568cd-n48bp\" (UID: \"6a9627e3-628f-42ee-b6c7-e203097ad785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n48bp" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.235935 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c901fd1-c23c-47f0-b44f-102a1965abd6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.235968 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.235987 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df49d2ff-640d-4cf1-812c-8b7275df6292-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236001 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d1e4ee61-1c1c-41eb-82a7-42b6313d863c-machine-approver-tls\") pod \"machine-approver-56656f9798-7fps5\" (UID: \"d1e4ee61-1c1c-41eb-82a7-42b6313d863c\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7fps5" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236017 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ca17841-28c4-4a05-995c-6324cea0ffbd-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rtl4z\" (UID: \"2ca17841-28c4-4a05-995c-6324cea0ffbd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rtl4z" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236032 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ca17841-28c4-4a05-995c-6324cea0ffbd-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rtl4z\" (UID: \"2ca17841-28c4-4a05-995c-6324cea0ffbd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rtl4z" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236047 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebee26eb-b32e-459d-b4d2-7a36a325f08b-config\") pod \"controller-manager-879f6c89f-qqw9b\" (UID: \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236067 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3c901fd1-c23c-47f0-b44f-102a1965abd6-encryption-config\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236085 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c901fd1-c23c-47f0-b44f-102a1965abd6-audit-dir\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236101 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e63c9d66-5727-4750-a48d-f55d5d6358a1-config\") pod \"etcd-operator-b45778765-xbrc2\" (UID: \"e63c9d66-5727-4750-a48d-f55d5d6358a1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236125 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df49d2ff-640d-4cf1-812c-8b7275df6292-serving-cert\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236183 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df0cea7a-1f29-4af5-b8f1-2e1c8873610c-config-volume\") pod \"collect-profiles-29332215-bxfll\" (UID: \"df0cea7a-1f29-4af5-b8f1-2e1c8873610c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 
14:25:15.236199 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k6tk\" (UniqueName: \"kubernetes.io/projected/ebee26eb-b32e-459d-b4d2-7a36a325f08b-kube-api-access-5k6tk\") pod \"controller-manager-879f6c89f-qqw9b\" (UID: \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236218 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-audit-policies\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236233 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zcfr\" (UniqueName: \"kubernetes.io/projected/e63c9d66-5727-4750-a48d-f55d5d6358a1-kube-api-access-9zcfr\") pod \"etcd-operator-b45778765-xbrc2\" (UID: \"e63c9d66-5727-4750-a48d-f55d5d6358a1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236251 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6a9627e3-628f-42ee-b6c7-e203097ad785-images\") pod \"machine-config-operator-74547568cd-n48bp\" (UID: \"6a9627e3-628f-42ee-b6c7-e203097ad785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n48bp" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236267 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/df49d2ff-640d-4cf1-812c-8b7275df6292-encryption-config\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236283 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236303 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c95a7fea-e442-4f6f-b6df-0b8f62c5a13f-proxy-tls\") pod \"machine-config-controller-84d6567774-w755s\" (UID: \"c95a7fea-e442-4f6f-b6df-0b8f62c5a13f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w755s" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236321 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e63c9d66-5727-4750-a48d-f55d5d6358a1-etcd-service-ca\") pod \"etcd-operator-b45778765-xbrc2\" (UID: \"e63c9d66-5727-4750-a48d-f55d5d6358a1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236346 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tx75\" (UniqueName: \"kubernetes.io/projected/df0cea7a-1f29-4af5-b8f1-2e1c8873610c-kube-api-access-7tx75\") pod \"collect-profiles-29332215-bxfll\" (UID: \"df0cea7a-1f29-4af5-b8f1-2e1c8873610c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236360 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d1e4ee61-1c1c-41eb-82a7-42b6313d863c-auth-proxy-config\") pod \"machine-approver-56656f9798-7fps5\" (UID: \"d1e4ee61-1c1c-41eb-82a7-42b6313d863c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7fps5" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236374 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebee26eb-b32e-459d-b4d2-7a36a325f08b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qqw9b\" (UID: \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236389 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/809bf602-1dba-4d74-8f71-18add0de807a-webhook-cert\") pod \"packageserver-d55dfcdfc-5q99v\" (UID: \"809bf602-1dba-4d74-8f71-18add0de807a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236409 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236423 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3c901fd1-c23c-47f0-b44f-102a1965abd6-image-import-ca\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236440 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc2pc\" (UniqueName: \"kubernetes.io/projected/d1e4ee61-1c1c-41eb-82a7-42b6313d863c-kube-api-access-hc2pc\") pod \"machine-approver-56656f9798-7fps5\" (UID: \"d1e4ee61-1c1c-41eb-82a7-42b6313d863c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7fps5" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236454 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c901fd1-c23c-47f0-b44f-102a1965abd6-serving-cert\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236469 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dnpb\" (UniqueName: \"kubernetes.io/projected/3c901fd1-c23c-47f0-b44f-102a1965abd6-kube-api-access-2dnpb\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236492 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e63c9d66-5727-4750-a48d-f55d5d6358a1-etcd-client\") pod \"etcd-operator-b45778765-xbrc2\" (UID: \"e63c9d66-5727-4750-a48d-f55d5d6358a1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236505 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df49d2ff-640d-4cf1-812c-8b7275df6292-audit-policies\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236520 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1e4ee61-1c1c-41eb-82a7-42b6313d863c-config\") pod \"machine-approver-56656f9798-7fps5\" (UID: \"d1e4ee61-1c1c-41eb-82a7-42b6313d863c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7fps5" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236539 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c95a7fea-e442-4f6f-b6df-0b8f62c5a13f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-w755s\" (UID: \"c95a7fea-e442-4f6f-b6df-0b8f62c5a13f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w755s" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236569 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4qh4\" (UniqueName: \"kubernetes.io/projected/df49d2ff-640d-4cf1-812c-8b7275df6292-kube-api-access-g4qh4\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236643 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3c901fd1-c23c-47f0-b44f-102a1965abd6-etcd-client\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236658 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebee26eb-b32e-459d-b4d2-7a36a325f08b-serving-cert\") pod \"controller-manager-879f6c89f-qqw9b\" (UID: \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.236680 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e63c9d66-5727-4750-a48d-f55d5d6358a1-serving-cert\") pod \"etcd-operator-b45778765-xbrc2\" (UID: \"e63c9d66-5727-4750-a48d-f55d5d6358a1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.237594 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.238299 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-audit-policies\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.238532 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.239166 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.240602 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.243242 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-ddpjz"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.244700 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hbk6h"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.245236 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hbk6h" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.245360 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9k2sh"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.245511 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-ddpjz" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.246306 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9k2sh" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.247075 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.248132 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xxrs"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.248955 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhrng\" (UniqueName: \"kubernetes.io/projected/38e66eda-8d84-40e4-9749-0f7759ddbd0d-kube-api-access-vhrng\") pod \"openshift-apiserver-operator-796bbdcf4f-9526z\" (UID: \"38e66eda-8d84-40e4-9749-0f7759ddbd0d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9526z" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.250472 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xxrs" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.251051 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.251080 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hrd59"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.251494 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hrd59" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.252275 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.252357 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw9jx\" (UniqueName: \"kubernetes.io/projected/9f8eefe6-4147-465d-adda-f0ddf9530abb-kube-api-access-nw9jx\") pod \"console-operator-58897d9998-gkmv8\" (UID: \"9f8eefe6-4147-465d-adda-f0ddf9530abb\") " pod="openshift-console-operator/console-operator-58897d9998-gkmv8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.252548 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.252620 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.253043 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-fbw8l"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.254046 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.254325 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kdcqx"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.254990 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kdcqx" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.258277 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.265765 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.266333 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.266886 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4bx6b"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.267366 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4bx6b" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.269330 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-khxjg"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.270243 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-khxjg" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.273819 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.279363 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bbffn"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.279403 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76ddt"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.279874 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-d66t2"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.279947 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76ddt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.283283 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.284499 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-kl87s"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.285672 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.286973 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4k9mv"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.288018 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gkmv8"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.288582 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt6xj\" (UniqueName: \"kubernetes.io/projected/bb07e9fd-b48d-4a87-8be6-01b98e889cff-kube-api-access-wt6xj\") pod \"openshift-controller-manager-operator-756b6f6bc6-cxb9h\" (UID: \"bb07e9fd-b48d-4a87-8be6-01b98e889cff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cxb9h" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.289371 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gkmv8" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.292981 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qqw9b"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.299620 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.300576 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cxb9h"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.301681 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4jmhk"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.303085 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rgn2q"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.306571 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rtl4z"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.310255 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-w755s"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.311712 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9526z" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.313279 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.314742 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx2s6\" (UniqueName: \"kubernetes.io/projected/0f1c0e12-944e-4afe-a75d-fb0f25d3ac29-kube-api-access-wx2s6\") pod \"authentication-operator-69f744f599-d66t2\" (UID: \"0f1c0e12-944e-4afe-a75d-fb0f25d3ac29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d66t2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.315884 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.339094 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85911902-d86c-48de-b04f-e12b5885a05c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4bx6b\" (UID: \"85911902-d86c-48de-b04f-e12b5885a05c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4bx6b" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.339405 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-ddpjz"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.339418 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c901fd1-c23c-47f0-b44f-102a1965abd6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " 
pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.339469 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m48r\" (UniqueName: \"kubernetes.io/projected/2ca17841-28c4-4a05-995c-6324cea0ffbd-kube-api-access-6m48r\") pod \"cluster-image-registry-operator-dc59b4c8b-rtl4z\" (UID: \"2ca17841-28c4-4a05-995c-6324cea0ffbd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rtl4z" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.339501 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3a80e27-d7fd-4b62-b5ae-9719c4f69655-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hrd59\" (UID: \"d3a80e27-d7fd-4b62-b5ae-9719c4f69655\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hrd59" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.339729 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fba88879-d200-4b89-a22c-5b83be19012b-signing-cabundle\") pod \"service-ca-9c57cc56f-ddpjz\" (UID: \"fba88879-d200-4b89-a22c-5b83be19012b\") " pod="openshift-service-ca/service-ca-9c57cc56f-ddpjz" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.339764 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df49d2ff-640d-4cf1-812c-8b7275df6292-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.339786 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d1e4ee61-1c1c-41eb-82a7-42b6313d863c-machine-approver-tls\") pod \"machine-approver-56656f9798-7fps5\" (UID: \"d1e4ee61-1c1c-41eb-82a7-42b6313d863c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7fps5" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.339805 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ca17841-28c4-4a05-995c-6324cea0ffbd-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rtl4z\" (UID: \"2ca17841-28c4-4a05-995c-6324cea0ffbd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rtl4z" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.339824 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebee26eb-b32e-459d-b4d2-7a36a325f08b-config\") pod \"controller-manager-879f6c89f-qqw9b\" (UID: \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.339846 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d96a6e0b-5e82-48ad-b930-b7b82620830a-default-certificate\") pod \"router-default-5444994796-fbw8l\" (UID: \"d96a6e0b-5e82-48ad-b930-b7b82620830a\") " pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 
14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.339910 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c901fd1-c23c-47f0-b44f-102a1965abd6-audit-dir\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.340410 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df49d2ff-640d-4cf1-812c-8b7275df6292-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.343827 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c901fd1-c23c-47f0-b44f-102a1965abd6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.343924 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c901fd1-c23c-47f0-b44f-102a1965abd6-audit-dir\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.343969 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k6tk\" (UniqueName: \"kubernetes.io/projected/ebee26eb-b32e-459d-b4d2-7a36a325f08b-kube-api-access-5k6tk\") pod \"controller-manager-879f6c89f-qqw9b\" (UID: \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.343996 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6a9627e3-628f-42ee-b6c7-e203097ad785-images\") pod \"machine-config-operator-74547568cd-n48bp\" (UID: \"6a9627e3-628f-42ee-b6c7-e203097ad785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n48bp" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344022 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/72ec1d6f-5923-45ed-a5a8-ad8a268faca5-srv-cert\") pod \"catalog-operator-68c6474976-h9k57\" (UID: \"72ec1d6f-5923-45ed-a5a8-ad8a268faca5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344074 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e63c9d66-5727-4750-a48d-f55d5d6358a1-etcd-service-ca\") pod \"etcd-operator-b45778765-xbrc2\" (UID: \"e63c9d66-5727-4750-a48d-f55d5d6358a1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344103 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d1e4ee61-1c1c-41eb-82a7-42b6313d863c-auth-proxy-config\") pod \"machine-approver-56656f9798-7fps5\" (UID: 
\"d1e4ee61-1c1c-41eb-82a7-42b6313d863c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7fps5" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344126 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebee26eb-b32e-459d-b4d2-7a36a325f08b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qqw9b\" (UID: \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344148 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/809bf602-1dba-4d74-8f71-18add0de807a-webhook-cert\") pod \"packageserver-d55dfcdfc-5q99v\" (UID: \"809bf602-1dba-4d74-8f71-18add0de807a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344171 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc2pc\" (UniqueName: \"kubernetes.io/projected/d1e4ee61-1c1c-41eb-82a7-42b6313d863c-kube-api-access-hc2pc\") pod \"machine-approver-56656f9798-7fps5\" (UID: \"d1e4ee61-1c1c-41eb-82a7-42b6313d863c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7fps5" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344228 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c00e7c1b-2054-427e-ad58-03b97f0ceb83-metrics-tls\") pod \"dns-operator-744455d44c-97pqq\" (UID: \"c00e7c1b-2054-427e-ad58-03b97f0ceb83\") " pod="openshift-dns-operator/dns-operator-744455d44c-97pqq" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344256 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fba88879-d200-4b89-a22c-5b83be19012b-signing-key\") pod \"service-ca-9c57cc56f-ddpjz\" (UID: \"fba88879-d200-4b89-a22c-5b83be19012b\") " pod="openshift-service-ca/service-ca-9c57cc56f-ddpjz" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344287 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c95a7fea-e442-4f6f-b6df-0b8f62c5a13f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-w755s\" (UID: \"c95a7fea-e442-4f6f-b6df-0b8f62c5a13f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w755s" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344314 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4qh4\" (UniqueName: \"kubernetes.io/projected/df49d2ff-640d-4cf1-812c-8b7275df6292-kube-api-access-g4qh4\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344340 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt6z2\" (UniqueName: \"kubernetes.io/projected/c00e7c1b-2054-427e-ad58-03b97f0ceb83-kube-api-access-rt6z2\") pod \"dns-operator-744455d44c-97pqq\" (UID: \"c00e7c1b-2054-427e-ad58-03b97f0ceb83\") " pod="openshift-dns-operator/dns-operator-744455d44c-97pqq" Oct 08 14:25:15 crc 
kubenswrapper[4624]: I1008 14:25:15.344382 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebee26eb-b32e-459d-b4d2-7a36a325f08b-client-ca\") pod \"controller-manager-879f6c89f-qqw9b\" (UID: \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344406 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d96a6e0b-5e82-48ad-b930-b7b82620830a-stats-auth\") pod \"router-default-5444994796-fbw8l\" (UID: \"d96a6e0b-5e82-48ad-b930-b7b82620830a\") " pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344432 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fmmk\" (UniqueName: \"kubernetes.io/projected/d96a6e0b-5e82-48ad-b930-b7b82620830a-kube-api-access-7fmmk\") pod \"router-default-5444994796-fbw8l\" (UID: \"d96a6e0b-5e82-48ad-b930-b7b82620830a\") " pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344458 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/809bf602-1dba-4d74-8f71-18add0de807a-apiservice-cert\") pod \"packageserver-d55dfcdfc-5q99v\" (UID: \"809bf602-1dba-4d74-8f71-18add0de807a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344484 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/df49d2ff-640d-4cf1-812c-8b7275df6292-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344507 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d96a6e0b-5e82-48ad-b930-b7b82620830a-service-ca-bundle\") pod \"router-default-5444994796-fbw8l\" (UID: \"d96a6e0b-5e82-48ad-b930-b7b82620830a\") " pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344527 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a9627e3-628f-42ee-b6c7-e203097ad785-auth-proxy-config\") pod \"machine-config-operator-74547568cd-n48bp\" (UID: \"6a9627e3-628f-42ee-b6c7-e203097ad785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n48bp" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344562 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/df49d2ff-640d-4cf1-812c-8b7275df6292-audit-dir\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344598 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3c901fd1-c23c-47f0-b44f-102a1965abd6-config\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344622 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvhw2\" (UniqueName: \"kubernetes.io/projected/6a9627e3-628f-42ee-b6c7-e203097ad785-kube-api-access-tvhw2\") pod \"machine-config-operator-74547568cd-n48bp\" (UID: \"6a9627e3-628f-42ee-b6c7-e203097ad785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n48bp" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344661 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/df49d2ff-640d-4cf1-812c-8b7275df6292-etcd-client\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344696 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ca17841-28c4-4a05-995c-6324cea0ffbd-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rtl4z\" (UID: \"2ca17841-28c4-4a05-995c-6324cea0ffbd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rtl4z" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344719 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/72ec1d6f-5923-45ed-a5a8-ad8a268faca5-profile-collector-cert\") pod \"catalog-operator-68c6474976-h9k57\" (UID: \"72ec1d6f-5923-45ed-a5a8-ad8a268faca5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344744 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3c901fd1-c23c-47f0-b44f-102a1965abd6-encryption-config\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344764 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q4fw\" (UniqueName: \"kubernetes.io/projected/72ec1d6f-5923-45ed-a5a8-ad8a268faca5-kube-api-access-4q4fw\") pod \"catalog-operator-68c6474976-h9k57\" (UID: \"72ec1d6f-5923-45ed-a5a8-ad8a268faca5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344788 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e63c9d66-5727-4750-a48d-f55d5d6358a1-config\") pod \"etcd-operator-b45778765-xbrc2\" (UID: \"e63c9d66-5727-4750-a48d-f55d5d6358a1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344813 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df49d2ff-640d-4cf1-812c-8b7275df6292-serving-cert\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344833 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df0cea7a-1f29-4af5-b8f1-2e1c8873610c-config-volume\") pod \"collect-profiles-29332215-bxfll\" (UID: \"df0cea7a-1f29-4af5-b8f1-2e1c8873610c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344854 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zcfr\" (UniqueName: \"kubernetes.io/projected/e63c9d66-5727-4750-a48d-f55d5d6358a1-kube-api-access-9zcfr\") pod \"etcd-operator-b45778765-xbrc2\" (UID: \"e63c9d66-5727-4750-a48d-f55d5d6358a1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344877 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/df49d2ff-640d-4cf1-812c-8b7275df6292-encryption-config\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344898 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c95a7fea-e442-4f6f-b6df-0b8f62c5a13f-proxy-tls\") pod \"machine-config-controller-84d6567774-w755s\" (UID: \"c95a7fea-e442-4f6f-b6df-0b8f62c5a13f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w755s" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344920 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tx75\" (UniqueName: \"kubernetes.io/projected/df0cea7a-1f29-4af5-b8f1-2e1c8873610c-kube-api-access-7tx75\") pod \"collect-profiles-29332215-bxfll\" (UID: \"df0cea7a-1f29-4af5-b8f1-2e1c8873610c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344944 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3c901fd1-c23c-47f0-b44f-102a1965abd6-image-import-ca\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344966 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c901fd1-c23c-47f0-b44f-102a1965abd6-serving-cert\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.344987 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dnpb\" (UniqueName: \"kubernetes.io/projected/3c901fd1-c23c-47f0-b44f-102a1965abd6-kube-api-access-2dnpb\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.345008 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d1e4ee61-1c1c-41eb-82a7-42b6313d863c-config\") pod \"machine-approver-56656f9798-7fps5\" (UID: \"d1e4ee61-1c1c-41eb-82a7-42b6313d863c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7fps5" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.345030 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e63c9d66-5727-4750-a48d-f55d5d6358a1-etcd-client\") pod \"etcd-operator-b45778765-xbrc2\" (UID: \"e63c9d66-5727-4750-a48d-f55d5d6358a1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.345050 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df49d2ff-640d-4cf1-812c-8b7275df6292-audit-policies\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.345093 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85911902-d86c-48de-b04f-e12b5885a05c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4bx6b\" (UID: \"85911902-d86c-48de-b04f-e12b5885a05c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4bx6b" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.345118 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3c901fd1-c23c-47f0-b44f-102a1965abd6-etcd-client\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.345140 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebee26eb-b32e-459d-b4d2-7a36a325f08b-serving-cert\") pod \"controller-manager-879f6c89f-qqw9b\" (UID: \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.345165 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d96a6e0b-5e82-48ad-b930-b7b82620830a-metrics-certs\") pod \"router-default-5444994796-fbw8l\" (UID: \"d96a6e0b-5e82-48ad-b930-b7b82620830a\") " pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.345196 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e63c9d66-5727-4750-a48d-f55d5d6358a1-serving-cert\") pod \"etcd-operator-b45778765-xbrc2\" (UID: \"e63c9d66-5727-4750-a48d-f55d5d6358a1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.345218 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a9627e3-628f-42ee-b6c7-e203097ad785-proxy-tls\") pod \"machine-config-operator-74547568cd-n48bp\" (UID: \"6a9627e3-628f-42ee-b6c7-e203097ad785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n48bp" Oct 08 14:25:15 crc 
kubenswrapper[4624]: I1008 14:25:15.345240 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3c901fd1-c23c-47f0-b44f-102a1965abd6-node-pullsecrets\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.345263 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2ca17841-28c4-4a05-995c-6324cea0ffbd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rtl4z\" (UID: \"2ca17841-28c4-4a05-995c-6324cea0ffbd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rtl4z" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.345288 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/809bf602-1dba-4d74-8f71-18add0de807a-tmpfs\") pod \"packageserver-d55dfcdfc-5q99v\" (UID: \"809bf602-1dba-4d74-8f71-18add0de807a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.345308 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpdlr\" (UniqueName: \"kubernetes.io/projected/809bf602-1dba-4d74-8f71-18add0de807a-kube-api-access-gpdlr\") pod \"packageserver-d55dfcdfc-5q99v\" (UID: \"809bf602-1dba-4d74-8f71-18add0de807a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.345343 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85911902-d86c-48de-b04f-e12b5885a05c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4bx6b\" (UID: \"85911902-d86c-48de-b04f-e12b5885a05c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4bx6b" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.345377 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3c901fd1-c23c-47f0-b44f-102a1965abd6-etcd-serving-ca\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.345402 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p44rn\" (UniqueName: \"kubernetes.io/projected/c95a7fea-e442-4f6f-b6df-0b8f62c5a13f-kube-api-access-p44rn\") pod \"machine-config-controller-84d6567774-w755s\" (UID: \"c95a7fea-e442-4f6f-b6df-0b8f62c5a13f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w755s" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.345425 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc5gc\" (UniqueName: \"kubernetes.io/projected/fba88879-d200-4b89-a22c-5b83be19012b-kube-api-access-sc5gc\") pod \"service-ca-9c57cc56f-ddpjz\" (UID: \"fba88879-d200-4b89-a22c-5b83be19012b\") " pod="openshift-service-ca/service-ca-9c57cc56f-ddpjz" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.345448 4624 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e63c9d66-5727-4750-a48d-f55d5d6358a1-etcd-ca\") pod \"etcd-operator-b45778765-xbrc2\" (UID: \"e63c9d66-5727-4750-a48d-f55d5d6358a1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.345470 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df0cea7a-1f29-4af5-b8f1-2e1c8873610c-secret-volume\") pod \"collect-profiles-29332215-bxfll\" (UID: \"df0cea7a-1f29-4af5-b8f1-2e1c8873610c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.346124 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d1e4ee61-1c1c-41eb-82a7-42b6313d863c-machine-approver-tls\") pod \"machine-approver-56656f9798-7fps5\" (UID: \"d1e4ee61-1c1c-41eb-82a7-42b6313d863c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7fps5" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.345505 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3c901fd1-c23c-47f0-b44f-102a1965abd6-audit\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.346354 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrxzf\" (UniqueName: \"kubernetes.io/projected/d3a80e27-d7fd-4b62-b5ae-9719c4f69655-kube-api-access-qrxzf\") pod \"control-plane-machine-set-operator-78cbb6b69f-hrd59\" (UID: \"d3a80e27-d7fd-4b62-b5ae-9719c4f69655\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hrd59" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.346915 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ca17841-28c4-4a05-995c-6324cea0ffbd-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rtl4z\" (UID: \"2ca17841-28c4-4a05-995c-6324cea0ffbd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rtl4z" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.347253 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e63c9d66-5727-4750-a48d-f55d5d6358a1-etcd-service-ca\") pod \"etcd-operator-b45778765-xbrc2\" (UID: \"e63c9d66-5727-4750-a48d-f55d5d6358a1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.347763 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d1e4ee61-1c1c-41eb-82a7-42b6313d863c-auth-proxy-config\") pod \"machine-approver-56656f9798-7fps5\" (UID: \"d1e4ee61-1c1c-41eb-82a7-42b6313d863c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7fps5" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.347861 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebee26eb-b32e-459d-b4d2-7a36a325f08b-config\") pod \"controller-manager-879f6c89f-qqw9b\" (UID: \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.348568 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebee26eb-b32e-459d-b4d2-7a36a325f08b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qqw9b\" (UID: \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.349937 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c95a7fea-e442-4f6f-b6df-0b8f62c5a13f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-w755s\" (UID: \"c95a7fea-e442-4f6f-b6df-0b8f62c5a13f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w755s" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.350746 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.350999 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebee26eb-b32e-459d-b4d2-7a36a325f08b-client-ca\") pod \"controller-manager-879f6c89f-qqw9b\" (UID: \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.351239 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3c901fd1-c23c-47f0-b44f-102a1965abd6-node-pullsecrets\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.351551 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3c901fd1-c23c-47f0-b44f-102a1965abd6-image-import-ca\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.353157 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/809bf602-1dba-4d74-8f71-18add0de807a-tmpfs\") pod \"packageserver-d55dfcdfc-5q99v\" (UID: \"809bf602-1dba-4d74-8f71-18add0de807a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.353618 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94jx9\" (UniqueName: \"kubernetes.io/projected/c6bcc8a7-80c7-4b6b-8b40-db870b0ea3f0-kube-api-access-94jx9\") pod \"cluster-samples-operator-665b6dd947-4k9mv\" (UID: \"c6bcc8a7-80c7-4b6b-8b40-db870b0ea3f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4k9mv" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.354556 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e63c9d66-5727-4750-a48d-f55d5d6358a1-etcd-ca\") pod \"etcd-operator-b45778765-xbrc2\" (UID: \"e63c9d66-5727-4750-a48d-f55d5d6358a1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" Oct 08 14:25:15 crc 
kubenswrapper[4624]: I1008 14:25:15.355494 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3c901fd1-c23c-47f0-b44f-102a1965abd6-audit\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.358252 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3c901fd1-c23c-47f0-b44f-102a1965abd6-etcd-serving-ca\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.358395 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1e4ee61-1c1c-41eb-82a7-42b6313d863c-config\") pod \"machine-approver-56656f9798-7fps5\" (UID: \"d1e4ee61-1c1c-41eb-82a7-42b6313d863c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7fps5" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.359439 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.360570 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/df49d2ff-640d-4cf1-812c-8b7275df6292-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.361046 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b8zl8"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.361188 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a9627e3-628f-42ee-b6c7-e203097ad785-auth-proxy-config\") pod \"machine-config-operator-74547568cd-n48bp\" (UID: \"6a9627e3-628f-42ee-b6c7-e203097ad785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n48bp" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.362658 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e63c9d66-5727-4750-a48d-f55d5d6358a1-config\") pod \"etcd-operator-b45778765-xbrc2\" (UID: \"e63c9d66-5727-4750-a48d-f55d5d6358a1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.362914 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df0cea7a-1f29-4af5-b8f1-2e1c8873610c-config-volume\") pod \"collect-profiles-29332215-bxfll\" (UID: \"df0cea7a-1f29-4af5-b8f1-2e1c8873610c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.363060 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3c901fd1-c23c-47f0-b44f-102a1965abd6-encryption-config\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: 
I1008 14:25:15.363973 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c901fd1-c23c-47f0-b44f-102a1965abd6-config\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.364003 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/df49d2ff-640d-4cf1-812c-8b7275df6292-audit-dir\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.364152 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df49d2ff-640d-4cf1-812c-8b7275df6292-audit-policies\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.364407 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4rhxp"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.364846 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/df49d2ff-640d-4cf1-812c-8b7275df6292-etcd-client\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.365214 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3c901fd1-c23c-47f0-b44f-102a1965abd6-etcd-client\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.365238 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebee26eb-b32e-459d-b4d2-7a36a325f08b-serving-cert\") pod \"controller-manager-879f6c89f-qqw9b\" (UID: \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.365367 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2ca17841-28c4-4a05-995c-6324cea0ffbd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rtl4z\" (UID: \"2ca17841-28c4-4a05-995c-6324cea0ffbd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rtl4z" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.365592 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/df49d2ff-640d-4cf1-812c-8b7275df6292-encryption-config\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.367263 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/df49d2ff-640d-4cf1-812c-8b7275df6292-serving-cert\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.368759 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-px79m"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.369349 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c901fd1-c23c-47f0-b44f-102a1965abd6-serving-cert\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.369763 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-px79m" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.369792 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.370367 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kdcqx"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.370364 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e63c9d66-5727-4750-a48d-f55d5d6358a1-serving-cert\") pod \"etcd-operator-b45778765-xbrc2\" (UID: \"e63c9d66-5727-4750-a48d-f55d5d6358a1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.373347 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ljq2l"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.375396 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9526z"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.378731 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x6fbb"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.378761 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xbrc2"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.380393 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76ddt"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.381747 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qr5w8"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.383769 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4bx6b"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.383804 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-pxc8x"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.384385 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pxc8x" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.388155 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nzqkt"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.388187 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-px79m"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.388199 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wmls6"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.389094 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wmls6" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.390771 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e63c9d66-5727-4750-a48d-f55d5d6358a1-etcd-client\") pod \"etcd-operator-b45778765-xbrc2\" (UID: \"e63c9d66-5727-4750-a48d-f55d5d6358a1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.394203 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.398261 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9k2sh"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.398306 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.398323 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hrd59"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.404789 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-khxjg"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.404829 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xxrs"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.412461 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df0cea7a-1f29-4af5-b8f1-2e1c8873610c-secret-volume\") pod \"collect-profiles-29332215-bxfll\" (UID: \"df0cea7a-1f29-4af5-b8f1-2e1c8873610c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.412519 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-97pqq"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.413197 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.415703 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wmls6"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.422995 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57"] Oct 08 14:25:15 crc 
kubenswrapper[4624]: I1008 14:25:15.423026 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-n48bp"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.423985 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pxc8x"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.426268 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hbk6h"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.427680 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-w4d7d"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.429415 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.430062 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-w4d7d" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.442039 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c95a7fea-e442-4f6f-b6df-0b8f62c5a13f-proxy-tls\") pod \"machine-config-controller-84d6567774-w755s\" (UID: \"c95a7fea-e442-4f6f-b6df-0b8f62c5a13f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w755s" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.447767 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt6z2\" (UniqueName: \"kubernetes.io/projected/c00e7c1b-2054-427e-ad58-03b97f0ceb83-kube-api-access-rt6z2\") pod \"dns-operator-744455d44c-97pqq\" (UID: \"c00e7c1b-2054-427e-ad58-03b97f0ceb83\") " pod="openshift-dns-operator/dns-operator-744455d44c-97pqq" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.447824 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d96a6e0b-5e82-48ad-b930-b7b82620830a-stats-auth\") pod \"router-default-5444994796-fbw8l\" (UID: \"d96a6e0b-5e82-48ad-b930-b7b82620830a\") " pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.448095 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fmmk\" (UniqueName: \"kubernetes.io/projected/d96a6e0b-5e82-48ad-b930-b7b82620830a-kube-api-access-7fmmk\") pod \"router-default-5444994796-fbw8l\" (UID: \"d96a6e0b-5e82-48ad-b930-b7b82620830a\") " pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.448122 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d96a6e0b-5e82-48ad-b930-b7b82620830a-service-ca-bundle\") pod \"router-default-5444994796-fbw8l\" (UID: \"d96a6e0b-5e82-48ad-b930-b7b82620830a\") " pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.448175 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/72ec1d6f-5923-45ed-a5a8-ad8a268faca5-profile-collector-cert\") pod \"catalog-operator-68c6474976-h9k57\" (UID: 
\"72ec1d6f-5923-45ed-a5a8-ad8a268faca5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.448193 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q4fw\" (UniqueName: \"kubernetes.io/projected/72ec1d6f-5923-45ed-a5a8-ad8a268faca5-kube-api-access-4q4fw\") pod \"catalog-operator-68c6474976-h9k57\" (UID: \"72ec1d6f-5923-45ed-a5a8-ad8a268faca5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.448246 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85911902-d86c-48de-b04f-e12b5885a05c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4bx6b\" (UID: \"85911902-d86c-48de-b04f-e12b5885a05c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4bx6b" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.448264 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d96a6e0b-5e82-48ad-b930-b7b82620830a-metrics-certs\") pod \"router-default-5444994796-fbw8l\" (UID: \"d96a6e0b-5e82-48ad-b930-b7b82620830a\") " pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.448303 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85911902-d86c-48de-b04f-e12b5885a05c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4bx6b\" (UID: \"85911902-d86c-48de-b04f-e12b5885a05c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4bx6b" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.448330 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc5gc\" (UniqueName: \"kubernetes.io/projected/fba88879-d200-4b89-a22c-5b83be19012b-kube-api-access-sc5gc\") pod \"service-ca-9c57cc56f-ddpjz\" (UID: \"fba88879-d200-4b89-a22c-5b83be19012b\") " pod="openshift-service-ca/service-ca-9c57cc56f-ddpjz" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.448359 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrxzf\" (UniqueName: \"kubernetes.io/projected/d3a80e27-d7fd-4b62-b5ae-9719c4f69655-kube-api-access-qrxzf\") pod \"control-plane-machine-set-operator-78cbb6b69f-hrd59\" (UID: \"d3a80e27-d7fd-4b62-b5ae-9719c4f69655\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hrd59" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.448397 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85911902-d86c-48de-b04f-e12b5885a05c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4bx6b\" (UID: \"85911902-d86c-48de-b04f-e12b5885a05c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4bx6b" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.448422 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3a80e27-d7fd-4b62-b5ae-9719c4f69655-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hrd59\" (UID: 
\"d3a80e27-d7fd-4b62-b5ae-9719c4f69655\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hrd59" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.448440 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fba88879-d200-4b89-a22c-5b83be19012b-signing-cabundle\") pod \"service-ca-9c57cc56f-ddpjz\" (UID: \"fba88879-d200-4b89-a22c-5b83be19012b\") " pod="openshift-service-ca/service-ca-9c57cc56f-ddpjz" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.448455 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d96a6e0b-5e82-48ad-b930-b7b82620830a-default-certificate\") pod \"router-default-5444994796-fbw8l\" (UID: \"d96a6e0b-5e82-48ad-b930-b7b82620830a\") " pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.448481 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/72ec1d6f-5923-45ed-a5a8-ad8a268faca5-srv-cert\") pod \"catalog-operator-68c6474976-h9k57\" (UID: \"72ec1d6f-5923-45ed-a5a8-ad8a268faca5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.449239 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.456550 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c00e7c1b-2054-427e-ad58-03b97f0ceb83-metrics-tls\") pod \"dns-operator-744455d44c-97pqq\" (UID: \"c00e7c1b-2054-427e-ad58-03b97f0ceb83\") " pod="openshift-dns-operator/dns-operator-744455d44c-97pqq" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.456632 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fba88879-d200-4b89-a22c-5b83be19012b-signing-key\") pod \"service-ca-9c57cc56f-ddpjz\" (UID: \"fba88879-d200-4b89-a22c-5b83be19012b\") " pod="openshift-service-ca/service-ca-9c57cc56f-ddpjz" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.460151 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/72ec1d6f-5923-45ed-a5a8-ad8a268faca5-profile-collector-cert\") pod \"catalog-operator-68c6474976-h9k57\" (UID: \"72ec1d6f-5923-45ed-a5a8-ad8a268faca5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.492053 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9lhd\" (UniqueName: \"kubernetes.io/projected/60fc8c95-75b8-4032-b407-c0b21022da37-kube-api-access-s9lhd\") pod \"downloads-7954f5f757-kl87s\" (UID: \"60fc8c95-75b8-4032-b407-c0b21022da37\") " pod="openshift-console/downloads-7954f5f757-kl87s" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.512412 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nswfx\" (UniqueName: \"kubernetes.io/projected/100b758b-a285-49a0-a5ec-0b565dce5e1a-kube-api-access-nswfx\") pod \"machine-api-operator-5694c8668f-bbffn\" (UID: \"100b758b-a285-49a0-a5ec-0b565dce5e1a\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-bbffn" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.520871 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-d66t2" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.530702 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7dkf\" (UniqueName: \"kubernetes.io/projected/42ae4a09-81ca-465e-85e7-38a09497805c-kube-api-access-h7dkf\") pod \"openshift-config-operator-7777fb866f-ddrvp\" (UID: \"42ae4a09-81ca-465e-85e7-38a09497805c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.549065 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-kl87s" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.549589 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cxb9h" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.550231 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.562556 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4k9mv" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.564537 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/809bf602-1dba-4d74-8f71-18add0de807a-webhook-cert\") pod \"packageserver-d55dfcdfc-5q99v\" (UID: \"809bf602-1dba-4d74-8f71-18add0de807a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.571707 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.574544 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/809bf602-1dba-4d74-8f71-18add0de807a-apiservice-cert\") pod \"packageserver-d55dfcdfc-5q99v\" (UID: \"809bf602-1dba-4d74-8f71-18add0de807a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.590226 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.597672 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6a9627e3-628f-42ee-b6c7-e203097ad785-images\") pod \"machine-config-operator-74547568cd-n48bp\" (UID: \"6a9627e3-628f-42ee-b6c7-e203097ad785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n48bp" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.610781 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.629190 4624 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.638366 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a9627e3-628f-42ee-b6c7-e203097ad785-proxy-tls\") pod \"machine-config-operator-74547568cd-n48bp\" (UID: \"6a9627e3-628f-42ee-b6c7-e203097ad785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n48bp" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.654041 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.675493 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.689450 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.710552 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.731299 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.749951 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.769828 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.769883 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-d66t2"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.771470 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.775671 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gkmv8"] Oct 08 14:25:15 crc kubenswrapper[4624]: W1008 14:25:15.787123 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f1c0e12_944e_4afe_a75d_fb0f25d3ac29.slice/crio-f219bdaf9c2d721ec64e4377bd0b87f48a3d3f02624b5b2f1a22f91272e31cd1 WatchSource:0}: Error finding container f219bdaf9c2d721ec64e4377bd0b87f48a3d3f02624b5b2f1a22f91272e31cd1: Status 404 returned error can't find the container with id f219bdaf9c2d721ec64e4377bd0b87f48a3d3f02624b5b2f1a22f91272e31cd1 Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.790352 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.808699 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cxb9h"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.810084 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Oct 08 
14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.816281 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rgn2q"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.818081 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9526z"] Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.829087 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.832249 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: W1008 14:25:15.833301 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb07e9fd_b48d_4a87_8be6_01b98e889cff.slice/crio-f296a932edba41efbc3547550a4709908fe8f837cda9111a1f04cb1c7fd6aa97 WatchSource:0}: Error finding container f296a932edba41efbc3547550a4709908fe8f837cda9111a1f04cb1c7fd6aa97: Status 404 returned error can't find the container with id f296a932edba41efbc3547550a4709908fe8f837cda9111a1f04cb1c7fd6aa97 Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.852467 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4k9mv"] Oct 08 14:25:15 crc kubenswrapper[4624]: W1008 14:25:15.856962 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38e66eda_8d84_40e4_9749_0f7759ddbd0d.slice/crio-77e4b360f558604b7657a59fa602effe4236341240cfb60705086819c0fd5b55 WatchSource:0}: Error finding container 77e4b360f558604b7657a59fa602effe4236341240cfb60705086819c0fd5b55: Status 404 returned error can't find the container with id 77e4b360f558604b7657a59fa602effe4236341240cfb60705086819c0fd5b55 Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.858410 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.870818 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.890176 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.910679 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.930767 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.949512 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.969861 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Oct 08 14:25:15 crc kubenswrapper[4624]: I1008 14:25:15.989098 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Oct 08 14:25:16 crc 
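
Every kubenswrapper message embeds a klog header, e.g. "I1008 14:25:15.816281 4624 kubelet.go:2428]": severity (I/W/E/F), month and day, wall-clock time with microseconds, the emitting PID, and the source file:line. When slicing a log like this one it helps to split the header off mechanically; a small standard-library parser, assuming only the header shape just described:

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogHeader matches "I1008 14:25:16.010181 4624 reflector.go:368] ..."
    var klogHeader = regexp.MustCompile(
        `([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./-]+:\d+)\] (.*)`)

    func main() {
        line := `I1008 14:25:16.010181 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"`
        m := klogHeader.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("no klog header found")
            return
        }
        fmt.Printf("severity=%s month=%s day=%s time=%s pid=%s at=%s\nmsg=%s\n",
            m[1], m[2], m[3], m[4], m[5], m[6], m[7])
    }
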
kubenswrapper[4624]: I1008 14:25:16.010181 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.027073 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c00e7c1b-2054-427e-ad58-03b97f0ceb83-metrics-tls\") pod \"dns-operator-744455d44c-97pqq\" (UID: \"c00e7c1b-2054-427e-ad58-03b97f0ceb83\") " pod="openshift-dns-operator/dns-operator-744455d44c-97pqq" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.030388 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.049177 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.054939 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-kl87s"] Oct 08 14:25:16 crc kubenswrapper[4624]: W1008 14:25:16.068089 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60fc8c95_75b8_4032_b407_c0b21022da37.slice/crio-5fbf5424b11c374fc8cc872aafafded5f7af124c78e9cc657d5eeaebfdf5e906 WatchSource:0}: Error finding container 5fbf5424b11c374fc8cc872aafafded5f7af124c78e9cc657d5eeaebfdf5e906: Status 404 returned error can't find the container with id 5fbf5424b11c374fc8cc872aafafded5f7af124c78e9cc657d5eeaebfdf5e906 Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.070421 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.089320 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.104899 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" event={"ID":"48945eb6-75ee-4ed0-bc04-f83b5888d85d","Type":"ContainerStarted","Data":"9403945eeb7b04834b446ecda7445d7327f2bcd88db65e49a6cc361100532351"} Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.105742 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cxb9h" event={"ID":"bb07e9fd-b48d-4a87-8be6-01b98e889cff","Type":"ContainerStarted","Data":"f296a932edba41efbc3547550a4709908fe8f837cda9111a1f04cb1c7fd6aa97"} Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.106277 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9526z" event={"ID":"38e66eda-8d84-40e4-9749-0f7759ddbd0d","Type":"ContainerStarted","Data":"77e4b360f558604b7657a59fa602effe4236341240cfb60705086819c0fd5b55"} Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.107105 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rgn2q" event={"ID":"82676f42-aabb-4cee-b836-790a48dd9a2e","Type":"ContainerStarted","Data":"866dc6618eb46a6a1619518e41b7fc3c7779e8fde3fbdfb9e62e4f52fa35d82e"} Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.107588 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/downloads-7954f5f757-kl87s" event={"ID":"60fc8c95-75b8-4032-b407-c0b21022da37","Type":"ContainerStarted","Data":"5fbf5424b11c374fc8cc872aafafded5f7af124c78e9cc657d5eeaebfdf5e906"} Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.108334 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-d66t2" event={"ID":"0f1c0e12-944e-4afe-a75d-fb0f25d3ac29","Type":"ContainerStarted","Data":"f219bdaf9c2d721ec64e4377bd0b87f48a3d3f02624b5b2f1a22f91272e31cd1"} Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.109024 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gkmv8" event={"ID":"9f8eefe6-4147-465d-adda-f0ddf9530abb","Type":"ContainerStarted","Data":"21d7153212c58c89dd5dc4aea5cb7bbd51970691dd85eb3be58f70511eda0fb0"} Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.109527 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Oct 08 14:25:16 crc kubenswrapper[4624]: E1008 14:25:16.131376 4624 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Oct 08 14:25:16 crc kubenswrapper[4624]: E1008 14:25:16.131450 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/100b758b-a285-49a0-a5ec-0b565dce5e1a-config podName:100b758b-a285-49a0-a5ec-0b565dce5e1a nodeName:}" failed. No retries permitted until 2025-10-08 14:25:16.631428431 +0000 UTC m=+141.782363508 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/100b758b-a285-49a0-a5ec-0b565dce5e1a-config") pod "machine-api-operator-5694c8668f-bbffn" (UID: "100b758b-a285-49a0-a5ec-0b565dce5e1a") : failed to sync configmap cache: timed out waiting for the condition Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.138847 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp"] Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.149001 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.156206 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vggrv\" (UniqueName: \"kubernetes.io/projected/03313d2c-4c05-4a55-a595-cf633b935c29-kube-api-access-vggrv\") pod \"oauth-openshift-558db77b4-b8zl8\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.171518 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.189846 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.209545 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.230163 4624 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.248148 4624 request.go:700] Waited for 1.00149678s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.250535 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.269921 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.271939 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.292559 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.309594 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.330459 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.348490 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.355254 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fba88879-d200-4b89-a22c-5b83be19012b-signing-key\") pod \"service-ca-9c57cc56f-ddpjz\" (UID: \"fba88879-d200-4b89-a22c-5b83be19012b\") " pod="openshift-service-ca/service-ca-9c57cc56f-ddpjz" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.373061 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.419572 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.421940 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.431694 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.432961 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fba88879-d200-4b89-a22c-5b83be19012b-signing-cabundle\") pod \"service-ca-9c57cc56f-ddpjz\" (UID: \"fba88879-d200-4b89-a22c-5b83be19012b\") " pod="openshift-service-ca/service-ca-9c57cc56f-ddpjz" Oct 08 14:25:16 crc kubenswrapper[4624]: E1008 14:25:16.449919 4624 secret.go:188] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to 
sync secret cache: timed out waiting for the condition Oct 08 14:25:16 crc kubenswrapper[4624]: E1008 14:25:16.450007 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d96a6e0b-5e82-48ad-b930-b7b82620830a-metrics-certs podName:d96a6e0b-5e82-48ad-b930-b7b82620830a nodeName:}" failed. No retries permitted until 2025-10-08 14:25:16.949985981 +0000 UTC m=+142.100921058 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d96a6e0b-5e82-48ad-b930-b7b82620830a-metrics-certs") pod "router-default-5444994796-fbw8l" (UID: "d96a6e0b-5e82-48ad-b930-b7b82620830a") : failed to sync secret cache: timed out waiting for the condition Oct 08 14:25:16 crc kubenswrapper[4624]: E1008 14:25:16.450054 4624 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Oct 08 14:25:16 crc kubenswrapper[4624]: E1008 14:25:16.450127 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3a80e27-d7fd-4b62-b5ae-9719c4f69655-control-plane-machine-set-operator-tls podName:d3a80e27-d7fd-4b62-b5ae-9719c4f69655 nodeName:}" failed. No retries permitted until 2025-10-08 14:25:16.950110044 +0000 UTC m=+142.101045121 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/d3a80e27-d7fd-4b62-b5ae-9719c4f69655-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-hrd59" (UID: "d3a80e27-d7fd-4b62-b5ae-9719c4f69655") : failed to sync secret cache: timed out waiting for the condition Oct 08 14:25:16 crc kubenswrapper[4624]: E1008 14:25:16.450131 4624 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Oct 08 14:25:16 crc kubenswrapper[4624]: E1008 14:25:16.450155 4624 secret.go:188] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Oct 08 14:25:16 crc kubenswrapper[4624]: E1008 14:25:16.450176 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85911902-d86c-48de-b04f-e12b5885a05c-serving-cert podName:85911902-d86c-48de-b04f-e12b5885a05c nodeName:}" failed. No retries permitted until 2025-10-08 14:25:16.950170666 +0000 UTC m=+142.101105743 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85911902-d86c-48de-b04f-e12b5885a05c-serving-cert") pod "openshift-kube-scheduler-operator-5fdd9b5758-4bx6b" (UID: "85911902-d86c-48de-b04f-e12b5885a05c") : failed to sync secret cache: timed out waiting for the condition Oct 08 14:25:16 crc kubenswrapper[4624]: E1008 14:25:16.450190 4624 secret.go:188] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition Oct 08 14:25:16 crc kubenswrapper[4624]: E1008 14:25:16.450208 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d96a6e0b-5e82-48ad-b930-b7b82620830a-stats-auth podName:d96a6e0b-5e82-48ad-b930-b7b82620830a nodeName:}" failed. No retries permitted until 2025-10-08 14:25:16.950203647 +0000 UTC m=+142.101138724 (durationBeforeRetry 500ms). 
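
The E-level secret.go:188/configmap.go:193 and nestedpendingoperations.go:348 entries in this stretch show the volume manager's per-operation backoff: a SetUp that fails because its object cache has not synced is parked with "No retries permitted until" a deadline 500ms out (durationBeforeRetry 500ms), and repeated failures double the delay up to a cap, on the order of two minutes in upstream Kubernetes (the exact cap is an assumption here, not something this log states). The "m=+142.10..." suffixes are Go's monotonic-clock readings printed alongside the wall time. A sketch of the doubling, with the cap marked as assumed:

    package main

    import (
        "fmt"
        "time"
    )

    const (
        initialDelay = 500 * time.Millisecond
        maxDelay     = 2*time.Minute + 2*time.Second // assumed cap, mirrors upstream
    )

    func main() {
        delay := initialDelay
        for attempt := 1; attempt <= 10; attempt++ {
            fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

The "Caches populated" lines for router-metrics-certs-default, router-stats-default, and service-ca-bundle that follow suggest the retries due at ~14:25:16.95 would find their caches ready.
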
Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/d96a6e0b-5e82-48ad-b930-b7b82620830a-stats-auth") pod "router-default-5444994796-fbw8l" (UID: "d96a6e0b-5e82-48ad-b930-b7b82620830a") : failed to sync secret cache: timed out waiting for the condition Oct 08 14:25:16 crc kubenswrapper[4624]: E1008 14:25:16.450220 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d96a6e0b-5e82-48ad-b930-b7b82620830a-service-ca-bundle podName:d96a6e0b-5e82-48ad-b930-b7b82620830a nodeName:}" failed. No retries permitted until 2025-10-08 14:25:16.950214697 +0000 UTC m=+142.101149774 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/d96a6e0b-5e82-48ad-b930-b7b82620830a-service-ca-bundle") pod "router-default-5444994796-fbw8l" (UID: "d96a6e0b-5e82-48ad-b930-b7b82620830a") : failed to sync configmap cache: timed out waiting for the condition Oct 08 14:25:16 crc kubenswrapper[4624]: E1008 14:25:16.449927 4624 secret.go:188] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition Oct 08 14:25:16 crc kubenswrapper[4624]: E1008 14:25:16.450264 4624 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Oct 08 14:25:16 crc kubenswrapper[4624]: E1008 14:25:16.450317 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72ec1d6f-5923-45ed-a5a8-ad8a268faca5-srv-cert podName:72ec1d6f-5923-45ed-a5a8-ad8a268faca5 nodeName:}" failed. No retries permitted until 2025-10-08 14:25:16.950292519 +0000 UTC m=+142.101227686 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/72ec1d6f-5923-45ed-a5a8-ad8a268faca5-srv-cert") pod "catalog-operator-68c6474976-h9k57" (UID: "72ec1d6f-5923-45ed-a5a8-ad8a268faca5") : failed to sync secret cache: timed out waiting for the condition Oct 08 14:25:16 crc kubenswrapper[4624]: E1008 14:25:16.450335 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d96a6e0b-5e82-48ad-b930-b7b82620830a-default-certificate podName:d96a6e0b-5e82-48ad-b930-b7b82620830a nodeName:}" failed. No retries permitted until 2025-10-08 14:25:16.95032654 +0000 UTC m=+142.101261717 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/d96a6e0b-5e82-48ad-b930-b7b82620830a-default-certificate") pod "router-default-5444994796-fbw8l" (UID: "d96a6e0b-5e82-48ad-b930-b7b82620830a") : failed to sync secret cache: timed out waiting for the condition Oct 08 14:25:16 crc kubenswrapper[4624]: E1008 14:25:16.450396 4624 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.450793 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Oct 08 14:25:16 crc kubenswrapper[4624]: E1008 14:25:16.451473 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/85911902-d86c-48de-b04f-e12b5885a05c-config podName:85911902-d86c-48de-b04f-e12b5885a05c nodeName:}" failed. 
No retries permitted until 2025-10-08 14:25:16.951457611 +0000 UTC m=+142.102392688 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/85911902-d86c-48de-b04f-e12b5885a05c-config") pod "openshift-kube-scheduler-operator-5fdd9b5758-4bx6b" (UID: "85911902-d86c-48de-b04f-e12b5885a05c") : failed to sync configmap cache: timed out waiting for the condition Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.473457 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.489505 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.511484 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.529876 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.549085 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.553544 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b8zl8"] Oct 08 14:25:16 crc kubenswrapper[4624]: W1008 14:25:16.561548 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03313d2c_4c05_4a55_a595_cf633b935c29.slice/crio-6671e382f0c4638250ee59f7118f3be4da783463a2b94fd589aad111ccad249b WatchSource:0}: Error finding container 6671e382f0c4638250ee59f7118f3be4da783463a2b94fd589aad111ccad249b: Status 404 returned error can't find the container with id 6671e382f0c4638250ee59f7118f3be4da783463a2b94fd589aad111ccad249b Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.569188 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.590040 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.609307 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.630244 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.652119 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.670181 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.689944 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.716441 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.726128 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/100b758b-a285-49a0-a5ec-0b565dce5e1a-config\") pod \"machine-api-operator-5694c8668f-bbffn\" (UID: \"100b758b-a285-49a0-a5ec-0b565dce5e1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bbffn" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.728980 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.750058 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.770370 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.790390 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.809745 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.829279 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.850836 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.889116 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.934134 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k6tk\" (UniqueName: \"kubernetes.io/projected/ebee26eb-b32e-459d-b4d2-7a36a325f08b-kube-api-access-5k6tk\") pod \"controller-manager-879f6c89f-qqw9b\" (UID: \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.949081 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m48r\" (UniqueName: \"kubernetes.io/projected/2ca17841-28c4-4a05-995c-6324cea0ffbd-kube-api-access-6m48r\") pod \"cluster-image-registry-operator-dc59b4c8b-rtl4z\" (UID: \"2ca17841-28c4-4a05-995c-6324cea0ffbd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rtl4z" Oct 08 14:25:16 crc kubenswrapper[4624]: I1008 14:25:16.985015 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4qh4\" (UniqueName: \"kubernetes.io/projected/df49d2ff-640d-4cf1-812c-8b7275df6292-kube-api-access-g4qh4\") pod \"apiserver-7bbb656c7d-5dd84\" (UID: \"df49d2ff-640d-4cf1-812c-8b7275df6292\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.007490 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p44rn\" (UniqueName: \"kubernetes.io/projected/c95a7fea-e442-4f6f-b6df-0b8f62c5a13f-kube-api-access-p44rn\") pod \"machine-config-controller-84d6567774-w755s\" (UID: 
\"c95a7fea-e442-4f6f-b6df-0b8f62c5a13f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w755s" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.024180 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ca17841-28c4-4a05-995c-6324cea0ffbd-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rtl4z\" (UID: \"2ca17841-28c4-4a05-995c-6324cea0ffbd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rtl4z" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.029620 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85911902-d86c-48de-b04f-e12b5885a05c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4bx6b\" (UID: \"85911902-d86c-48de-b04f-e12b5885a05c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4bx6b" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.029738 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3a80e27-d7fd-4b62-b5ae-9719c4f69655-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hrd59\" (UID: \"d3a80e27-d7fd-4b62-b5ae-9719c4f69655\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hrd59" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.029767 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d96a6e0b-5e82-48ad-b930-b7b82620830a-default-certificate\") pod \"router-default-5444994796-fbw8l\" (UID: \"d96a6e0b-5e82-48ad-b930-b7b82620830a\") " pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.029791 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/72ec1d6f-5923-45ed-a5a8-ad8a268faca5-srv-cert\") pod \"catalog-operator-68c6474976-h9k57\" (UID: \"72ec1d6f-5923-45ed-a5a8-ad8a268faca5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.029837 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d96a6e0b-5e82-48ad-b930-b7b82620830a-stats-auth\") pod \"router-default-5444994796-fbw8l\" (UID: \"d96a6e0b-5e82-48ad-b930-b7b82620830a\") " pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.029864 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d96a6e0b-5e82-48ad-b930-b7b82620830a-service-ca-bundle\") pod \"router-default-5444994796-fbw8l\" (UID: \"d96a6e0b-5e82-48ad-b930-b7b82620830a\") " pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.029971 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85911902-d86c-48de-b04f-e12b5885a05c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4bx6b\" (UID: \"85911902-d86c-48de-b04f-e12b5885a05c\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4bx6b" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.029994 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d96a6e0b-5e82-48ad-b930-b7b82620830a-metrics-certs\") pod \"router-default-5444994796-fbw8l\" (UID: \"d96a6e0b-5e82-48ad-b930-b7b82620830a\") " pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.031538 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d96a6e0b-5e82-48ad-b930-b7b82620830a-service-ca-bundle\") pod \"router-default-5444994796-fbw8l\" (UID: \"d96a6e0b-5e82-48ad-b930-b7b82620830a\") " pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.033223 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85911902-d86c-48de-b04f-e12b5885a05c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4bx6b\" (UID: \"85911902-d86c-48de-b04f-e12b5885a05c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4bx6b" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.033266 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d96a6e0b-5e82-48ad-b930-b7b82620830a-metrics-certs\") pod \"router-default-5444994796-fbw8l\" (UID: \"d96a6e0b-5e82-48ad-b930-b7b82620830a\") " pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.034122 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3a80e27-d7fd-4b62-b5ae-9719c4f69655-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hrd59\" (UID: \"d3a80e27-d7fd-4b62-b5ae-9719c4f69655\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hrd59" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.035010 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d96a6e0b-5e82-48ad-b930-b7b82620830a-stats-auth\") pod \"router-default-5444994796-fbw8l\" (UID: \"d96a6e0b-5e82-48ad-b930-b7b82620830a\") " pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.035833 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d96a6e0b-5e82-48ad-b930-b7b82620830a-default-certificate\") pod \"router-default-5444994796-fbw8l\" (UID: \"d96a6e0b-5e82-48ad-b930-b7b82620830a\") " pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.036183 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/72ec1d6f-5923-45ed-a5a8-ad8a268faca5-srv-cert\") pod \"catalog-operator-68c6474976-h9k57\" (UID: \"72ec1d6f-5923-45ed-a5a8-ad8a268faca5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.044614 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpdlr\" 
(UniqueName: \"kubernetes.io/projected/809bf602-1dba-4d74-8f71-18add0de807a-kube-api-access-gpdlr\") pod \"packageserver-d55dfcdfc-5q99v\" (UID: \"809bf602-1dba-4d74-8f71-18add0de807a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.065468 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dnpb\" (UniqueName: \"kubernetes.io/projected/3c901fd1-c23c-47f0-b44f-102a1965abd6-kube-api-access-2dnpb\") pod \"apiserver-76f77b778f-ljq2l\" (UID: \"3c901fd1-c23c-47f0-b44f-102a1965abd6\") " pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.068116 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rtl4z" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.077036 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.082219 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc2pc\" (UniqueName: \"kubernetes.io/projected/d1e4ee61-1c1c-41eb-82a7-42b6313d863c-kube-api-access-hc2pc\") pod \"machine-approver-56656f9798-7fps5\" (UID: \"d1e4ee61-1c1c-41eb-82a7-42b6313d863c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7fps5" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.082621 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85911902-d86c-48de-b04f-e12b5885a05c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4bx6b\" (UID: \"85911902-d86c-48de-b04f-e12b5885a05c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4bx6b" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.089472 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zcfr\" (UniqueName: \"kubernetes.io/projected/e63c9d66-5727-4750-a48d-f55d5d6358a1-kube-api-access-9zcfr\") pod \"etcd-operator-b45778765-xbrc2\" (UID: \"e63c9d66-5727-4750-a48d-f55d5d6358a1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.090957 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.100307 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w755s" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.104817 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvhw2\" (UniqueName: \"kubernetes.io/projected/6a9627e3-628f-42ee-b6c7-e203097ad785-kube-api-access-tvhw2\") pod \"machine-config-operator-74547568cd-n48bp\" (UID: \"6a9627e3-628f-42ee-b6c7-e203097ad785\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n48bp" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.120967 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.123179 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4k9mv" event={"ID":"c6bcc8a7-80c7-4b6b-8b40-db870b0ea3f0","Type":"ContainerStarted","Data":"17852b0b1526e0fa54bfcf78b1b677e120d1ccda59d34c3c5284e17063d1c99a"} Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.123230 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4k9mv" event={"ID":"c6bcc8a7-80c7-4b6b-8b40-db870b0ea3f0","Type":"ContainerStarted","Data":"d7bcdb12727d9d76880c2b04d5f788ceb86941b710ba893f093f19128f8ac4d4"} Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.123250 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4k9mv" event={"ID":"c6bcc8a7-80c7-4b6b-8b40-db870b0ea3f0","Type":"ContainerStarted","Data":"dccb9bcdb14798e98b2a8462d16c0510c663ad60058471ecd8eac092489cfac5"} Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.127277 4624 generic.go:334] "Generic (PLEG): container finished" podID="42ae4a09-81ca-465e-85e7-38a09497805c" containerID="b2368a570cbbb2a323def155a7c694762d35bac6a378bb7d16670f4a755d17d2" exitCode=0 Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.127370 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp" event={"ID":"42ae4a09-81ca-465e-85e7-38a09497805c","Type":"ContainerDied","Data":"b2368a570cbbb2a323def155a7c694762d35bac6a378bb7d16670f4a755d17d2"} Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.127402 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp" event={"ID":"42ae4a09-81ca-465e-85e7-38a09497805c","Type":"ContainerStarted","Data":"9aea102c3b9553db47c26e5cc2e14a520dbe3fe0b3985c1ca2e4d118fb1a9b57"} Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.129764 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.130042 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rgn2q" event={"ID":"82676f42-aabb-4cee-b836-790a48dd9a2e","Type":"ContainerStarted","Data":"7058fb5813ed3fa07b017c20a830d69435fedbab1079e27e62d8e7b61f6f38bc"} Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.134511 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n48bp" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.142962 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tx75\" (UniqueName: \"kubernetes.io/projected/df0cea7a-1f29-4af5-b8f1-2e1c8873610c-kube-api-access-7tx75\") pod \"collect-profiles-29332215-bxfll\" (UID: \"df0cea7a-1f29-4af5-b8f1-2e1c8873610c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.146933 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" event={"ID":"48945eb6-75ee-4ed0-bc04-f83b5888d85d","Type":"ContainerStarted","Data":"331a5768310336a3e86ae1c6e88b0d1d6a3f5379f6fcb6c7b2c6b869a5bcde76"} Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.147891 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.150103 4624 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-gnrmc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.150149 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" podUID="48945eb6-75ee-4ed0-bc04-f83b5888d85d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.150746 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.152969 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-kl87s" event={"ID":"60fc8c95-75b8-4032-b407-c0b21022da37","Type":"ContainerStarted","Data":"7e3f13fdf273d65ef7cd399bfd88b1c97f51bac5df1166b1194e7078b7d7771d"} Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.154224 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-kl87s" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.156532 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-kl87s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.156582 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kl87s" podUID="60fc8c95-75b8-4032-b407-c0b21022da37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.162115 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-d66t2" 
event={"ID":"0f1c0e12-944e-4afe-a75d-fb0f25d3ac29","Type":"ContainerStarted","Data":"fdb265b23f37188b52037218a11cb4e9434457b7d39af3e5a1dd2486e0a47468"} Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.164995 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gkmv8" event={"ID":"9f8eefe6-4147-465d-adda-f0ddf9530abb","Type":"ContainerStarted","Data":"5b7765e58a1e56051903802a525974b37d0bedd3aece37e6ae5147a009e59894"} Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.165601 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-gkmv8" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.169996 4624 patch_prober.go:28] interesting pod/console-operator-58897d9998-gkmv8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.170051 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gkmv8" podUID="9f8eefe6-4147-465d-adda-f0ddf9530abb" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.170192 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" event={"ID":"03313d2c-4c05-4a55-a595-cf633b935c29","Type":"ContainerStarted","Data":"6671e382f0c4638250ee59f7118f3be4da783463a2b94fd589aad111ccad249b"} Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.173119 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.195091 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.203848 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cxb9h" event={"ID":"bb07e9fd-b48d-4a87-8be6-01b98e889cff","Type":"ContainerStarted","Data":"6aa22dad3c429f0e62bf7696846ae75859e46178e50321c5751386acea6603c7"} Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.209903 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.211101 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9526z" event={"ID":"38e66eda-8d84-40e4-9749-0f7759ddbd0d","Type":"ContainerStarted","Data":"206020fb5a504a7e0cbedcde4718bf0ebf25a9e96edaccef9d714652585a9457"} Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.230145 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.251036 4624 request.go:700] Waited for 1.866290619s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Oct 
08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.255090 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.269140 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.296676 4624 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.300934 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.312158 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.334759 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7fps5" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.341894 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.350299 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.350390 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.360729 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qqw9b"] Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.370026 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.411046 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt6z2\" (UniqueName: \"kubernetes.io/projected/c00e7c1b-2054-427e-ad58-03b97f0ceb83-kube-api-access-rt6z2\") pod \"dns-operator-744455d44c-97pqq\" (UID: \"c00e7c1b-2054-427e-ad58-03b97f0ceb83\") " pod="openshift-dns-operator/dns-operator-744455d44c-97pqq" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.411213 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.424425 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fmmk\" (UniqueName: \"kubernetes.io/projected/d96a6e0b-5e82-48ad-b930-b7b82620830a-kube-api-access-7fmmk\") pod \"router-default-5444994796-fbw8l\" (UID: \"d96a6e0b-5e82-48ad-b930-b7b82620830a\") " pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.450093 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc5gc\" (UniqueName: \"kubernetes.io/projected/fba88879-d200-4b89-a22c-5b83be19012b-kube-api-access-sc5gc\") pod \"service-ca-9c57cc56f-ddpjz\" (UID: \"fba88879-d200-4b89-a22c-5b83be19012b\") " pod="openshift-service-ca/service-ca-9c57cc56f-ddpjz" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.469373 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrxzf\" (UniqueName: \"kubernetes.io/projected/d3a80e27-d7fd-4b62-b5ae-9719c4f69655-kube-api-access-qrxzf\") pod \"control-plane-machine-set-operator-78cbb6b69f-hrd59\" (UID: \"d3a80e27-d7fd-4b62-b5ae-9719c4f69655\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hrd59" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.485388 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85911902-d86c-48de-b04f-e12b5885a05c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4bx6b\" (UID: \"85911902-d86c-48de-b04f-e12b5885a05c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4bx6b" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.503078 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q4fw\" (UniqueName: \"kubernetes.io/projected/72ec1d6f-5923-45ed-a5a8-ad8a268faca5-kube-api-access-4q4fw\") pod \"catalog-operator-68c6474976-h9k57\" (UID: \"72ec1d6f-5923-45ed-a5a8-ad8a268faca5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.529112 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.532810 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-97pqq" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.537375 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/100b758b-a285-49a0-a5ec-0b565dce5e1a-config\") pod \"machine-api-operator-5694c8668f-bbffn\" (UID: \"100b758b-a285-49a0-a5ec-0b565dce5e1a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bbffn" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.594093 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ljq2l"] Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.634706 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rtl4z"] Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.635254 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-w755s"] Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.649443 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xbrc2"] Oct 08 14:25:17 crc kubenswrapper[4624]: W1008 14:25:17.651486 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ca17841_28c4_4a05_995c_6324cea0ffbd.slice/crio-fef07533865c10fd7b9584565a406a5e92ac0f5714d518ec4a6ad378f7f83c42 WatchSource:0}: Error finding container fef07533865c10fd7b9584565a406a5e92ac0f5714d518ec4a6ad378f7f83c42: Status 404 returned error can't find the container with id fef07533865c10fd7b9584565a406a5e92ac0f5714d518ec4a6ad378f7f83c42 Oct 08 14:25:17 crc kubenswrapper[4624]: W1008 14:25:17.652575 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc95a7fea_e442_4f6f_b6df_0b8f62c5a13f.slice/crio-b6cafb4cfcc6bf9d5551e9cc9197925551471eb834edc2bdbac20e43b1395333 WatchSource:0}: Error finding container b6cafb4cfcc6bf9d5551e9cc9197925551471eb834edc2bdbac20e43b1395333: Status 404 returned error can't find the container with id b6cafb4cfcc6bf9d5551e9cc9197925551471eb834edc2bdbac20e43b1395333 Oct 08 14:25:17 crc kubenswrapper[4624]: W1008 14:25:17.661594 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode63c9d66_5727_4750_a48d_f55d5d6358a1.slice/crio-546879dcf9c2b60a4f4f2f3afb428ddadf63244359830d2a941a53f1e21ef3c7 WatchSource:0}: Error finding container 546879dcf9c2b60a4f4f2f3afb428ddadf63244359830d2a941a53f1e21ef3c7: Status 404 returned error can't find the container with id 546879dcf9c2b60a4f4f2f3afb428ddadf63244359830d2a941a53f1e21ef3c7 Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.707107 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bbffn" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.707521 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-ddpjz" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.707830 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hrd59" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.708110 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.709686 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77416f3b-337a-43d7-9c77-2b2e86d59d45-trusted-ca\") pod \"ingress-operator-5b745b69d9-kdcqx\" (UID: \"77416f3b-337a-43d7-9c77-2b2e86d59d45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kdcqx" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.709723 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c439ba4c-3583-4d35-b586-c97f525345a6-registry-certificates\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.709756 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.709780 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c60f1300-c5d8-4993-8332-e2ca6008c358-config\") pod \"kube-apiserver-operator-766d6c64bb-4jmhk\" (UID: \"c60f1300-c5d8-4993-8332-e2ca6008c358\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4jmhk" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.709812 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eec41fc-17f3-4145-8eb5-77c45199ceaa-config\") pod \"kube-controller-manager-operator-78b949d7b-hbk6h\" (UID: \"9eec41fc-17f3-4145-8eb5-77c45199ceaa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hbk6h" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.709825 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1419c0b-876f-4c86-92ef-ce9d66f8849c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9k2sh\" (UID: \"a1419c0b-876f-4c86-92ef-ce9d66f8849c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9k2sh" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.709859 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bsmm\" (UniqueName: \"kubernetes.io/projected/09662517-7f1a-428a-ae36-ba154bede835-kube-api-access-7bsmm\") pod \"multus-admission-controller-857f4d67dd-khxjg\" (UID: \"09662517-7f1a-428a-ae36-ba154bede835\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-khxjg" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.709882 4624 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zgz8\" (UniqueName: \"kubernetes.io/projected/2a612284-a8f8-4887-b09d-c99028cf34be-kube-api-access-2zgz8\") pod \"migrator-59844c95c7-4rhxp\" (UID: \"2a612284-a8f8-4887-b09d-c99028cf34be\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4rhxp" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.709901 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/794c1021-7b0b-4f44-b422-9a17a1c969c7-serving-cert\") pod \"service-ca-operator-777779d784-nzqkt\" (UID: \"794c1021-7b0b-4f44-b422-9a17a1c969c7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nzqkt" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.709975 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrfqg\" (UniqueName: \"kubernetes.io/projected/ae5a1b04-50e9-45ad-abb5-4a71d58f2a31-kube-api-access-vrfqg\") pod \"marketplace-operator-79b997595-x6fbb\" (UID: \"ae5a1b04-50e9-45ad-abb5-4a71d58f2a31\") " pod="openshift-marketplace/marketplace-operator-79b997595-x6fbb" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.710017 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/77416f3b-337a-43d7-9c77-2b2e86d59d45-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kdcqx\" (UID: \"77416f3b-337a-43d7-9c77-2b2e86d59d45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kdcqx" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.710074 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/794c1021-7b0b-4f44-b422-9a17a1c969c7-config\") pod \"service-ca-operator-777779d784-nzqkt\" (UID: \"794c1021-7b0b-4f44-b422-9a17a1c969c7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nzqkt" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.710095 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c60f1300-c5d8-4993-8332-e2ca6008c358-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4jmhk\" (UID: \"c60f1300-c5d8-4993-8332-e2ca6008c358\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4jmhk" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.710116 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c439ba4c-3583-4d35-b586-c97f525345a6-trusted-ca\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.710137 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/09662517-7f1a-428a-ae36-ba154bede835-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-khxjg\" (UID: \"09662517-7f1a-428a-ae36-ba154bede835\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-khxjg" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.710255 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/77416f3b-337a-43d7-9c77-2b2e86d59d45-metrics-tls\") pod \"ingress-operator-5b745b69d9-kdcqx\" (UID: \"77416f3b-337a-43d7-9c77-2b2e86d59d45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kdcqx" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.710331 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ae5a1b04-50e9-45ad-abb5-4a71d58f2a31-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x6fbb\" (UID: \"ae5a1b04-50e9-45ad-abb5-4a71d58f2a31\") " pod="openshift-marketplace/marketplace-operator-79b997595-x6fbb" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.710413 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b04ef3d-c348-4c0a-8c43-ba2e41a1695c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4xxrs\" (UID: \"2b04ef3d-c348-4c0a-8c43-ba2e41a1695c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xxrs" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.710457 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c439ba4c-3583-4d35-b586-c97f525345a6-registry-tls\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.710503 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9eec41fc-17f3-4145-8eb5-77c45199ceaa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hbk6h\" (UID: \"9eec41fc-17f3-4145-8eb5-77c45199ceaa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hbk6h" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.710523 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1419c0b-876f-4c86-92ef-ce9d66f8849c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9k2sh\" (UID: \"a1419c0b-876f-4c86-92ef-ce9d66f8849c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9k2sh" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.710546 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sfbn\" (UniqueName: \"kubernetes.io/projected/794c1021-7b0b-4f44-b422-9a17a1c969c7-kube-api-access-7sfbn\") pod \"service-ca-operator-777779d784-nzqkt\" (UID: \"794c1021-7b0b-4f44-b422-9a17a1c969c7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nzqkt" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.710566 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae5a1b04-50e9-45ad-abb5-4a71d58f2a31-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x6fbb\" (UID: \"ae5a1b04-50e9-45ad-abb5-4a71d58f2a31\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-x6fbb" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.710587 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdrnd\" (UniqueName: \"kubernetes.io/projected/a1419c0b-876f-4c86-92ef-ce9d66f8849c-kube-api-access-fdrnd\") pod \"kube-storage-version-migrator-operator-b67b599dd-9k2sh\" (UID: \"a1419c0b-876f-4c86-92ef-ce9d66f8849c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9k2sh" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.710609 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c439ba4c-3583-4d35-b586-c97f525345a6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.710628 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c439ba4c-3583-4d35-b586-c97f525345a6-bound-sa-token\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.710813 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c439ba4c-3583-4d35-b586-c97f525345a6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.710832 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c60f1300-c5d8-4993-8332-e2ca6008c358-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4jmhk\" (UID: \"c60f1300-c5d8-4993-8332-e2ca6008c358\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4jmhk" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.710865 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9eec41fc-17f3-4145-8eb5-77c45199ceaa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hbk6h\" (UID: \"9eec41fc-17f3-4145-8eb5-77c45199ceaa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hbk6h" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.710973 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjth6\" (UniqueName: \"kubernetes.io/projected/2b04ef3d-c348-4c0a-8c43-ba2e41a1695c-kube-api-access-pjth6\") pod \"package-server-manager-789f6589d5-4xxrs\" (UID: \"2b04ef3d-c348-4c0a-8c43-ba2e41a1695c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xxrs" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.711009 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vgqs\" (UniqueName: 
\"kubernetes.io/projected/c439ba4c-3583-4d35-b586-c97f525345a6-kube-api-access-4vgqs\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.711032 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjzrg\" (UniqueName: \"kubernetes.io/projected/77416f3b-337a-43d7-9c77-2b2e86d59d45-kube-api-access-sjzrg\") pod \"ingress-operator-5b745b69d9-kdcqx\" (UID: \"77416f3b-337a-43d7-9c77-2b2e86d59d45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kdcqx" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.718444 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57" Oct 08 14:25:17 crc kubenswrapper[4624]: E1008 14:25:17.721477 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:18.221460526 +0000 UTC m=+143.372395793 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.729052 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4bx6b" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.743143 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v"] Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.750486 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-n48bp"] Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.779559 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84"] Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.811619 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:17 crc kubenswrapper[4624]: E1008 14:25:17.811852 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:18.311821411 +0000 UTC m=+143.462756488 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.811900 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zgz8\" (UniqueName: \"kubernetes.io/projected/2a612284-a8f8-4887-b09d-c99028cf34be-kube-api-access-2zgz8\") pod \"migrator-59844c95c7-4rhxp\" (UID: \"2a612284-a8f8-4887-b09d-c99028cf34be\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4rhxp" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.811937 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/794c1021-7b0b-4f44-b422-9a17a1c969c7-serving-cert\") pod \"service-ca-operator-777779d784-nzqkt\" (UID: \"794c1021-7b0b-4f44-b422-9a17a1c969c7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nzqkt" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.811978 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrfqg\" (UniqueName: \"kubernetes.io/projected/ae5a1b04-50e9-45ad-abb5-4a71d58f2a31-kube-api-access-vrfqg\") pod \"marketplace-operator-79b997595-x6fbb\" (UID: \"ae5a1b04-50e9-45ad-abb5-4a71d58f2a31\") " pod="openshift-marketplace/marketplace-operator-79b997595-x6fbb" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.812012 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/77416f3b-337a-43d7-9c77-2b2e86d59d45-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kdcqx\" (UID: \"77416f3b-337a-43d7-9c77-2b2e86d59d45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kdcqx" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.812069 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/794c1021-7b0b-4f44-b422-9a17a1c969c7-config\") pod \"service-ca-operator-777779d784-nzqkt\" (UID: \"794c1021-7b0b-4f44-b422-9a17a1c969c7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nzqkt" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.812139 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c60f1300-c5d8-4993-8332-e2ca6008c358-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4jmhk\" (UID: \"c60f1300-c5d8-4993-8332-e2ca6008c358\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4jmhk" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.812175 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/66542923-c777-4a99-a3a7-9ab975b8b0c3-mountpoint-dir\") pod \"csi-hostpathplugin-wmls6\" (UID: \"66542923-c777-4a99-a3a7-9ab975b8b0c3\") " pod="hostpath-provisioner/csi-hostpathplugin-wmls6" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.812203 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c439ba4c-3583-4d35-b586-c97f525345a6-trusted-ca\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.812225 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/09662517-7f1a-428a-ae36-ba154bede835-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-khxjg\" (UID: \"09662517-7f1a-428a-ae36-ba154bede835\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-khxjg" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.812295 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/77416f3b-337a-43d7-9c77-2b2e86d59d45-metrics-tls\") pod \"ingress-operator-5b745b69d9-kdcqx\" (UID: \"77416f3b-337a-43d7-9c77-2b2e86d59d45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kdcqx" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.812322 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/66542923-c777-4a99-a3a7-9ab975b8b0c3-csi-data-dir\") pod \"csi-hostpathplugin-wmls6\" (UID: \"66542923-c777-4a99-a3a7-9ab975b8b0c3\") " pod="hostpath-provisioner/csi-hostpathplugin-wmls6" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.812370 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/30404ab5-dce0-4514-bcc3-5ee7b2f6afb9-node-bootstrap-token\") pod \"machine-config-server-w4d7d\" (UID: \"30404ab5-dce0-4514-bcc3-5ee7b2f6afb9\") " pod="openshift-machine-config-operator/machine-config-server-w4d7d" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.812406 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ae5a1b04-50e9-45ad-abb5-4a71d58f2a31-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x6fbb\" (UID: \"ae5a1b04-50e9-45ad-abb5-4a71d58f2a31\") " pod="openshift-marketplace/marketplace-operator-79b997595-x6fbb" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.812432 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf58b563-ce32-45f3-8d35-f3c71f49f8fe-config-volume\") pod \"dns-default-px79m\" (UID: \"bf58b563-ce32-45f3-8d35-f3c71f49f8fe\") " pod="openshift-dns/dns-default-px79m" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.813920 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c439ba4c-3583-4d35-b586-c97f525345a6-trusted-ca\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.815568 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/794c1021-7b0b-4f44-b422-9a17a1c969c7-serving-cert\") pod \"service-ca-operator-777779d784-nzqkt\" (UID: \"794c1021-7b0b-4f44-b422-9a17a1c969c7\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-nzqkt" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.815755 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/04aba85b-36e3-4fb4-9915-c5dcae83bb0f-srv-cert\") pod \"olm-operator-6b444d44fb-76ddt\" (UID: \"04aba85b-36e3-4fb4-9915-c5dcae83bb0f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76ddt" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.816040 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c439ba4c-3583-4d35-b586-c97f525345a6-registry-tls\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.816097 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b04ef3d-c348-4c0a-8c43-ba2e41a1695c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4xxrs\" (UID: \"2b04ef3d-c348-4c0a-8c43-ba2e41a1695c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xxrs" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.817059 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sfbn\" (UniqueName: \"kubernetes.io/projected/794c1021-7b0b-4f44-b422-9a17a1c969c7-kube-api-access-7sfbn\") pod \"service-ca-operator-777779d784-nzqkt\" (UID: \"794c1021-7b0b-4f44-b422-9a17a1c969c7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nzqkt" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.817159 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9eec41fc-17f3-4145-8eb5-77c45199ceaa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hbk6h\" (UID: \"9eec41fc-17f3-4145-8eb5-77c45199ceaa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hbk6h" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.817177 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1419c0b-876f-4c86-92ef-ce9d66f8849c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9k2sh\" (UID: \"a1419c0b-876f-4c86-92ef-ce9d66f8849c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9k2sh" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.817228 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae5a1b04-50e9-45ad-abb5-4a71d58f2a31-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x6fbb\" (UID: \"ae5a1b04-50e9-45ad-abb5-4a71d58f2a31\") " pod="openshift-marketplace/marketplace-operator-79b997595-x6fbb" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.817246 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e560999a-1afb-48ee-a544-df38bd7b2fa6-cert\") pod \"ingress-canary-pxc8x\" (UID: \"e560999a-1afb-48ee-a544-df38bd7b2fa6\") " 
pod="openshift-ingress-canary/ingress-canary-pxc8x" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.817275 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c439ba4c-3583-4d35-b586-c97f525345a6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.817294 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdrnd\" (UniqueName: \"kubernetes.io/projected/a1419c0b-876f-4c86-92ef-ce9d66f8849c-kube-api-access-fdrnd\") pod \"kube-storage-version-migrator-operator-b67b599dd-9k2sh\" (UID: \"a1419c0b-876f-4c86-92ef-ce9d66f8849c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9k2sh" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.817337 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c439ba4c-3583-4d35-b586-c97f525345a6-bound-sa-token\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.817354 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/04aba85b-36e3-4fb4-9915-c5dcae83bb0f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-76ddt\" (UID: \"04aba85b-36e3-4fb4-9915-c5dcae83bb0f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76ddt" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.817389 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/66542923-c777-4a99-a3a7-9ab975b8b0c3-registration-dir\") pod \"csi-hostpathplugin-wmls6\" (UID: \"66542923-c777-4a99-a3a7-9ab975b8b0c3\") " pod="hostpath-provisioner/csi-hostpathplugin-wmls6" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.817415 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c439ba4c-3583-4d35-b586-c97f525345a6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.817444 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c60f1300-c5d8-4993-8332-e2ca6008c358-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4jmhk\" (UID: \"c60f1300-c5d8-4993-8332-e2ca6008c358\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4jmhk" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.817471 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9eec41fc-17f3-4145-8eb5-77c45199ceaa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hbk6h\" (UID: \"9eec41fc-17f3-4145-8eb5-77c45199ceaa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hbk6h" 
Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.821305 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqz9l\" (UniqueName: \"kubernetes.io/projected/e560999a-1afb-48ee-a544-df38bd7b2fa6-kube-api-access-xqz9l\") pod \"ingress-canary-pxc8x\" (UID: \"e560999a-1afb-48ee-a544-df38bd7b2fa6\") " pod="openshift-ingress-canary/ingress-canary-pxc8x" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.821487 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjth6\" (UniqueName: \"kubernetes.io/projected/2b04ef3d-c348-4c0a-8c43-ba2e41a1695c-kube-api-access-pjth6\") pod \"package-server-manager-789f6589d5-4xxrs\" (UID: \"2b04ef3d-c348-4c0a-8c43-ba2e41a1695c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xxrs" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.821518 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf58b563-ce32-45f3-8d35-f3c71f49f8fe-metrics-tls\") pod \"dns-default-px79m\" (UID: \"bf58b563-ce32-45f3-8d35-f3c71f49f8fe\") " pod="openshift-dns/dns-default-px79m" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.821550 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vgqs\" (UniqueName: \"kubernetes.io/projected/c439ba4c-3583-4d35-b586-c97f525345a6-kube-api-access-4vgqs\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.821573 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gxh2\" (UniqueName: \"kubernetes.io/projected/30404ab5-dce0-4514-bcc3-5ee7b2f6afb9-kube-api-access-4gxh2\") pod \"machine-config-server-w4d7d\" (UID: \"30404ab5-dce0-4514-bcc3-5ee7b2f6afb9\") " pod="openshift-machine-config-operator/machine-config-server-w4d7d" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.821596 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjzrg\" (UniqueName: \"kubernetes.io/projected/77416f3b-337a-43d7-9c77-2b2e86d59d45-kube-api-access-sjzrg\") pod \"ingress-operator-5b745b69d9-kdcqx\" (UID: \"77416f3b-337a-43d7-9c77-2b2e86d59d45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kdcqx" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.821618 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-642nn\" (UniqueName: \"kubernetes.io/projected/66542923-c777-4a99-a3a7-9ab975b8b0c3-kube-api-access-642nn\") pod \"csi-hostpathplugin-wmls6\" (UID: \"66542923-c777-4a99-a3a7-9ab975b8b0c3\") " pod="hostpath-provisioner/csi-hostpathplugin-wmls6" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.822725 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/09662517-7f1a-428a-ae36-ba154bede835-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-khxjg\" (UID: \"09662517-7f1a-428a-ae36-ba154bede835\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-khxjg" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.823336 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ae5a1b04-50e9-45ad-abb5-4a71d58f2a31-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x6fbb\" (UID: \"ae5a1b04-50e9-45ad-abb5-4a71d58f2a31\") " pod="openshift-marketplace/marketplace-operator-79b997595-x6fbb" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.823956 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c439ba4c-3583-4d35-b586-c97f525345a6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.825819 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c439ba4c-3583-4d35-b586-c97f525345a6-registry-tls\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.825894 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b04ef3d-c348-4c0a-8c43-ba2e41a1695c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4xxrs\" (UID: \"2b04ef3d-c348-4c0a-8c43-ba2e41a1695c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xxrs" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.830460 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9eec41fc-17f3-4145-8eb5-77c45199ceaa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hbk6h\" (UID: \"9eec41fc-17f3-4145-8eb5-77c45199ceaa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hbk6h" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.831882 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae5a1b04-50e9-45ad-abb5-4a71d58f2a31-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x6fbb\" (UID: \"ae5a1b04-50e9-45ad-abb5-4a71d58f2a31\") " pod="openshift-marketplace/marketplace-operator-79b997595-x6fbb" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.833231 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd8kt\" (UniqueName: \"kubernetes.io/projected/bf58b563-ce32-45f3-8d35-f3c71f49f8fe-kube-api-access-fd8kt\") pod \"dns-default-px79m\" (UID: \"bf58b563-ce32-45f3-8d35-f3c71f49f8fe\") " pod="openshift-dns/dns-default-px79m" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.833286 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77416f3b-337a-43d7-9c77-2b2e86d59d45-trusted-ca\") pod \"ingress-operator-5b745b69d9-kdcqx\" (UID: \"77416f3b-337a-43d7-9c77-2b2e86d59d45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kdcqx" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.833405 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/66542923-c777-4a99-a3a7-9ab975b8b0c3-plugins-dir\") pod 
\"csi-hostpathplugin-wmls6\" (UID: \"66542923-c777-4a99-a3a7-9ab975b8b0c3\") " pod="hostpath-provisioner/csi-hostpathplugin-wmls6" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.833480 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c439ba4c-3583-4d35-b586-c97f525345a6-registry-certificates\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.833535 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/30404ab5-dce0-4514-bcc3-5ee7b2f6afb9-certs\") pod \"machine-config-server-w4d7d\" (UID: \"30404ab5-dce0-4514-bcc3-5ee7b2f6afb9\") " pod="openshift-machine-config-operator/machine-config-server-w4d7d" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.833588 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.833668 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c60f1300-c5d8-4993-8332-e2ca6008c358-config\") pod \"kube-apiserver-operator-766d6c64bb-4jmhk\" (UID: \"c60f1300-c5d8-4993-8332-e2ca6008c358\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4jmhk" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.833714 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/66542923-c777-4a99-a3a7-9ab975b8b0c3-socket-dir\") pod \"csi-hostpathplugin-wmls6\" (UID: \"66542923-c777-4a99-a3a7-9ab975b8b0c3\") " pod="hostpath-provisioner/csi-hostpathplugin-wmls6" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.833782 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chlzf\" (UniqueName: \"kubernetes.io/projected/04aba85b-36e3-4fb4-9915-c5dcae83bb0f-kube-api-access-chlzf\") pod \"olm-operator-6b444d44fb-76ddt\" (UID: \"04aba85b-36e3-4fb4-9915-c5dcae83bb0f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76ddt" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.833861 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eec41fc-17f3-4145-8eb5-77c45199ceaa-config\") pod \"kube-controller-manager-operator-78b949d7b-hbk6h\" (UID: \"9eec41fc-17f3-4145-8eb5-77c45199ceaa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hbk6h" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.833897 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1419c0b-876f-4c86-92ef-ce9d66f8849c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9k2sh\" (UID: \"a1419c0b-876f-4c86-92ef-ce9d66f8849c\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9k2sh" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.833964 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bsmm\" (UniqueName: \"kubernetes.io/projected/09662517-7f1a-428a-ae36-ba154bede835-kube-api-access-7bsmm\") pod \"multus-admission-controller-857f4d67dd-khxjg\" (UID: \"09662517-7f1a-428a-ae36-ba154bede835\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-khxjg" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.835075 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1419c0b-876f-4c86-92ef-ce9d66f8849c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9k2sh\" (UID: \"a1419c0b-876f-4c86-92ef-ce9d66f8849c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9k2sh" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.835807 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/77416f3b-337a-43d7-9c77-2b2e86d59d45-metrics-tls\") pod \"ingress-operator-5b745b69d9-kdcqx\" (UID: \"77416f3b-337a-43d7-9c77-2b2e86d59d45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kdcqx" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.835827 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77416f3b-337a-43d7-9c77-2b2e86d59d45-trusted-ca\") pod \"ingress-operator-5b745b69d9-kdcqx\" (UID: \"77416f3b-337a-43d7-9c77-2b2e86d59d45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kdcqx" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.836363 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c439ba4c-3583-4d35-b586-c97f525345a6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.837508 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eec41fc-17f3-4145-8eb5-77c45199ceaa-config\") pod \"kube-controller-manager-operator-78b949d7b-hbk6h\" (UID: \"9eec41fc-17f3-4145-8eb5-77c45199ceaa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hbk6h" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.837579 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1419c0b-876f-4c86-92ef-ce9d66f8849c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9k2sh\" (UID: \"a1419c0b-876f-4c86-92ef-ce9d66f8849c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9k2sh" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.839031 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c439ba4c-3583-4d35-b586-c97f525345a6-registry-certificates\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 
14:25:17 crc kubenswrapper[4624]: E1008 14:25:17.839946 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:18.339923555 +0000 UTC m=+143.490858632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.855827 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll"] Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.865705 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zgz8\" (UniqueName: \"kubernetes.io/projected/2a612284-a8f8-4887-b09d-c99028cf34be-kube-api-access-2zgz8\") pod \"migrator-59844c95c7-4rhxp\" (UID: \"2a612284-a8f8-4887-b09d-c99028cf34be\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4rhxp" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.876093 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/794c1021-7b0b-4f44-b422-9a17a1c969c7-config\") pod \"service-ca-operator-777779d784-nzqkt\" (UID: \"794c1021-7b0b-4f44-b422-9a17a1c969c7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nzqkt" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.878479 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c60f1300-c5d8-4993-8332-e2ca6008c358-config\") pod \"kube-apiserver-operator-766d6c64bb-4jmhk\" (UID: \"c60f1300-c5d8-4993-8332-e2ca6008c358\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4jmhk" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.879399 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c60f1300-c5d8-4993-8332-e2ca6008c358-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4jmhk\" (UID: \"c60f1300-c5d8-4993-8332-e2ca6008c358\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4jmhk" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.891797 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/77416f3b-337a-43d7-9c77-2b2e86d59d45-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kdcqx\" (UID: \"77416f3b-337a-43d7-9c77-2b2e86d59d45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kdcqx" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.896928 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-97pqq"] Oct 08 14:25:17 crc kubenswrapper[4624]: W1008 14:25:17.901326 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf0cea7a_1f29_4af5_b8f1_2e1c8873610c.slice/crio-59715a8b1ab5988ae61925eaa419662fcd1eec402035e1c981ae442859b2f3de 
WatchSource:0}: Error finding container 59715a8b1ab5988ae61925eaa419662fcd1eec402035e1c981ae442859b2f3de: Status 404 returned error can't find the container with id 59715a8b1ab5988ae61925eaa419662fcd1eec402035e1c981ae442859b2f3de Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.915561 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrfqg\" (UniqueName: \"kubernetes.io/projected/ae5a1b04-50e9-45ad-abb5-4a71d58f2a31-kube-api-access-vrfqg\") pod \"marketplace-operator-79b997595-x6fbb\" (UID: \"ae5a1b04-50e9-45ad-abb5-4a71d58f2a31\") " pod="openshift-marketplace/marketplace-operator-79b997595-x6fbb" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.925122 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c60f1300-c5d8-4993-8332-e2ca6008c358-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4jmhk\" (UID: \"c60f1300-c5d8-4993-8332-e2ca6008c358\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4jmhk" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.935500 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.936053 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e560999a-1afb-48ee-a544-df38bd7b2fa6-cert\") pod \"ingress-canary-pxc8x\" (UID: \"e560999a-1afb-48ee-a544-df38bd7b2fa6\") " pod="openshift-ingress-canary/ingress-canary-pxc8x" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.936125 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/04aba85b-36e3-4fb4-9915-c5dcae83bb0f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-76ddt\" (UID: \"04aba85b-36e3-4fb4-9915-c5dcae83bb0f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76ddt" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.936173 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/66542923-c777-4a99-a3a7-9ab975b8b0c3-registration-dir\") pod \"csi-hostpathplugin-wmls6\" (UID: \"66542923-c777-4a99-a3a7-9ab975b8b0c3\") " pod="hostpath-provisioner/csi-hostpathplugin-wmls6" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.936202 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqz9l\" (UniqueName: \"kubernetes.io/projected/e560999a-1afb-48ee-a544-df38bd7b2fa6-kube-api-access-xqz9l\") pod \"ingress-canary-pxc8x\" (UID: \"e560999a-1afb-48ee-a544-df38bd7b2fa6\") " pod="openshift-ingress-canary/ingress-canary-pxc8x" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.936248 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf58b563-ce32-45f3-8d35-f3c71f49f8fe-metrics-tls\") pod \"dns-default-px79m\" (UID: \"bf58b563-ce32-45f3-8d35-f3c71f49f8fe\") " pod="openshift-dns/dns-default-px79m" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.936286 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4gxh2\" (UniqueName: \"kubernetes.io/projected/30404ab5-dce0-4514-bcc3-5ee7b2f6afb9-kube-api-access-4gxh2\") pod \"machine-config-server-w4d7d\" (UID: \"30404ab5-dce0-4514-bcc3-5ee7b2f6afb9\") " pod="openshift-machine-config-operator/machine-config-server-w4d7d" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.936321 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-642nn\" (UniqueName: \"kubernetes.io/projected/66542923-c777-4a99-a3a7-9ab975b8b0c3-kube-api-access-642nn\") pod \"csi-hostpathplugin-wmls6\" (UID: \"66542923-c777-4a99-a3a7-9ab975b8b0c3\") " pod="hostpath-provisioner/csi-hostpathplugin-wmls6" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.936356 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd8kt\" (UniqueName: \"kubernetes.io/projected/bf58b563-ce32-45f3-8d35-f3c71f49f8fe-kube-api-access-fd8kt\") pod \"dns-default-px79m\" (UID: \"bf58b563-ce32-45f3-8d35-f3c71f49f8fe\") " pod="openshift-dns/dns-default-px79m" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.936405 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/66542923-c777-4a99-a3a7-9ab975b8b0c3-plugins-dir\") pod \"csi-hostpathplugin-wmls6\" (UID: \"66542923-c777-4a99-a3a7-9ab975b8b0c3\") " pod="hostpath-provisioner/csi-hostpathplugin-wmls6" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.936433 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/30404ab5-dce0-4514-bcc3-5ee7b2f6afb9-certs\") pod \"machine-config-server-w4d7d\" (UID: \"30404ab5-dce0-4514-bcc3-5ee7b2f6afb9\") " pod="openshift-machine-config-operator/machine-config-server-w4d7d" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.936474 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/66542923-c777-4a99-a3a7-9ab975b8b0c3-socket-dir\") pod \"csi-hostpathplugin-wmls6\" (UID: \"66542923-c777-4a99-a3a7-9ab975b8b0c3\") " pod="hostpath-provisioner/csi-hostpathplugin-wmls6" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.936497 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chlzf\" (UniqueName: \"kubernetes.io/projected/04aba85b-36e3-4fb4-9915-c5dcae83bb0f-kube-api-access-chlzf\") pod \"olm-operator-6b444d44fb-76ddt\" (UID: \"04aba85b-36e3-4fb4-9915-c5dcae83bb0f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76ddt" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.936573 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/66542923-c777-4a99-a3a7-9ab975b8b0c3-mountpoint-dir\") pod \"csi-hostpathplugin-wmls6\" (UID: \"66542923-c777-4a99-a3a7-9ab975b8b0c3\") " pod="hostpath-provisioner/csi-hostpathplugin-wmls6" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.936607 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/66542923-c777-4a99-a3a7-9ab975b8b0c3-csi-data-dir\") pod \"csi-hostpathplugin-wmls6\" (UID: \"66542923-c777-4a99-a3a7-9ab975b8b0c3\") " pod="hostpath-provisioner/csi-hostpathplugin-wmls6" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 
14:25:17.936652 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/30404ab5-dce0-4514-bcc3-5ee7b2f6afb9-node-bootstrap-token\") pod \"machine-config-server-w4d7d\" (UID: \"30404ab5-dce0-4514-bcc3-5ee7b2f6afb9\") " pod="openshift-machine-config-operator/machine-config-server-w4d7d" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.936680 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf58b563-ce32-45f3-8d35-f3c71f49f8fe-config-volume\") pod \"dns-default-px79m\" (UID: \"bf58b563-ce32-45f3-8d35-f3c71f49f8fe\") " pod="openshift-dns/dns-default-px79m" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.936730 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/04aba85b-36e3-4fb4-9915-c5dcae83bb0f-srv-cert\") pod \"olm-operator-6b444d44fb-76ddt\" (UID: \"04aba85b-36e3-4fb4-9915-c5dcae83bb0f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76ddt" Oct 08 14:25:17 crc kubenswrapper[4624]: E1008 14:25:17.937225 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:18.437193636 +0000 UTC m=+143.588128923 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.937873 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/66542923-c777-4a99-a3a7-9ab975b8b0c3-plugins-dir\") pod \"csi-hostpathplugin-wmls6\" (UID: \"66542923-c777-4a99-a3a7-9ab975b8b0c3\") " pod="hostpath-provisioner/csi-hostpathplugin-wmls6" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.937977 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/66542923-c777-4a99-a3a7-9ab975b8b0c3-registration-dir\") pod \"csi-hostpathplugin-wmls6\" (UID: \"66542923-c777-4a99-a3a7-9ab975b8b0c3\") " pod="hostpath-provisioner/csi-hostpathplugin-wmls6" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.938584 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/66542923-c777-4a99-a3a7-9ab975b8b0c3-mountpoint-dir\") pod \"csi-hostpathplugin-wmls6\" (UID: \"66542923-c777-4a99-a3a7-9ab975b8b0c3\") " pod="hostpath-provisioner/csi-hostpathplugin-wmls6" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.938785 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/66542923-c777-4a99-a3a7-9ab975b8b0c3-csi-data-dir\") pod \"csi-hostpathplugin-wmls6\" (UID: \"66542923-c777-4a99-a3a7-9ab975b8b0c3\") " pod="hostpath-provisioner/csi-hostpathplugin-wmls6" Oct 08 14:25:17 crc 
kubenswrapper[4624]: I1008 14:25:17.938858 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/66542923-c777-4a99-a3a7-9ab975b8b0c3-socket-dir\") pod \"csi-hostpathplugin-wmls6\" (UID: \"66542923-c777-4a99-a3a7-9ab975b8b0c3\") " pod="hostpath-provisioner/csi-hostpathplugin-wmls6" Oct 08 14:25:17 crc kubenswrapper[4624]: W1008 14:25:17.939943 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a9627e3_628f_42ee_b6c7_e203097ad785.slice/crio-a74e0076362756f7522a5ac9c24334d68654cabe0ec249d487e8231d791d7932 WatchSource:0}: Error finding container a74e0076362756f7522a5ac9c24334d68654cabe0ec249d487e8231d791d7932: Status 404 returned error can't find the container with id a74e0076362756f7522a5ac9c24334d68654cabe0ec249d487e8231d791d7932 Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.941251 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/04aba85b-36e3-4fb4-9915-c5dcae83bb0f-srv-cert\") pod \"olm-operator-6b444d44fb-76ddt\" (UID: \"04aba85b-36e3-4fb4-9915-c5dcae83bb0f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76ddt" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.942536 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/04aba85b-36e3-4fb4-9915-c5dcae83bb0f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-76ddt\" (UID: \"04aba85b-36e3-4fb4-9915-c5dcae83bb0f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76ddt" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.944934 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e560999a-1afb-48ee-a544-df38bd7b2fa6-cert\") pod \"ingress-canary-pxc8x\" (UID: \"e560999a-1afb-48ee-a544-df38bd7b2fa6\") " pod="openshift-ingress-canary/ingress-canary-pxc8x" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.946399 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf58b563-ce32-45f3-8d35-f3c71f49f8fe-metrics-tls\") pod \"dns-default-px79m\" (UID: \"bf58b563-ce32-45f3-8d35-f3c71f49f8fe\") " pod="openshift-dns/dns-default-px79m" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.947047 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/30404ab5-dce0-4514-bcc3-5ee7b2f6afb9-node-bootstrap-token\") pod \"machine-config-server-w4d7d\" (UID: \"30404ab5-dce0-4514-bcc3-5ee7b2f6afb9\") " pod="openshift-machine-config-operator/machine-config-server-w4d7d" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.947529 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf58b563-ce32-45f3-8d35-f3c71f49f8fe-config-volume\") pod \"dns-default-px79m\" (UID: \"bf58b563-ce32-45f3-8d35-f3c71f49f8fe\") " pod="openshift-dns/dns-default-px79m" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.948077 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/30404ab5-dce0-4514-bcc3-5ee7b2f6afb9-certs\") pod \"machine-config-server-w4d7d\" (UID: \"30404ab5-dce0-4514-bcc3-5ee7b2f6afb9\") " 
pod="openshift-machine-config-operator/machine-config-server-w4d7d" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.948440 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sfbn\" (UniqueName: \"kubernetes.io/projected/794c1021-7b0b-4f44-b422-9a17a1c969c7-kube-api-access-7sfbn\") pod \"service-ca-operator-777779d784-nzqkt\" (UID: \"794c1021-7b0b-4f44-b422-9a17a1c969c7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nzqkt" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.966072 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdrnd\" (UniqueName: \"kubernetes.io/projected/a1419c0b-876f-4c86-92ef-ce9d66f8849c-kube-api-access-fdrnd\") pod \"kube-storage-version-migrator-operator-b67b599dd-9k2sh\" (UID: \"a1419c0b-876f-4c86-92ef-ce9d66f8849c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9k2sh" Oct 08 14:25:17 crc kubenswrapper[4624]: I1008 14:25:17.985696 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c439ba4c-3583-4d35-b586-c97f525345a6-bound-sa-token\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.004153 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjzrg\" (UniqueName: \"kubernetes.io/projected/77416f3b-337a-43d7-9c77-2b2e86d59d45-kube-api-access-sjzrg\") pod \"ingress-operator-5b745b69d9-kdcqx\" (UID: \"77416f3b-337a-43d7-9c77-2b2e86d59d45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kdcqx" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.027043 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjth6\" (UniqueName: \"kubernetes.io/projected/2b04ef3d-c348-4c0a-8c43-ba2e41a1695c-kube-api-access-pjth6\") pod \"package-server-manager-789f6589d5-4xxrs\" (UID: \"2b04ef3d-c348-4c0a-8c43-ba2e41a1695c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xxrs" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.037836 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:18 crc kubenswrapper[4624]: E1008 14:25:18.038210 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:18.538193887 +0000 UTC m=+143.689128964 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.043621 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vgqs\" (UniqueName: \"kubernetes.io/projected/c439ba4c-3583-4d35-b586-c97f525345a6-kube-api-access-4vgqs\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.063620 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9eec41fc-17f3-4145-8eb5-77c45199ceaa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hbk6h\" (UID: \"9eec41fc-17f3-4145-8eb5-77c45199ceaa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hbk6h" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.067434 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4jmhk" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.087546 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bsmm\" (UniqueName: \"kubernetes.io/projected/09662517-7f1a-428a-ae36-ba154bede835-kube-api-access-7bsmm\") pod \"multus-admission-controller-857f4d67dd-khxjg\" (UID: \"09662517-7f1a-428a-ae36-ba154bede835\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-khxjg" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.092672 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-x6fbb" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.116069 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nzqkt" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.128159 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd8kt\" (UniqueName: \"kubernetes.io/projected/bf58b563-ce32-45f3-8d35-f3c71f49f8fe-kube-api-access-fd8kt\") pod \"dns-default-px79m\" (UID: \"bf58b563-ce32-45f3-8d35-f3c71f49f8fe\") " pod="openshift-dns/dns-default-px79m" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.138956 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:18 crc kubenswrapper[4624]: E1008 14:25:18.139299 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-08 14:25:18.63928462 +0000 UTC m=+143.790219697 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.145903 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4rhxp" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.146507 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqz9l\" (UniqueName: \"kubernetes.io/projected/e560999a-1afb-48ee-a544-df38bd7b2fa6-kube-api-access-xqz9l\") pod \"ingress-canary-pxc8x\" (UID: \"e560999a-1afb-48ee-a544-df38bd7b2fa6\") " pod="openshift-ingress-canary/ingress-canary-pxc8x" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.158024 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hbk6h" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.174214 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9k2sh" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.187467 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gxh2\" (UniqueName: \"kubernetes.io/projected/30404ab5-dce0-4514-bcc3-5ee7b2f6afb9-kube-api-access-4gxh2\") pod \"machine-config-server-w4d7d\" (UID: \"30404ab5-dce0-4514-bcc3-5ee7b2f6afb9\") " pod="openshift-machine-config-operator/machine-config-server-w4d7d" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.204072 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-642nn\" (UniqueName: \"kubernetes.io/projected/66542923-c777-4a99-a3a7-9ab975b8b0c3-kube-api-access-642nn\") pod \"csi-hostpathplugin-wmls6\" (UID: \"66542923-c777-4a99-a3a7-9ab975b8b0c3\") " pod="hostpath-provisioner/csi-hostpathplugin-wmls6" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.216140 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chlzf\" (UniqueName: \"kubernetes.io/projected/04aba85b-36e3-4fb4-9915-c5dcae83bb0f-kube-api-access-chlzf\") pod \"olm-operator-6b444d44fb-76ddt\" (UID: \"04aba85b-36e3-4fb4-9915-c5dcae83bb0f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76ddt" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.218845 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" event={"ID":"ebee26eb-b32e-459d-b4d2-7a36a325f08b","Type":"ContainerStarted","Data":"4079ed52546fd7ba0d5657c8e23169f27d8a4886d5fbf6b21e4a0e8bb167cb8f"} Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.221760 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v" 
event={"ID":"809bf602-1dba-4d74-8f71-18add0de807a","Type":"ContainerStarted","Data":"6e52de4f0d95bbaebb56c321316350b48fcdb5862bb6723623dc2a0eec282e5a"} Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.230435 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7fps5" event={"ID":"d1e4ee61-1c1c-41eb-82a7-42b6313d863c","Type":"ContainerStarted","Data":"c2001baa011f12e99bcf130ff8a0fd548ac2c7e88e011301aef6bfdf0cdbcaba"} Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.234318 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xxrs" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.240241 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" event={"ID":"df49d2ff-640d-4cf1-812c-8b7275df6292","Type":"ContainerStarted","Data":"d2575ea76e8b925c67518d39812533d596965836cb87ebbe72a29d579faaf8db"} Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.241225 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:18 crc kubenswrapper[4624]: E1008 14:25:18.241573 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:18.741560195 +0000 UTC m=+143.892495272 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.255283 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" event={"ID":"e63c9d66-5727-4750-a48d-f55d5d6358a1","Type":"ContainerStarted","Data":"546879dcf9c2b60a4f4f2f3afb428ddadf63244359830d2a941a53f1e21ef3c7"} Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.296402 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-97pqq" event={"ID":"c00e7c1b-2054-427e-ad58-03b97f0ceb83","Type":"ContainerStarted","Data":"056d29363132d83a91d32bc18c66cbc9e546da71410488d8f50e2d00f5a9745a"} Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.297650 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll" event={"ID":"df0cea7a-1f29-4af5-b8f1-2e1c8873610c","Type":"ContainerStarted","Data":"59715a8b1ab5988ae61925eaa419662fcd1eec402035e1c981ae442859b2f3de"} Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.298742 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" event={"ID":"03313d2c-4c05-4a55-a595-cf633b935c29","Type":"ContainerStarted","Data":"f8af31dd925ac51c15f3afa768863811cdb7f74eeeabc95479ee817003853517"} Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.299245 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kdcqx" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.299774 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" event={"ID":"3c901fd1-c23c-47f0-b44f-102a1965abd6","Type":"ContainerStarted","Data":"bff63af2ac2301521915b3d57bf6bae99ea4ae12150ecec39097b403648cc367"} Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.301607 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n48bp" event={"ID":"6a9627e3-628f-42ee-b6c7-e203097ad785","Type":"ContainerStarted","Data":"a74e0076362756f7522a5ac9c24334d68654cabe0ec249d487e8231d791d7932"} Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.307244 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rtl4z" event={"ID":"2ca17841-28c4-4a05-995c-6324cea0ffbd","Type":"ContainerStarted","Data":"fef07533865c10fd7b9584565a406a5e92ac0f5714d518ec4a6ad378f7f83c42"} Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.313236 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w755s" event={"ID":"c95a7fea-e442-4f6f-b6df-0b8f62c5a13f","Type":"ContainerStarted","Data":"b6cafb4cfcc6bf9d5551e9cc9197925551471eb834edc2bdbac20e43b1395333"} Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.316282 4624 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-gnrmc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.316313 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" podUID="48945eb6-75ee-4ed0-bc04-f83b5888d85d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.318038 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-kl87s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.318111 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kl87s" podUID="60fc8c95-75b8-4032-b407-c0b21022da37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.319214 4624 patch_prober.go:28] interesting pod/console-operator-58897d9998-gkmv8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.319264 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gkmv8" podUID="9f8eefe6-4147-465d-adda-f0ddf9530abb" 
containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.344903 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:18 crc kubenswrapper[4624]: E1008 14:25:18.345776 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:18.845762582 +0000 UTC m=+143.996697659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.348334 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-khxjg" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.351621 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76ddt" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.369477 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-px79m" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.380921 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pxc8x" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.402976 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wmls6" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.409911 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-w4d7d" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.450611 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:18 crc kubenswrapper[4624]: E1008 14:25:18.450887 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:18.950870933 +0000 UTC m=+144.101806100 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.526562 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4bx6b"] Oct 08 14:25:18 crc kubenswrapper[4624]: W1008 14:25:18.544326 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd96a6e0b_5e82_48ad_b930_b7b82620830a.slice/crio-0cd4dde9e27c9fd7225ff9f0831ff656b55c98290e7e278698496056e07be703 WatchSource:0}: Error finding container 0cd4dde9e27c9fd7225ff9f0831ff656b55c98290e7e278698496056e07be703: Status 404 returned error can't find the container with id 0cd4dde9e27c9fd7225ff9f0831ff656b55c98290e7e278698496056e07be703 Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.557087 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:18 crc kubenswrapper[4624]: E1008 14:25:18.557487 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:19.057463324 +0000 UTC m=+144.208398421 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.645185 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bbffn"] Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.661163 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.661294 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hrd59"] Oct 08 14:25:18 crc kubenswrapper[4624]: E1008 14:25:18.661587 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-08 14:25:19.161572998 +0000 UTC m=+144.312508075 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.718521 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57"] Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.724455 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nzqkt"] Oct 08 14:25:18 crc kubenswrapper[4624]: W1008 14:25:18.739790 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod100b758b_a285_49a0_a5ec_0b565dce5e1a.slice/crio-d237c488ad6e66f5a82940701d5596d24052692793f88a415104711cb7d700d6 WatchSource:0}: Error finding container d237c488ad6e66f5a82940701d5596d24052692793f88a415104711cb7d700d6: Status 404 returned error can't find the container with id d237c488ad6e66f5a82940701d5596d24052692793f88a415104711cb7d700d6 Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.761824 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:18 crc kubenswrapper[4624]: E1008 14:25:18.762049 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:19.262028874 +0000 UTC m=+144.412963971 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.762127 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:18 crc kubenswrapper[4624]: E1008 14:25:18.762480 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:19.262463986 +0000 UTC m=+144.413399063 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:18 crc kubenswrapper[4624]: W1008 14:25:18.808126 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3a80e27_d7fd_4b62_b5ae_9719c4f69655.slice/crio-56ce1e9f9b069100dd92fec13cd193b32855a04e11e85329a98fbb59fbd5df5b WatchSource:0}: Error finding container 56ce1e9f9b069100dd92fec13cd193b32855a04e11e85329a98fbb59fbd5df5b: Status 404 returned error can't find the container with id 56ce1e9f9b069100dd92fec13cd193b32855a04e11e85329a98fbb59fbd5df5b Oct 08 14:25:18 crc kubenswrapper[4624]: W1008 14:25:18.825042 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod794c1021_7b0b_4f44_b422_9a17a1c969c7.slice/crio-58102da578e0b9dc5c86a00d9024710e32df6be8c25bb5ed0756fde8a0713ebb WatchSource:0}: Error finding container 58102da578e0b9dc5c86a00d9024710e32df6be8c25bb5ed0756fde8a0713ebb: Status 404 returned error can't find the container with id 58102da578e0b9dc5c86a00d9024710e32df6be8c25bb5ed0756fde8a0713ebb Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.862910 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:18 crc kubenswrapper[4624]: E1008 14:25:18.863852 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:19.363832617 +0000 UTC m=+144.514767684 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.964148 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4rhxp"] Oct 08 14:25:18 crc kubenswrapper[4624]: I1008 14:25:18.964822 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:18 crc kubenswrapper[4624]: E1008 14:25:18.965143 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:19.465131225 +0000 UTC m=+144.616066302 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.001971 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-ddpjz"] Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.028898 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4jmhk"] Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.045953 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-kl87s" podStartSLOduration=123.045932824 podStartE2EDuration="2m3.045932824s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:19.040887889 +0000 UTC m=+144.191822966" watchObservedRunningTime="2025-10-08 14:25:19.045932824 +0000 UTC m=+144.196867901" Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.069362 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:19 crc kubenswrapper[4624]: E1008 14:25:19.069725 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-08 14:25:19.569690072 +0000 UTC m=+144.720625149 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.176525 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:19 crc kubenswrapper[4624]: E1008 14:25:19.178010 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:19.677995109 +0000 UTC m=+144.828930186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.193849 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x6fbb"] Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.238076 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" podStartSLOduration=123.238055001 podStartE2EDuration="2m3.238055001s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:19.207402668 +0000 UTC m=+144.358337745" watchObservedRunningTime="2025-10-08 14:25:19.238055001 +0000 UTC m=+144.388990078" Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.259557 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9526z" podStartSLOduration=123.259536237 podStartE2EDuration="2m3.259536237s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:19.232998355 +0000 UTC m=+144.383933432" watchObservedRunningTime="2025-10-08 14:25:19.259536237 +0000 UTC m=+144.410471314" Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.270745 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xxrs"] Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.277544 4624 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:19 crc kubenswrapper[4624]: E1008 14:25:19.278019 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:19.778000943 +0000 UTC m=+144.928936020 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.331455 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cxb9h" podStartSLOduration=123.331437277 podStartE2EDuration="2m3.331437277s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:19.329919936 +0000 UTC m=+144.480855013" watchObservedRunningTime="2025-10-08 14:25:19.331437277 +0000 UTC m=+144.482372354" Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.380856 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:19 crc kubenswrapper[4624]: E1008 14:25:19.381248 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:19.881233383 +0000 UTC m=+145.032168460 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.392228 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9k2sh"] Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.416817 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57" event={"ID":"72ec1d6f-5923-45ed-a5a8-ad8a268faca5","Type":"ContainerStarted","Data":"6d662d625617f9ef532f005d7580bd23167b890645fd1665e1d853a1d30678e2"} Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.439837 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w755s" event={"ID":"c95a7fea-e442-4f6f-b6df-0b8f62c5a13f","Type":"ContainerStarted","Data":"2b7ac1e369059199ee4aae2ea6844e611dd72ad9a56fa3ea18518979b78e46bc"} Oct 08 14:25:19 crc kubenswrapper[4624]: W1008 14:25:19.452214 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfba88879_d200_4b89_a22c_5b83be19012b.slice/crio-e313cd61d735777f3906c5dbbe598d5b44ef767d8c0362e9d4a73f538d2e0832 WatchSource:0}: Error finding container e313cd61d735777f3906c5dbbe598d5b44ef767d8c0362e9d4a73f538d2e0832: Status 404 returned error can't find the container with id e313cd61d735777f3906c5dbbe598d5b44ef767d8c0362e9d4a73f538d2e0832 Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.464410 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bbffn" event={"ID":"100b758b-a285-49a0-a5ec-0b565dce5e1a","Type":"ContainerStarted","Data":"d237c488ad6e66f5a82940701d5596d24052692793f88a415104711cb7d700d6"} Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.481728 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:19 crc kubenswrapper[4624]: E1008 14:25:19.482217 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:19.982148502 +0000 UTC m=+145.133083579 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.482496 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:19 crc kubenswrapper[4624]: E1008 14:25:19.482805 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:19.982791799 +0000 UTC m=+145.133726876 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.501493 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-d66t2" podStartSLOduration=123.501469001 podStartE2EDuration="2m3.501469001s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:19.491295657 +0000 UTC m=+144.642230734" watchObservedRunningTime="2025-10-08 14:25:19.501469001 +0000 UTC m=+144.652404078" Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.505081 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hbk6h"] Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.505140 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-fbw8l" event={"ID":"d96a6e0b-5e82-48ad-b930-b7b82620830a","Type":"ContainerStarted","Data":"0cd4dde9e27c9fd7225ff9f0831ff656b55c98290e7e278698496056e07be703"} Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.505182 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nzqkt" event={"ID":"794c1021-7b0b-4f44-b422-9a17a1c969c7","Type":"ContainerStarted","Data":"58102da578e0b9dc5c86a00d9024710e32df6be8c25bb5ed0756fde8a0713ebb"} Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.527668 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4k9mv" podStartSLOduration=123.527628033 podStartE2EDuration="2m3.527628033s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:19.526957275 +0000 UTC m=+144.677892362" watchObservedRunningTime="2025-10-08 14:25:19.527628033 +0000 UTC m=+144.678563110" Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.532861 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll" event={"ID":"df0cea7a-1f29-4af5-b8f1-2e1c8873610c","Type":"ContainerStarted","Data":"407252a5d7e1f7ee74cb8d86902251b8c725122f417b8798f79afee34f08f67a"} Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.536724 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" event={"ID":"ebee26eb-b32e-459d-b4d2-7a36a325f08b","Type":"ContainerStarted","Data":"d6c6b13aec750ee7402349dd513194bce74f3cec9a4f18653fbd86bd7a0c631f"} Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.537863 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.542935 4624 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qqw9b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.542987 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" podUID="ebee26eb-b32e-459d-b4d2-7a36a325f08b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.543972 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp" event={"ID":"42ae4a09-81ca-465e-85e7-38a09497805c","Type":"ContainerStarted","Data":"47083fca353a554f2af98fb26cd00beb43cfea707a39b48255a42cf96318b11b"} Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.544839 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp" Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.553963 4624 generic.go:334] "Generic (PLEG): container finished" podID="3c901fd1-c23c-47f0-b44f-102a1965abd6" containerID="fa5001bbbcbdc6073cef612669ad37fec40de51d980b235801691ab68c522cc1" exitCode=0 Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.554257 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" event={"ID":"3c901fd1-c23c-47f0-b44f-102a1965abd6","Type":"ContainerDied","Data":"fa5001bbbcbdc6073cef612669ad37fec40de51d980b235801691ab68c522cc1"} Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.555972 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" event={"ID":"e63c9d66-5727-4750-a48d-f55d5d6358a1","Type":"ContainerStarted","Data":"36bba36ad2e862f4281a58c49700cca0b565d94a82526d0c6997207fb0386eb7"} Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.558080 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7fps5" event={"ID":"d1e4ee61-1c1c-41eb-82a7-42b6313d863c","Type":"ContainerStarted","Data":"880c75674df36150b87359c7b43f9064a31b5820198ad98e6b9d94d94f37a728"} Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.561338 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4bx6b" event={"ID":"85911902-d86c-48de-b04f-e12b5885a05c","Type":"ContainerStarted","Data":"4049184c78c30d71da6ad115eb5011f5cf59e50042e7a00a42e75f189205338b"} Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.562870 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v" event={"ID":"809bf602-1dba-4d74-8f71-18add0de807a","Type":"ContainerStarted","Data":"b18ccffb91145e08ac8db356573ec56d33f8078102a4aa42fc4dd1e99bf609ce"} Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.563297 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v" Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.564535 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hrd59" event={"ID":"d3a80e27-d7fd-4b62-b5ae-9719c4f69655","Type":"ContainerStarted","Data":"56ce1e9f9b069100dd92fec13cd193b32855a04e11e85329a98fbb59fbd5df5b"} Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.567794 4624 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5q99v container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:5443/healthz\": dial tcp 10.217.0.19:5443: connect: connection refused" start-of-body= Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.567835 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v" podUID="809bf602-1dba-4d74-8f71-18add0de807a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.19:5443/healthz\": dial tcp 10.217.0.19:5443: connect: connection refused" Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.581056 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n48bp" event={"ID":"6a9627e3-628f-42ee-b6c7-e203097ad785","Type":"ContainerStarted","Data":"a8429c4f20f085ee331baaf608090d77b2b76de7446cb51fad37f4a183f0eb46"} Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.585917 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:19 crc kubenswrapper[4624]: E1008 14:25:19.586103 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:20.086073171 +0000 UTC m=+145.237008248 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.586547 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:19 crc kubenswrapper[4624]: E1008 14:25:19.590741 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:20.090707366 +0000 UTC m=+145.241642443 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.603657 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rtl4z" event={"ID":"2ca17841-28c4-4a05-995c-6324cea0ffbd","Type":"ContainerStarted","Data":"7a4831b0578beb98b4971e73d2262168704072d19c74f60ed4b120bedd6bf12f"} Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.604698 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-kl87s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.604732 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kl87s" podUID="60fc8c95-75b8-4032-b407-c0b21022da37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.605092 4624 patch_prober.go:28] interesting pod/console-operator-58897d9998-gkmv8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.605150 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gkmv8" podUID="9f8eefe6-4147-465d-adda-f0ddf9530abb" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Oct 08 14:25:19 crc kubenswrapper[4624]: 
W1008 14:25:19.658235 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9eec41fc_17f3_4145_8eb5_77c45199ceaa.slice/crio-b9a7791b95fdd3568751f1b0310024123c1f80294d693ffd800a1d0e38bc638a WatchSource:0}: Error finding container b9a7791b95fdd3568751f1b0310024123c1f80294d693ffd800a1d0e38bc638a: Status 404 returned error can't find the container with id b9a7791b95fdd3568751f1b0310024123c1f80294d693ffd800a1d0e38bc638a Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.688015 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:19 crc kubenswrapper[4624]: E1008 14:25:19.689512 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:20.189469956 +0000 UTC m=+145.340405043 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.740104 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wmls6"] Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.795899 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:19 crc kubenswrapper[4624]: E1008 14:25:19.799479 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:20.299457638 +0000 UTC m=+145.450392715 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.859783 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-gkmv8" podStartSLOduration=123.859759087 podStartE2EDuration="2m3.859759087s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:19.85169999 +0000 UTC m=+145.002635067" watchObservedRunningTime="2025-10-08 14:25:19.859759087 +0000 UTC m=+145.010694174" Oct 08 14:25:19 crc kubenswrapper[4624]: W1008 14:25:19.876298 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66542923_c777_4a99_a3a7_9ab975b8b0c3.slice/crio-18ee2e0e117847880bcd935ef9543a9e08991730feff008da457e2eb553d3cba WatchSource:0}: Error finding container 18ee2e0e117847880bcd935ef9543a9e08991730feff008da457e2eb553d3cba: Status 404 returned error can't find the container with id 18ee2e0e117847880bcd935ef9543a9e08991730feff008da457e2eb553d3cba Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.878690 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kdcqx"] Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.886796 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pxc8x"] Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.893173 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-rgn2q" podStartSLOduration=123.893138663 podStartE2EDuration="2m3.893138663s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:19.888993351 +0000 UTC m=+145.039928428" watchObservedRunningTime="2025-10-08 14:25:19.893138663 +0000 UTC m=+145.044073740" Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.899994 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:19 crc kubenswrapper[4624]: E1008 14:25:19.901025 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:20.401005254 +0000 UTC m=+145.551940331 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:19 crc kubenswrapper[4624]: I1008 14:25:19.964673 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76ddt"] Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.002272 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:20 crc kubenswrapper[4624]: E1008 14:25:20.004002 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:20.503989738 +0000 UTC m=+145.654924815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.050722 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" podStartSLOduration=124.050703332 podStartE2EDuration="2m4.050703332s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:20.047525516 +0000 UTC m=+145.198460593" watchObservedRunningTime="2025-10-08 14:25:20.050703332 +0000 UTC m=+145.201638409" Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.051129 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-xbrc2" podStartSLOduration=124.051124033 podStartE2EDuration="2m4.051124033s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:20.016547735 +0000 UTC m=+145.167482812" watchObservedRunningTime="2025-10-08 14:25:20.051124033 +0000 UTC m=+145.202059110" Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.090841 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" podStartSLOduration=124.090822838 podStartE2EDuration="2m4.090822838s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 
14:25:20.089290887 +0000 UTC m=+145.240225964" watchObservedRunningTime="2025-10-08 14:25:20.090822838 +0000 UTC m=+145.241757915" Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.103590 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:20 crc kubenswrapper[4624]: E1008 14:25:20.103910 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:20.603891139 +0000 UTC m=+145.754826206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.118148 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.141094 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp" podStartSLOduration=124.141073627 podStartE2EDuration="2m4.141073627s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:20.138757695 +0000 UTC m=+145.289692782" watchObservedRunningTime="2025-10-08 14:25:20.141073627 +0000 UTC m=+145.292008724" Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.176996 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rtl4z" podStartSLOduration=124.176980871 podStartE2EDuration="2m4.176980871s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:20.175285525 +0000 UTC m=+145.326220622" watchObservedRunningTime="2025-10-08 14:25:20.176980871 +0000 UTC m=+145.327915948" Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.217443 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-px79m"] Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.242238 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:20 crc kubenswrapper[4624]: E1008 14:25:20.243058 4624 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:20.743045104 +0000 UTC m=+145.893980181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.316004 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v" podStartSLOduration=124.315982861 podStartE2EDuration="2m4.315982861s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:20.275840054 +0000 UTC m=+145.426775131" watchObservedRunningTime="2025-10-08 14:25:20.315982861 +0000 UTC m=+145.466917938" Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.339501 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll" podStartSLOduration=124.339470651 podStartE2EDuration="2m4.339470651s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:20.315525208 +0000 UTC m=+145.466460285" watchObservedRunningTime="2025-10-08 14:25:20.339470651 +0000 UTC m=+145.490405728" Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.348369 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:20 crc kubenswrapper[4624]: E1008 14:25:20.348842 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:20.848822122 +0000 UTC m=+145.999757199 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.373548 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-khxjg"] Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.450144 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:20 crc kubenswrapper[4624]: E1008 14:25:20.450546 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:20.950531432 +0000 UTC m=+146.101466509 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.558165 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:20 crc kubenswrapper[4624]: E1008 14:25:20.558565 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:21.058547331 +0000 UTC m=+146.209482408 (durationBeforeRetry 500ms). 
Oct 08 14:25:20 crc kubenswrapper[4624]: W1008 14:25:20.584765 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09662517_7f1a_428a_ae36_ba154bede835.slice/crio-748bc544ddcb657202494ba21614697a063bae1d7ffa0d1b6f9b7817d1d5d2d5 WatchSource:0}: Error finding container 748bc544ddcb657202494ba21614697a063bae1d7ffa0d1b6f9b7817d1d5d2d5: Status 404 returned error can't find the container with id 748bc544ddcb657202494ba21614697a063bae1d7ffa0d1b6f9b7817d1d5d2d5
Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.614866 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hbk6h" event={"ID":"9eec41fc-17f3-4145-8eb5-77c45199ceaa","Type":"ContainerStarted","Data":"b9a7791b95fdd3568751f1b0310024123c1f80294d693ffd800a1d0e38bc638a"}
Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.616773 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4rhxp" event={"ID":"2a612284-a8f8-4887-b09d-c99028cf34be","Type":"ContainerStarted","Data":"7c21fa77bb8512723ddbf5c3f600965c5eff187924782738baa165a629131be4"}
Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.619003 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-ddpjz" event={"ID":"fba88879-d200-4b89-a22c-5b83be19012b","Type":"ContainerStarted","Data":"e313cd61d735777f3906c5dbbe598d5b44ef767d8c0362e9d4a73f538d2e0832"}
Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.620121 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wmls6" event={"ID":"66542923-c777-4a99-a3a7-9ab975b8b0c3","Type":"ContainerStarted","Data":"18ee2e0e117847880bcd935ef9543a9e08991730feff008da457e2eb553d3cba"}
Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.621858 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pxc8x" event={"ID":"e560999a-1afb-48ee-a544-df38bd7b2fa6","Type":"ContainerStarted","Data":"be277258e0d50136a8327dd664ef45dcd5480a3989a0e5778806ec4d1473a7ca"}
Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.623603 4624 generic.go:334] "Generic (PLEG): container finished" podID="df49d2ff-640d-4cf1-812c-8b7275df6292" containerID="efb824628c8fba714eaba3bc49c12e6b2226875b0272ccbadfa2e21a89b0473f" exitCode=0
Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.623671 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" event={"ID":"df49d2ff-640d-4cf1-812c-8b7275df6292","Type":"ContainerDied","Data":"efb824628c8fba714eaba3bc49c12e6b2226875b0272ccbadfa2e21a89b0473f"}
Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.626904 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nzqkt" event={"ID":"794c1021-7b0b-4f44-b422-9a17a1c969c7","Type":"ContainerStarted","Data":"ec1e4650f1766fb975dcd8f7bea2cd149d3ad90e3b2ada80fec63f7f37edfce0"}
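
The "SyncLoop (PLEG)" burst above comes from the Pod Lifecycle Event Generator, which relists containers from the runtime (CRI-O here) and turns state transitions into ContainerStarted/ContainerDied events that trigger pod syncs; the "Generic (PLEG): container finished ... exitCode=0" line is a container for the oauth-apiserver pod exiting cleanly, which at this phase of bring-up is typically an init container completing. The manager.go:1169 "can't find the container" warning is the common benign race in which a cgroup watch fires before cAdvisor can inspect the new container; the same ID (748bc544...) is reported as ContainerStarted for multus-admission-controller moments later. A toy relist diff in the PLEG spirit (the types and trimmed IDs are illustrative only):

    package main

    import "fmt"

    type state string

    const (
        running state = "running"
        exited  state = "exited"
    )

    // diff compares two relist snapshots keyed by container ID and emits
    // the lifecycle events a PLEG-style loop would hand to the sync loop.
    func diff(prev, curr map[string]state) []string {
        var events []string
        for id, s := range curr {
            switch {
            case s == running && prev[id] != running:
                events = append(events, "ContainerStarted "+id)
            case s == exited && prev[id] == running:
                events = append(events, "ContainerDied "+id)
            }
        }
        return events
    }

    func main() {
        prev := map[string]state{"efb82462": running}
        curr := map[string]state{"efb82462": exited, "748bc544": running}
        for _, e := range diff(prev, curr) {
            fmt.Println(e) // ContainerDied efb82462, ContainerStarted 748bc544
        }
    }
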
event={"ID":"794c1021-7b0b-4f44-b422-9a17a1c969c7","Type":"ContainerStarted","Data":"ec1e4650f1766fb975dcd8f7bea2cd149d3ad90e3b2ada80fec63f7f37edfce0"} Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.632415 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xxrs" event={"ID":"2b04ef3d-c348-4c0a-8c43-ba2e41a1695c","Type":"ContainerStarted","Data":"2ef368f66d326001da6fbf25b0bdd53fdc7b5846051d32fcba40d3f535510439"} Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.642959 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-x6fbb" event={"ID":"ae5a1b04-50e9-45ad-abb5-4a71d58f2a31","Type":"ContainerStarted","Data":"7310196460ddaeafe9680b4e823e427d26b9408d171ecba97ebc415b7d1cd556"} Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.665625 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:20 crc kubenswrapper[4624]: E1008 14:25:20.667348 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:21.167337201 +0000 UTC m=+146.318272278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.677961 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hrd59" event={"ID":"d3a80e27-d7fd-4b62-b5ae-9719c4f69655","Type":"ContainerStarted","Data":"2eb7292b6116346c494aec1f58ca5c17deb5c206956f21e654b618b1c68ec36d"} Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.687803 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-fbw8l" event={"ID":"d96a6e0b-5e82-48ad-b930-b7b82620830a","Type":"ContainerStarted","Data":"e2f8f417e63d4e40b2b4f47b3e27bcb333037e9b5ed6d3906fa186bab98bfc69"} Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.712933 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57" event={"ID":"72ec1d6f-5923-45ed-a5a8-ad8a268faca5","Type":"ContainerStarted","Data":"d15e2b563c9f9625b39276562b47cabaf18dbcb197b4d117017307dadcf161fe"} Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.712988 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.713897 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57" Oct 08 14:25:20 
Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.718034 4624 patch_prober.go:28] interesting pod/router-default-5444994796-fbw8l container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.718077 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fbw8l" podUID="d96a6e0b-5e82-48ad-b930-b7b82620830a" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.728145 4624 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-h9k57 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body=
Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.728212 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57" podUID="72ec1d6f-5923-45ed-a5a8-ad8a268faca5" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused"
Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.742355 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-fbw8l" podStartSLOduration=124.742334774 podStartE2EDuration="2m4.742334774s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:20.741278055 +0000 UTC m=+145.892213142" watchObservedRunningTime="2025-10-08 14:25:20.742334774 +0000 UTC m=+145.893269851"
Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.742854 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hrd59" podStartSLOduration=124.742846938 podStartE2EDuration="2m4.742846938s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:20.700982064 +0000 UTC m=+145.851917141" watchObservedRunningTime="2025-10-08 14:25:20.742846938 +0000 UTC m=+145.893782015"
Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.745261 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kdcqx" event={"ID":"77416f3b-337a-43d7-9c77-2b2e86d59d45","Type":"ContainerStarted","Data":"4db0f177040f790ff442a87bec4d00080856109102f3fd9ff12ae71a130852a1"}
Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.765743 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76ddt" event={"ID":"04aba85b-36e3-4fb4-9915-c5dcae83bb0f","Type":"ContainerStarted","Data":"6792fc6b41742bab50ae5b3571d90c9fe3e042d5952f23a38230ea092468e883"}
Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.766616 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
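
In the pod_startup_latency_tracker lines, podStartE2EDuration is simply watchObservedRunningTime minus podCreationTimestamp, and the zeroed firstStartedPulling/lastFinishedPulling values ("0001-01-01 00:00:00 +0000 UTC") mean no image pull was observed, so these ~2m4s figures reflect how long the pods waited to run, not pull time. For router-default above: 14:25:20.742334774 - 14:23:16 = 2m4.742334774s, exactly the logged podStartSLOduration=124.742334774. A two-line check:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339, "2025-10-08T14:23:16Z")
        running, _ := time.Parse(time.RFC3339Nano, "2025-10-08T14:25:20.742334774Z")
        fmt.Println(running.Sub(created)) // 2m4.742334774s
    }
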
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:20 crc kubenswrapper[4624]: E1008 14:25:20.768156 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:21.268135046 +0000 UTC m=+146.419070123 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.779442 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57" podStartSLOduration=124.779421999 podStartE2EDuration="2m4.779421999s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:20.779265235 +0000 UTC m=+145.930200312" watchObservedRunningTime="2025-10-08 14:25:20.779421999 +0000 UTC m=+145.930357086" Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.790818 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-w4d7d" event={"ID":"30404ab5-dce0-4514-bcc3-5ee7b2f6afb9","Type":"ContainerStarted","Data":"b6489ff7d02118fca3e7a3b9d8ea52252e51a5ed87931ac6639004758d699b4a"} Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.796198 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4jmhk" event={"ID":"c60f1300-c5d8-4993-8332-e2ca6008c358","Type":"ContainerStarted","Data":"c7a2c2bca9c05fe4db17b2e8dd4983204f2d6175d040df878f8e968f8f356338"} Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.800958 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-px79m" event={"ID":"bf58b563-ce32-45f3-8d35-f3c71f49f8fe","Type":"ContainerStarted","Data":"52d3cd4ef0bea9a7909f7c14034da2a192efc5dad8eb7e6fcb0ee4b0d7c1c404"} Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.803461 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-97pqq" event={"ID":"c00e7c1b-2054-427e-ad58-03b97f0ceb83","Type":"ContainerStarted","Data":"00b250912ae2b2b7bf61bcec941e36388954094700937ca909d31b2389d1f769"} Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.814304 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bbffn" event={"ID":"100b758b-a285-49a0-a5ec-0b565dce5e1a","Type":"ContainerStarted","Data":"ae2e958c702800cb99fa1ba21e829b36a15ce7e02395e4cf9b3323b8b6492f72"} Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.830434 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w755s" 
event={"ID":"c95a7fea-e442-4f6f-b6df-0b8f62c5a13f","Type":"ContainerStarted","Data":"971f833f5e36c1231d462fafa541251eccdaac80dc364660439985da4b9d0928"} Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.868783 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:20 crc kubenswrapper[4624]: E1008 14:25:20.869136 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:21.369119417 +0000 UTC m=+146.520054494 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.876081 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-khxjg" event={"ID":"09662517-7f1a-428a-ae36-ba154bede835","Type":"ContainerStarted","Data":"748bc544ddcb657202494ba21614697a063bae1d7ffa0d1b6f9b7817d1d5d2d5"} Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.889595 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9k2sh" event={"ID":"a1419c0b-876f-4c86-92ef-ce9d66f8849c","Type":"ContainerStarted","Data":"c001bfbb58aaef63ddff50c00f2e71d8cf3b231619383e174ca4ecff2d376c13"} Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.893097 4624 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5q99v container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:5443/healthz\": dial tcp 10.217.0.19:5443: connect: connection refused" start-of-body= Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.893158 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v" podUID="809bf602-1dba-4d74-8f71-18add0de807a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.19:5443/healthz\": dial tcp 10.217.0.19:5443: connect: connection refused" Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.895491 4624 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qqw9b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.895546 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" podUID="ebee26eb-b32e-459d-b4d2-7a36a325f08b" containerName="controller-manager" probeResult="failure" 
output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.969373 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:20 crc kubenswrapper[4624]: E1008 14:25:20.969613 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:21.469593443 +0000 UTC m=+146.620528520 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:20 crc kubenswrapper[4624]: I1008 14:25:20.972847 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:20 crc kubenswrapper[4624]: E1008 14:25:20.973522 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:21.473508938 +0000 UTC m=+146.624444015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.074482 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:21 crc kubenswrapper[4624]: E1008 14:25:21.074652 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:21.574623822 +0000 UTC m=+146.725558899 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.074845 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:21 crc kubenswrapper[4624]: E1008 14:25:21.075291 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:21.57528248 +0000 UTC m=+146.726217557 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.181850 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:21 crc kubenswrapper[4624]: E1008 14:25:21.181930 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:21.681912362 +0000 UTC m=+146.832847439 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.182175 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:21 crc kubenswrapper[4624]: E1008 14:25:21.182474 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:21.682463347 +0000 UTC m=+146.833398424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.283101 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:21 crc kubenswrapper[4624]: E1008 14:25:21.283361 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:21.783309223 +0000 UTC m=+146.934244310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.385801 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:21 crc kubenswrapper[4624]: E1008 14:25:21.386219 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:21.886203475 +0000 UTC m=+147.037138552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.487303 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:21 crc kubenswrapper[4624]: E1008 14:25:21.487514 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:21.987483703 +0000 UTC m=+147.138418790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.487837 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:21 crc kubenswrapper[4624]: E1008 14:25:21.488211 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:21.988202793 +0000 UTC m=+147.139137870 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.596644 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:21 crc kubenswrapper[4624]: E1008 14:25:21.597182 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:22.097162837 +0000 UTC m=+147.248097914 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.698373 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:21 crc kubenswrapper[4624]: E1008 14:25:21.698914 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:22.198892077 +0000 UTC m=+147.349827194 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.717027 4624 patch_prober.go:28] interesting pod/router-default-5444994796-fbw8l container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.717072 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fbw8l" podUID="d96a6e0b-5e82-48ad-b930-b7b82620830a" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.800001 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:21 crc kubenswrapper[4624]: E1008 14:25:21.800237 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:22.300181596 +0000 UTC m=+147.451116673 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.800692 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:21 crc kubenswrapper[4624]: E1008 14:25:21.801095 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:22.30108348 +0000 UTC m=+147.452018557 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.832061 4624 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ddrvp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.832122 4624 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ddrvp container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.832147 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp" podUID="42ae4a09-81ca-465e-85e7-38a09497805c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.832168 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp" podUID="42ae4a09-81ca-465e-85e7-38a09497805c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.894745 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n48bp" 
event={"ID":"6a9627e3-628f-42ee-b6c7-e203097ad785","Type":"ContainerStarted","Data":"515d53d8dc01cc1b583751a1f12abf10007b8c22c4f747929eeaa09e59c51aa7"} Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.896799 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xxrs" event={"ID":"2b04ef3d-c348-4c0a-8c43-ba2e41a1695c","Type":"ContainerStarted","Data":"f6172e03857c5f9d7c0ef0beb2ef130bfbe5654e4204535e6fc0f00a64e4fcd4"} Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.898603 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4rhxp" event={"ID":"2a612284-a8f8-4887-b09d-c99028cf34be","Type":"ContainerStarted","Data":"286a0699e389982e49523bd7764de14fd7d70dc7c81de5a67ac2495279fb58a6"} Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.901319 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:21 crc kubenswrapper[4624]: E1008 14:25:21.901513 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:22.401485045 +0000 UTC m=+147.552420132 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.901603 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.901794 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hbk6h" event={"ID":"9eec41fc-17f3-4145-8eb5-77c45199ceaa","Type":"ContainerStarted","Data":"c292e4d130906d2e50d4e9277bcac8dac13da24727ab16a9196c828a9f19016f"} Oct 08 14:25:21 crc kubenswrapper[4624]: E1008 14:25:21.901952 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:22.401940287 +0000 UTC m=+147.552875364 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.903235 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-ddpjz" event={"ID":"fba88879-d200-4b89-a22c-5b83be19012b","Type":"ContainerStarted","Data":"578946a3cadaf2d0a78b6556f1104915dbfc30ece33f4603a28a5d0f07b572a0"} Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.905782 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9k2sh" event={"ID":"a1419c0b-876f-4c86-92ef-ce9d66f8849c","Type":"ContainerStarted","Data":"3c90fada88af8005d7ba80aaf1e41135092dcb3ca8f3bdb34c24894fc971b981"} Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.908469 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kdcqx" event={"ID":"77416f3b-337a-43d7-9c77-2b2e86d59d45","Type":"ContainerStarted","Data":"c87099908989e3f42903f634c9c45987b62997a74d938f4f6d5ab0982c6190d3"} Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.913475 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-x6fbb" event={"ID":"ae5a1b04-50e9-45ad-abb5-4a71d58f2a31","Type":"ContainerStarted","Data":"fb3f7b70848b0f580520d456475cf7cd9559a0de6dce62cd54e624f314ad0444"} Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.914854 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4bx6b" event={"ID":"85911902-d86c-48de-b04f-e12b5885a05c","Type":"ContainerStarted","Data":"e11695b5d98ece4a20f06eebf15bfa1bea1cb34c37dbc4bd1e5bbc7f4c05dc35"} Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.919712 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7fps5" event={"ID":"d1e4ee61-1c1c-41eb-82a7-42b6313d863c","Type":"ContainerStarted","Data":"5bdb666bacfbca78be97798308099ba9a3a4636a8c9e3e62d634bdce82538da7"} Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.927714 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4jmhk" event={"ID":"c60f1300-c5d8-4993-8332-e2ca6008c358","Type":"ContainerStarted","Data":"5f3ea61fecac90a4e56783d740aed0ed537a929f28efd17459210c2d6f5b139b"} Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.929892 4624 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qqw9b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.932334 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" podUID="ebee26eb-b32e-459d-b4d2-7a36a325f08b" containerName="controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.930096 4624 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-h9k57 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.932962 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57" podUID="72ec1d6f-5923-45ed-a5a8-ad8a268faca5" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.930381 4624 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ddrvp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.933206 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp" podUID="42ae4a09-81ca-465e-85e7-38a09497805c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.934694 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n48bp" podStartSLOduration=125.934680546 podStartE2EDuration="2m5.934680546s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:21.924561274 +0000 UTC m=+147.075496361" watchObservedRunningTime="2025-10-08 14:25:21.934680546 +0000 UTC m=+147.085615623" Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.956395 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4bx6b" podStartSLOduration=125.956379178 podStartE2EDuration="2m5.956379178s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:21.954557509 +0000 UTC m=+147.105492586" watchObservedRunningTime="2025-10-08 14:25:21.956379178 +0000 UTC m=+147.107314245" Oct 08 14:25:21 crc kubenswrapper[4624]: I1008 14:25:21.991880 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7fps5" podStartSLOduration=126.991802939 podStartE2EDuration="2m6.991802939s" podCreationTimestamp="2025-10-08 14:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:21.989533918 +0000 UTC m=+147.140468995" watchObservedRunningTime="2025-10-08 14:25:21.991802939 +0000 UTC m=+147.142738016" Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.003233 4624 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:22 crc kubenswrapper[4624]: E1008 14:25:22.003347 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:22.503316768 +0000 UTC m=+147.654251845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.003831 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:22 crc kubenswrapper[4624]: E1008 14:25:22.008095 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:22.508076436 +0000 UTC m=+147.659011603 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.095882 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w755s" podStartSLOduration=126.095862032 podStartE2EDuration="2m6.095862032s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:22.093039036 +0000 UTC m=+147.243974113" watchObservedRunningTime="2025-10-08 14:25:22.095862032 +0000 UTC m=+147.246797109" Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.105074 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-ddpjz" podStartSLOduration=126.105048118 podStartE2EDuration="2m6.105048118s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:22.062784684 +0000 UTC m=+147.213719791" watchObservedRunningTime="2025-10-08 14:25:22.105048118 +0000 UTC m=+147.255983195" Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.107917 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:22 crc kubenswrapper[4624]: E1008 14:25:22.108328 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:22.608310986 +0000 UTC m=+147.759246073 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.134898 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nzqkt" podStartSLOduration=126.134880589 podStartE2EDuration="2m6.134880589s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:22.134477928 +0000 UTC m=+147.285413025" watchObservedRunningTime="2025-10-08 14:25:22.134880589 +0000 UTC m=+147.285815666"
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.209316 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8"
Oct 08 14:25:22 crc kubenswrapper[4624]: E1008 14:25:22.209608 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:22.709594094 +0000 UTC m=+147.860529171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.310962 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 14:25:22 crc kubenswrapper[4624]: E1008 14:25:22.311163 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:22.81113274 +0000 UTC m=+147.962067817 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.311219 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8"
Oct 08 14:25:22 crc kubenswrapper[4624]: E1008 14:25:22.311599 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:22.811584772 +0000 UTC m=+147.962519929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.412539 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 14:25:22 crc kubenswrapper[4624]: E1008 14:25:22.413090 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:22.913068696 +0000 UTC m=+148.064003773 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.514034 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8"
Oct 08 14:25:22 crc kubenswrapper[4624]: E1008 14:25:22.514774 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:23.014754575 +0000 UTC m=+148.165689722 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.588973 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qqw9b"]
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.615269 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 14:25:22 crc kubenswrapper[4624]: E1008 14:25:22.615585 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:23.11555387 +0000 UTC m=+148.266488947 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.615980 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8"
Oct 08 14:25:22 crc kubenswrapper[4624]: E1008 14:25:22.616475 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:23.116460354 +0000 UTC m=+148.267395431 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.713754 4624 patch_prober.go:28] interesting pod/router-default-5444994796-fbw8l container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.713800 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fbw8l" podUID="d96a6e0b-5e82-48ad-b930-b7b82620830a" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.716902 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 14:25:22 crc kubenswrapper[4624]: E1008 14:25:22.717056 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:23.217032874 +0000 UTC m=+148.367967951 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.717568 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8"
Oct 08 14:25:22 crc kubenswrapper[4624]: E1008 14:25:22.717988 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:23.217969629 +0000 UTC m=+148.368904706 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.820317 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 14:25:22 crc kubenswrapper[4624]: E1008 14:25:22.820881 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:23.32085549 +0000 UTC m=+148.471790567 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.922404 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8"
Oct 08 14:25:22 crc kubenswrapper[4624]: E1008 14:25:22.922802 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:23.422782366 +0000 UTC m=+148.573717523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.962598 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-px79m" event={"ID":"bf58b563-ce32-45f3-8d35-f3c71f49f8fe","Type":"ContainerStarted","Data":"539e37394d93284c87aadea50294a37a45452f570cc093151d5dcefa6385c2b4"}
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.963689 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4rhxp" event={"ID":"2a612284-a8f8-4887-b09d-c99028cf34be","Type":"ContainerStarted","Data":"8987731cc3837f4ee093c813e028b3d0488d0b922300a071ec863475c2304057"}
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.968377 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bbffn" event={"ID":"100b758b-a285-49a0-a5ec-0b565dce5e1a","Type":"ContainerStarted","Data":"efe665cdaa42c482b971ebcf5ccf4291a7fc381b90bc0d69121fe6884e6600b8"}
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.969977 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pxc8x" event={"ID":"e560999a-1afb-48ee-a544-df38bd7b2fa6","Type":"ContainerStarted","Data":"f9f22341d9b27cc45bd45545242af05e3b14b160e7585cc41a2b7c4c7f4bf213"}
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.971754 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" event={"ID":"df49d2ff-640d-4cf1-812c-8b7275df6292","Type":"ContainerStarted","Data":"594b357b0fe20b7f3147de98912a771a2880223520f586ccb7b30c73e5582413"}
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.986336 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76ddt" event={"ID":"04aba85b-36e3-4fb4-9915-c5dcae83bb0f","Type":"ContainerStarted","Data":"419b9de34c6593cd4037a3ab4568a8458d357f2ca69978a48c345c71d0747837"}
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.987029 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76ddt"
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.994731 4624 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-76ddt container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Oct 08 14:25:22 crc kubenswrapper[4624]: I1008 14:25:22.994789 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76ddt" podUID="04aba85b-36e3-4fb4-9915-c5dcae83bb0f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.023222 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 14:25:23 crc kubenswrapper[4624]: E1008 14:25:23.023349 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:23.523328005 +0000 UTC m=+148.674263082 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.023485 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-khxjg" event={"ID":"09662517-7f1a-428a-ae36-ba154bede835","Type":"ContainerStarted","Data":"73e60578da41ba17a04699f76553202f496532af7ecd8ccd217ceeb1ebc68f59"}
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.023533 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.024105 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4rhxp" podStartSLOduration=127.024096065 podStartE2EDuration="2m7.024096065s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:23.023105189 +0000 UTC m=+148.174040266" watchObservedRunningTime="2025-10-08 14:25:23.024096065 +0000 UTC m=+148.175031142"
Oct 08 14:25:23 crc kubenswrapper[4624]: E1008 14:25:23.024267 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:23.524252999 +0000 UTC m=+148.675188086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.050598 4624 generic.go:334] "Generic (PLEG): container finished" podID="df0cea7a-1f29-4af5-b8f1-2e1c8873610c" containerID="407252a5d7e1f7ee74cb8d86902251b8c725122f417b8798f79afee34f08f67a" exitCode=0
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.050673 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll" event={"ID":"df0cea7a-1f29-4af5-b8f1-2e1c8873610c","Type":"ContainerDied","Data":"407252a5d7e1f7ee74cb8d86902251b8c725122f417b8798f79afee34f08f67a"}
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.052065 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-w4d7d" event={"ID":"30404ab5-dce0-4514-bcc3-5ee7b2f6afb9","Type":"ContainerStarted","Data":"80d6a37f0edd68331eb1c336ba227e414d6fd4934ecf76b617e229cb20e18951"}
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.053896 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-97pqq" event={"ID":"c00e7c1b-2054-427e-ad58-03b97f0ceb83","Type":"ContainerStarted","Data":"484e2707e5a60cbcdbbbe10052e333391ddc10e3ca2c5eb52539bfdd2a08aa3c"}
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.074927 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-pxc8x" podStartSLOduration=8.074910719 podStartE2EDuration="8.074910719s" podCreationTimestamp="2025-10-08 14:25:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:23.073790719 +0000 UTC m=+148.224725796" watchObservedRunningTime="2025-10-08 14:25:23.074910719 +0000 UTC m=+148.225845796"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.080394 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kdcqx" event={"ID":"77416f3b-337a-43d7-9c77-2b2e86d59d45","Type":"ContainerStarted","Data":"f663a1534eeee2151bfdb1f3d1af9a102c0df78759a7c0966109143baa87e7ef"}
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.090757 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" event={"ID":"3c901fd1-c23c-47f0-b44f-102a1965abd6","Type":"ContainerStarted","Data":"a7e2715ffa9e660472aadf80efc410a2b1447ddf539ef124ced37baac345e248"}
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.091164 4624 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-h9k57 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body=
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.091210 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57" podUID="72ec1d6f-5923-45ed-a5a8-ad8a268faca5" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.092046 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" podUID="ebee26eb-b32e-459d-b4d2-7a36a325f08b" containerName="controller-manager" containerID="cri-o://d6c6b13aec750ee7402349dd513194bce74f3cec9a4f18653fbd86bd7a0c631f" gracePeriod=30
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.093454 4624 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qqw9b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.093491 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" podUID="ebee26eb-b32e-459d-b4d2-7a36a325f08b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.108920 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" podStartSLOduration=127.108898781 podStartE2EDuration="2m7.108898781s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:23.107449902 +0000 UTC m=+148.258384979" watchObservedRunningTime="2025-10-08 14:25:23.108898781 +0000 UTC m=+148.259833858"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.126133 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 14:25:23 crc kubenswrapper[4624]: E1008 14:25:23.127053 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:23.627037208 +0000 UTC m=+148.777972285 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.191808 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-bbffn" podStartSLOduration=127.191789296 podStartE2EDuration="2m7.191789296s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:23.191440117 +0000 UTC m=+148.342375194" watchObservedRunningTime="2025-10-08 14:25:23.191789296 +0000 UTC m=+148.342724373"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.192786 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76ddt" podStartSLOduration=127.192779713 podStartE2EDuration="2m7.192779713s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:23.156047407 +0000 UTC m=+148.306982484" watchObservedRunningTime="2025-10-08 14:25:23.192779713 +0000 UTC m=+148.343714790"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.221482 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-w4d7d" podStartSLOduration=8.221464993 podStartE2EDuration="8.221464993s" podCreationTimestamp="2025-10-08 14:25:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:23.219980963 +0000 UTC m=+148.370916050" watchObservedRunningTime="2025-10-08 14:25:23.221464993 +0000 UTC m=+148.372400070"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.228246 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.228404 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.228595 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 14:25:23 crc kubenswrapper[4624]: E1008 14:25:23.231040 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:23.731016309 +0000 UTC m=+148.881951396 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.233167 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.286708 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.331075 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.331435 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.331496 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 14:25:23 crc kubenswrapper[4624]: E1008 14:25:23.332255 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:23.832230765 +0000 UTC m=+148.983165902 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.338366 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.343560 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.376750 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-x6fbb" podStartSLOduration=127.37673156 podStartE2EDuration="2m7.37673156s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:23.375161448 +0000 UTC m=+148.526096535" watchObservedRunningTime="2025-10-08 14:25:23.37673156 +0000 UTC m=+148.527666637"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.377102 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4jmhk" podStartSLOduration=127.37709821 podStartE2EDuration="2m7.37709821s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:23.320044478 +0000 UTC m=+148.470979555" watchObservedRunningTime="2025-10-08 14:25:23.37709821 +0000 UTC m=+148.528033287"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.403504 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-97pqq" podStartSLOduration=127.403485048 podStartE2EDuration="2m7.403485048s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:23.402144662 +0000 UTC m=+148.553079749" watchObservedRunningTime="2025-10-08 14:25:23.403485048 +0000 UTC m=+148.554420125"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.432775 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8"
Oct 08 14:25:23 crc kubenswrapper[4624]: E1008 14:25:23.433325 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:23.933308968 +0000 UTC m=+149.084244095 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.483053 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.489773 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.498868 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.525592 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hbk6h" podStartSLOduration=127.525564344 podStartE2EDuration="2m7.525564344s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:23.470666351 +0000 UTC m=+148.621601428" watchObservedRunningTime="2025-10-08 14:25:23.525564344 +0000 UTC m=+148.676499421"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.538145 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 14:25:23 crc kubenswrapper[4624]: E1008 14:25:23.538423 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:24.038407939 +0000 UTC m=+149.189343016 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.549868 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9k2sh" podStartSLOduration=127.549852116 podStartE2EDuration="2m7.549852116s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:23.514356424 +0000 UTC m=+148.665291511" watchObservedRunningTime="2025-10-08 14:25:23.549852116 +0000 UTC m=+148.700787193"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.612028 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kdcqx" podStartSLOduration=127.612009335 podStartE2EDuration="2m7.612009335s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:23.611187353 +0000 UTC m=+148.762122420" watchObservedRunningTime="2025-10-08 14:25:23.612009335 +0000 UTC m=+148.762944422"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.640328 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8"
Oct 08 14:25:23 crc kubenswrapper[4624]: E1008 14:25:23.640678 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:24.140663244 +0000 UTC m=+149.291598321 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.741708 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 14:25:23 crc kubenswrapper[4624]: E1008 14:25:23.742085 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:24.242068805 +0000 UTC m=+149.393003882 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.807690 4624 patch_prober.go:28] interesting pod/router-default-5444994796-fbw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 08 14:25:23 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld
Oct 08 14:25:23 crc kubenswrapper[4624]: [+]process-running ok
Oct 08 14:25:23 crc kubenswrapper[4624]: healthz check failed
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.808067 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fbw8l" podUID="d96a6e0b-5e82-48ad-b930-b7b82620830a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.846303 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8"
Oct 08 14:25:23 crc kubenswrapper[4624]: E1008 14:25:23.846594 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:24.34658197 +0000 UTC m=+149.497517047 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.948474 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 14:25:23 crc kubenswrapper[4624]: E1008 14:25:23.948653 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:24.448607358 +0000 UTC m=+149.599542435 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:23 crc kubenswrapper[4624]: I1008 14:25:23.948749 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8"
Oct 08 14:25:23 crc kubenswrapper[4624]: E1008 14:25:23.949096 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:24.44908373 +0000 UTC m=+149.600018807 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.049464 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 14:25:24 crc kubenswrapper[4624]: E1008 14:25:24.049897 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:24.549880236 +0000 UTC m=+149.700815313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.139260 4624 generic.go:334] "Generic (PLEG): container finished" podID="ebee26eb-b32e-459d-b4d2-7a36a325f08b" containerID="d6c6b13aec750ee7402349dd513194bce74f3cec9a4f18653fbd86bd7a0c631f" exitCode=0
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.139538 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" event={"ID":"ebee26eb-b32e-459d-b4d2-7a36a325f08b","Type":"ContainerDied","Data":"d6c6b13aec750ee7402349dd513194bce74f3cec9a4f18653fbd86bd7a0c631f"}
Oct 08 14:25:24 crc kubenswrapper[4624]: W1008 14:25:24.143139 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-09316fc87c0cfece6ce7af8ad2cc8d4c469c58ba370a78152eec0e65b8ff2c7f WatchSource:0}: Error finding container 09316fc87c0cfece6ce7af8ad2cc8d4c469c58ba370a78152eec0e65b8ff2c7f: Status 404 returned error can't find the container with id 09316fc87c0cfece6ce7af8ad2cc8d4c469c58ba370a78152eec0e65b8ff2c7f
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.151106 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8"
Oct 08 14:25:24 crc kubenswrapper[4624]: E1008 14:25:24.151555 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:24.651540204 +0000 UTC m=+149.802475281 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.163472 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xxrs" event={"ID":"2b04ef3d-c348-4c0a-8c43-ba2e41a1695c","Type":"ContainerStarted","Data":"7919b4e0e02b9a691d281f13b40a2bdc3432c19f478662e7e3a9ec861fe2375d"}
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.164323 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xxrs"
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.171914 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wmls6" event={"ID":"66542923-c777-4a99-a3a7-9ab975b8b0c3","Type":"ContainerStarted","Data":"54b38a0a3eb12b3f23485acba0fc5f2aad961ec484f9cb388b02e1f2d4b0e4e4"}
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.174109 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a6b0fe01f2d09d9e6ae65874e5bdfcdd02d639d57c87672a9a1a4f8649584d89"}
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.187717 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" event={"ID":"3c901fd1-c23c-47f0-b44f-102a1965abd6","Type":"ContainerStarted","Data":"737d0258d527e0704911bfc5e96a28262bfb7e6a281d76fe6848e7268ead7580"}
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.222600 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xxrs" podStartSLOduration=128.222584091 podStartE2EDuration="2m8.222584091s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:24.221565874 +0000 UTC m=+149.372500951" watchObservedRunningTime="2025-10-08 14:25:24.222584091 +0000 UTC m=+149.373519168"
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.233078 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-px79m" event={"ID":"bf58b563-ce32-45f3-8d35-f3c71f49f8fe","Type":"ContainerStarted","Data":"51ea3d80335627f136ba1efc1d7d8c19b33b56bac46ef134d7d8129e4386a3df"}
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.233750 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-px79m"
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.256300 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 14:25:24 crc kubenswrapper[4624]: E1008 14:25:24.257745 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:24.757721864 +0000 UTC m=+149.908656991 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.273054 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-khxjg" event={"ID":"09662517-7f1a-428a-ae36-ba154bede835","Type":"ContainerStarted","Data":"35a39fe1b0926fc46189a829796b94d654e5129f5af0af807108a0f6790d08db"}
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.275544 4624 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-76ddt container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.275583 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76ddt" podUID="04aba85b-36e3-4fb4-9915-c5dcae83bb0f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.308838 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" podStartSLOduration=128.308818556 podStartE2EDuration="2m8.308818556s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:24.279423977 +0000 UTC m=+149.430359084" watchObservedRunningTime="2025-10-08 14:25:24.308818556 +0000 UTC m=+149.459753623"
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.329686 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-khxjg" podStartSLOduration=128.329670175 podStartE2EDuration="2m8.329670175s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:24.322084722 +0000 UTC m=+149.473019819" watchObservedRunningTime="2025-10-08 14:25:24.329670175 +0000 UTC m=+149.480605242"
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.361162 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8"
Oct 08 14:25:24 crc kubenswrapper[4624]: E1008 14:25:24.369184 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:24.869168535 +0000 UTC m=+150.020103612 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.396668 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-px79m" podStartSLOduration=10.396653093 podStartE2EDuration="10.396653093s" podCreationTimestamp="2025-10-08 14:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:24.396309354 +0000 UTC m=+149.547244431" watchObservedRunningTime="2025-10-08 14:25:24.396653093 +0000 UTC m=+149.547588170"
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.464508 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 14:25:24 crc kubenswrapper[4624]: E1008 14:25:24.479807 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:24.979782664 +0000 UTC m=+150.130717741 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.571671 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8"
Oct 08 14:25:24 crc kubenswrapper[4624]: E1008 14:25:24.572089 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:25.072073961 +0000 UTC m=+150.223009038 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.575035 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b"
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.613349 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tw5tg"]
Oct 08 14:25:24 crc kubenswrapper[4624]: E1008 14:25:24.613566 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebee26eb-b32e-459d-b4d2-7a36a325f08b" containerName="controller-manager"
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.613581 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebee26eb-b32e-459d-b4d2-7a36a325f08b" containerName="controller-manager"
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.613726 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebee26eb-b32e-459d-b4d2-7a36a325f08b" containerName="controller-manager"
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.614157 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg"
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.632347 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tw5tg"]
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.675127 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebee26eb-b32e-459d-b4d2-7a36a325f08b-proxy-ca-bundles\") pod \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\" (UID: \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\") "
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.675261 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.675299 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k6tk\" (UniqueName: \"kubernetes.io/projected/ebee26eb-b32e-459d-b4d2-7a36a325f08b-kube-api-access-5k6tk\") pod \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\" (UID: \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\") "
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.675319 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebee26eb-b32e-459d-b4d2-7a36a325f08b-serving-cert\") pod \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\" (UID: \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\") "
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.675370 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebee26eb-b32e-459d-b4d2-7a36a325f08b-client-ca\") pod \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\" (UID: \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\") "
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.675412 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebee26eb-b32e-459d-b4d2-7a36a325f08b-config\") pod \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\" (UID: \"ebee26eb-b32e-459d-b4d2-7a36a325f08b\") "
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.675522 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98001614-6da0-4175-854a-d9af45077799-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tw5tg\" (UID: \"98001614-6da0-4175-854a-d9af45077799\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg"
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.675577 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98001614-6da0-4175-854a-d9af45077799-config\") pod \"controller-manager-879f6c89f-tw5tg\" (UID: \"98001614-6da0-4175-854a-d9af45077799\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg"
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.675593 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7mjm\" (UniqueName: \"kubernetes.io/projected/98001614-6da0-4175-854a-d9af45077799-kube-api-access-k7mjm\") pod \"controller-manager-879f6c89f-tw5tg\" (UID: \"98001614-6da0-4175-854a-d9af45077799\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg"
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.675656 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98001614-6da0-4175-854a-d9af45077799-client-ca\") pod \"controller-manager-879f6c89f-tw5tg\" (UID: \"98001614-6da0-4175-854a-d9af45077799\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg"
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.675703 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98001614-6da0-4175-854a-d9af45077799-serving-cert\") pod \"controller-manager-879f6c89f-tw5tg\" (UID: \"98001614-6da0-4175-854a-d9af45077799\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg"
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.676538 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebee26eb-b32e-459d-b4d2-7a36a325f08b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ebee26eb-b32e-459d-b4d2-7a36a325f08b" (UID: "ebee26eb-b32e-459d-b4d2-7a36a325f08b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.676615 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebee26eb-b32e-459d-b4d2-7a36a325f08b-config" (OuterVolumeSpecName: "config") pod "ebee26eb-b32e-459d-b4d2-7a36a325f08b" (UID: "ebee26eb-b32e-459d-b4d2-7a36a325f08b"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:25:24 crc kubenswrapper[4624]: E1008 14:25:24.680833 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:25.180805269 +0000 UTC m=+150.331740346 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.681437 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebee26eb-b32e-459d-b4d2-7a36a325f08b-client-ca" (OuterVolumeSpecName: "client-ca") pod "ebee26eb-b32e-459d-b4d2-7a36a325f08b" (UID: "ebee26eb-b32e-459d-b4d2-7a36a325f08b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.683900 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebee26eb-b32e-459d-b4d2-7a36a325f08b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ebee26eb-b32e-459d-b4d2-7a36a325f08b" (UID: "ebee26eb-b32e-459d-b4d2-7a36a325f08b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.684956 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebee26eb-b32e-459d-b4d2-7a36a325f08b-kube-api-access-5k6tk" (OuterVolumeSpecName: "kube-api-access-5k6tk") pod "ebee26eb-b32e-459d-b4d2-7a36a325f08b" (UID: "ebee26eb-b32e-459d-b4d2-7a36a325f08b"). InnerVolumeSpecName "kube-api-access-5k6tk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.717881 4624 patch_prober.go:28] interesting pod/router-default-5444994796-fbw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 14:25:24 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Oct 08 14:25:24 crc kubenswrapper[4624]: [+]process-running ok Oct 08 14:25:24 crc kubenswrapper[4624]: healthz check failed Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.717934 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fbw8l" podUID="d96a6e0b-5e82-48ad-b930-b7b82620830a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.777197 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98001614-6da0-4175-854a-d9af45077799-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tw5tg\" (UID: \"98001614-6da0-4175-854a-d9af45077799\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.777599 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98001614-6da0-4175-854a-d9af45077799-config\") pod \"controller-manager-879f6c89f-tw5tg\" (UID: \"98001614-6da0-4175-854a-d9af45077799\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.777621 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7mjm\" (UniqueName: \"kubernetes.io/projected/98001614-6da0-4175-854a-d9af45077799-kube-api-access-k7mjm\") pod \"controller-manager-879f6c89f-tw5tg\" (UID: \"98001614-6da0-4175-854a-d9af45077799\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.777662 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98001614-6da0-4175-854a-d9af45077799-client-ca\") pod \"controller-manager-879f6c89f-tw5tg\" (UID: \"98001614-6da0-4175-854a-d9af45077799\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.777690 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.777708 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98001614-6da0-4175-854a-d9af45077799-serving-cert\") pod \"controller-manager-879f6c89f-tw5tg\" (UID: \"98001614-6da0-4175-854a-d9af45077799\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.777765 4624 reconciler_common.go:293] "Volume detached for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebee26eb-b32e-459d-b4d2-7a36a325f08b-client-ca\") on node \"crc\" DevicePath \"\"" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.777777 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebee26eb-b32e-459d-b4d2-7a36a325f08b-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.777786 4624 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebee26eb-b32e-459d-b4d2-7a36a325f08b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.777794 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k6tk\" (UniqueName: \"kubernetes.io/projected/ebee26eb-b32e-459d-b4d2-7a36a325f08b-kube-api-access-5k6tk\") on node \"crc\" DevicePath \"\"" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.777803 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebee26eb-b32e-459d-b4d2-7a36a325f08b-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:25:24 crc kubenswrapper[4624]: E1008 14:25:24.778279 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:25.278263195 +0000 UTC m=+150.429198272 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.778548 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98001614-6da0-4175-854a-d9af45077799-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tw5tg\" (UID: \"98001614-6da0-4175-854a-d9af45077799\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.778571 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98001614-6da0-4175-854a-d9af45077799-client-ca\") pod \"controller-manager-879f6c89f-tw5tg\" (UID: \"98001614-6da0-4175-854a-d9af45077799\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.779445 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98001614-6da0-4175-854a-d9af45077799-config\") pod \"controller-manager-879f6c89f-tw5tg\" (UID: \"98001614-6da0-4175-854a-d9af45077799\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.783345 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98001614-6da0-4175-854a-d9af45077799-serving-cert\") pod \"controller-manager-879f6c89f-tw5tg\" (UID: 
\"98001614-6da0-4175-854a-d9af45077799\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.811367 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7mjm\" (UniqueName: \"kubernetes.io/projected/98001614-6da0-4175-854a-d9af45077799-kube-api-access-k7mjm\") pod \"controller-manager-879f6c89f-tw5tg\" (UID: \"98001614-6da0-4175-854a-d9af45077799\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.833949 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.846800 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ddrvp" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.878249 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:24 crc kubenswrapper[4624]: E1008 14:25:24.878426 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:25.378399293 +0000 UTC m=+150.529334370 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.878734 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df0cea7a-1f29-4af5-b8f1-2e1c8873610c-secret-volume\") pod \"df0cea7a-1f29-4af5-b8f1-2e1c8873610c\" (UID: \"df0cea7a-1f29-4af5-b8f1-2e1c8873610c\") " Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.878882 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df0cea7a-1f29-4af5-b8f1-2e1c8873610c-config-volume\") pod \"df0cea7a-1f29-4af5-b8f1-2e1c8873610c\" (UID: \"df0cea7a-1f29-4af5-b8f1-2e1c8873610c\") " Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.878999 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tx75\" (UniqueName: \"kubernetes.io/projected/df0cea7a-1f29-4af5-b8f1-2e1c8873610c-kube-api-access-7tx75\") pod \"df0cea7a-1f29-4af5-b8f1-2e1c8873610c\" (UID: \"df0cea7a-1f29-4af5-b8f1-2e1c8873610c\") " Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.879224 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0cea7a-1f29-4af5-b8f1-2e1c8873610c-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"df0cea7a-1f29-4af5-b8f1-2e1c8873610c" (UID: "df0cea7a-1f29-4af5-b8f1-2e1c8873610c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.879402 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.879586 4624 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df0cea7a-1f29-4af5-b8f1-2e1c8873610c-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 14:25:24 crc kubenswrapper[4624]: E1008 14:25:24.879725 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:25.379716188 +0000 UTC m=+150.530651265 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.886033 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df0cea7a-1f29-4af5-b8f1-2e1c8873610c-kube-api-access-7tx75" (OuterVolumeSpecName: "kube-api-access-7tx75") pod "df0cea7a-1f29-4af5-b8f1-2e1c8873610c" (UID: "df0cea7a-1f29-4af5-b8f1-2e1c8873610c"). InnerVolumeSpecName "kube-api-access-7tx75". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.886292 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df0cea7a-1f29-4af5-b8f1-2e1c8873610c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "df0cea7a-1f29-4af5-b8f1-2e1c8873610c" (UID: "df0cea7a-1f29-4af5-b8f1-2e1c8873610c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.930253 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.981121 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:24 crc kubenswrapper[4624]: E1008 14:25:24.981306 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-08 14:25:25.481278584 +0000 UTC m=+150.632213661 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.981999 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.982319 4624 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df0cea7a-1f29-4af5-b8f1-2e1c8873610c-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 14:25:24 crc kubenswrapper[4624]: I1008 14:25:24.982444 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tx75\" (UniqueName: \"kubernetes.io/projected/df0cea7a-1f29-4af5-b8f1-2e1c8873610c-kube-api-access-7tx75\") on node \"crc\" DevicePath \"\"" Oct 08 14:25:24 crc kubenswrapper[4624]: E1008 14:25:24.983106 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:25.483092643 +0000 UTC m=+150.634027790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.084097 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:25 crc kubenswrapper[4624]: E1008 14:25:25.084216 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:25.584195896 +0000 UTC m=+150.735130973 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.084375 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:25 crc kubenswrapper[4624]: E1008 14:25:25.084730 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:25.58472067 +0000 UTC m=+150.735655747 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.186139 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:25 crc kubenswrapper[4624]: E1008 14:25:25.186797 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:25.68677541 +0000 UTC m=+150.837710497 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.186962 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:25 crc kubenswrapper[4624]: E1008 14:25:25.187315 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:25.687307704 +0000 UTC m=+150.838242781 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.280716 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4b8dc7a15a54f8ce549fb8398fda8e40fb91abc136d647cab89b87f477a1ab75"} Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.280761 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"09316fc87c0cfece6ce7af8ad2cc8d4c469c58ba370a78152eec0e65b8ff2c7f"} Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.285279 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a0b97db92ad3c11af577d95fb5b7e64d709a6c22a861809ab4532143686de41f"} Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.285926 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.290005 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:25 crc kubenswrapper[4624]: E1008 14:25:25.290253 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:25.790232026 +0000 UTC m=+150.941167103 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.290718 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:25 crc kubenswrapper[4624]: E1008 14:25:25.291082 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:25.791068779 +0000 UTC m=+150.942003846 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.299375 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e9dcea93904245d54be979c698439bbd90c4eaeec9b4af0346dec7332f1acb68"} Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.299424 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"180e84fc13b68c34ba90cc3bb80705ee0ec2db6bb03cd6bd35192738e85952f3"} Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.299812 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.299849 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.301711 4624 patch_prober.go:28] interesting pod/console-f9d7485db-rgn2q container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.301750 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-rgn2q" podUID="82676f42-aabb-4cee-b836-790a48dd9a2e" containerName="console" probeResult="failure" output="Get 
\"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.304345 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll" event={"ID":"df0cea7a-1f29-4af5-b8f1-2e1c8873610c","Type":"ContainerDied","Data":"59715a8b1ab5988ae61925eaa419662fcd1eec402035e1c981ae442859b2f3de"} Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.304384 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59715a8b1ab5988ae61925eaa419662fcd1eec402035e1c981ae442859b2f3de" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.304454 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.306653 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.309073 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qqw9b" event={"ID":"ebee26eb-b32e-459d-b4d2-7a36a325f08b","Type":"ContainerDied","Data":"4079ed52546fd7ba0d5657c8e23169f27d8a4886d5fbf6b21e4a0e8bb167cb8f"} Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.309123 4624 scope.go:117] "RemoveContainer" containerID="d6c6b13aec750ee7402349dd513194bce74f3cec9a4f18653fbd86bd7a0c631f" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.312543 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-gkmv8" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.312668 4624 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-76ddt container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.312709 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76ddt" podUID="04aba85b-36e3-4fb4-9915-c5dcae83bb0f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.394427 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:25 crc kubenswrapper[4624]: E1008 14:25:25.396301 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:25.896279372 +0000 UTC m=+151.047214449 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.493999 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qqw9b"] Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.498361 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:25 crc kubenswrapper[4624]: E1008 14:25:25.498843 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:25.998828635 +0000 UTC m=+151.149763722 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.502896 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qqw9b"] Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.553923 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-kl87s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.554225 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kl87s" podUID="60fc8c95-75b8-4032-b407-c0b21022da37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.553940 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-kl87s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.554721 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-kl87s" podUID="60fc8c95-75b8-4032-b407-c0b21022da37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.575607 4624 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tw5tg"] Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.600313 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:25 crc kubenswrapper[4624]: E1008 14:25:25.600756 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:26.10073754 +0000 UTC m=+151.251672617 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.687785 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wt4rs"] Oct 08 14:25:25 crc kubenswrapper[4624]: E1008 14:25:25.688001 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0cea7a-1f29-4af5-b8f1-2e1c8873610c" containerName="collect-profiles" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.688014 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0cea7a-1f29-4af5-b8f1-2e1c8873610c" containerName="collect-profiles" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.688121 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="df0cea7a-1f29-4af5-b8f1-2e1c8873610c" containerName="collect-profiles" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.688803 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wt4rs" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.691549 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.705117 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wt4rs"] Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.710230 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:25 crc kubenswrapper[4624]: E1008 14:25:25.710831 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:26.210814644 +0000 UTC m=+151.361749721 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.720138 4624 patch_prober.go:28] interesting pod/router-default-5444994796-fbw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 14:25:25 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Oct 08 14:25:25 crc kubenswrapper[4624]: [+]process-running ok Oct 08 14:25:25 crc kubenswrapper[4624]: healthz check failed Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.720199 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fbw8l" podUID="d96a6e0b-5e82-48ad-b930-b7b82620830a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.811588 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.811816 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25349b6-d167-4846-884c-9c057b5c6491-catalog-content\") pod \"certified-operators-wt4rs\" (UID: \"a25349b6-d167-4846-884c-9c057b5c6491\") " pod="openshift-marketplace/certified-operators-wt4rs" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.811935 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79htk\" (UniqueName: \"kubernetes.io/projected/a25349b6-d167-4846-884c-9c057b5c6491-kube-api-access-79htk\") pod \"certified-operators-wt4rs\" (UID: \"a25349b6-d167-4846-884c-9c057b5c6491\") " pod="openshift-marketplace/certified-operators-wt4rs" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.811969 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25349b6-d167-4846-884c-9c057b5c6491-utilities\") pod \"certified-operators-wt4rs\" (UID: \"a25349b6-d167-4846-884c-9c057b5c6491\") " pod="openshift-marketplace/certified-operators-wt4rs" Oct 08 14:25:25 crc kubenswrapper[4624]: E1008 14:25:25.812157 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:26.312136884 +0000 UTC m=+151.463072141 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.871606 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mnjz6"] Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.872842 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mnjz6" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.879456 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.886454 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mnjz6"] Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.913340 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78f43ab9-6c12-4d67-9c33-b186ebcef93c-catalog-content\") pod \"community-operators-mnjz6\" (UID: \"78f43ab9-6c12-4d67-9c33-b186ebcef93c\") " pod="openshift-marketplace/community-operators-mnjz6" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.913395 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25349b6-d167-4846-884c-9c057b5c6491-catalog-content\") pod \"certified-operators-wt4rs\" (UID: \"a25349b6-d167-4846-884c-9c057b5c6491\") " pod="openshift-marketplace/certified-operators-wt4rs" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.913420 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq5rs\" (UniqueName: \"kubernetes.io/projected/78f43ab9-6c12-4d67-9c33-b186ebcef93c-kube-api-access-tq5rs\") pod \"community-operators-mnjz6\" (UID: \"78f43ab9-6c12-4d67-9c33-b186ebcef93c\") " pod="openshift-marketplace/community-operators-mnjz6" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.913462 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79htk\" (UniqueName: \"kubernetes.io/projected/a25349b6-d167-4846-884c-9c057b5c6491-kube-api-access-79htk\") pod \"certified-operators-wt4rs\" (UID: \"a25349b6-d167-4846-884c-9c057b5c6491\") " pod="openshift-marketplace/certified-operators-wt4rs" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.913487 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25349b6-d167-4846-884c-9c057b5c6491-utilities\") pod \"certified-operators-wt4rs\" (UID: \"a25349b6-d167-4846-884c-9c057b5c6491\") " pod="openshift-marketplace/certified-operators-wt4rs" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.913515 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78f43ab9-6c12-4d67-9c33-b186ebcef93c-utilities\") pod \"community-operators-mnjz6\" (UID: 
\"78f43ab9-6c12-4d67-9c33-b186ebcef93c\") " pod="openshift-marketplace/community-operators-mnjz6" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.913541 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:25 crc kubenswrapper[4624]: E1008 14:25:25.913919 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:26.413903005 +0000 UTC m=+151.564838272 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.914779 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25349b6-d167-4846-884c-9c057b5c6491-catalog-content\") pod \"certified-operators-wt4rs\" (UID: \"a25349b6-d167-4846-884c-9c057b5c6491\") " pod="openshift-marketplace/certified-operators-wt4rs" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.914808 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25349b6-d167-4846-884c-9c057b5c6491-utilities\") pod \"certified-operators-wt4rs\" (UID: \"a25349b6-d167-4846-884c-9c057b5c6491\") " pod="openshift-marketplace/certified-operators-wt4rs" Oct 08 14:25:25 crc kubenswrapper[4624]: I1008 14:25:25.934647 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79htk\" (UniqueName: \"kubernetes.io/projected/a25349b6-d167-4846-884c-9c057b5c6491-kube-api-access-79htk\") pod \"certified-operators-wt4rs\" (UID: \"a25349b6-d167-4846-884c-9c057b5c6491\") " pod="openshift-marketplace/certified-operators-wt4rs" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.014262 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wt4rs" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.014852 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:26 crc kubenswrapper[4624]: E1008 14:25:26.015025 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:26.515000169 +0000 UTC m=+151.665935236 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.015228 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78f43ab9-6c12-4d67-9c33-b186ebcef93c-utilities\") pod \"community-operators-mnjz6\" (UID: \"78f43ab9-6c12-4d67-9c33-b186ebcef93c\") " pod="openshift-marketplace/community-operators-mnjz6" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.015348 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.015448 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78f43ab9-6c12-4d67-9c33-b186ebcef93c-catalog-content\") pod \"community-operators-mnjz6\" (UID: \"78f43ab9-6c12-4d67-9c33-b186ebcef93c\") " pod="openshift-marketplace/community-operators-mnjz6" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.015513 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq5rs\" (UniqueName: \"kubernetes.io/projected/78f43ab9-6c12-4d67-9c33-b186ebcef93c-kube-api-access-tq5rs\") pod \"community-operators-mnjz6\" (UID: \"78f43ab9-6c12-4d67-9c33-b186ebcef93c\") " pod="openshift-marketplace/community-operators-mnjz6" Oct 08 14:25:26 crc kubenswrapper[4624]: E1008 14:25:26.015663 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:26.515647926 +0000 UTC m=+151.666583003 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.015717 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78f43ab9-6c12-4d67-9c33-b186ebcef93c-utilities\") pod \"community-operators-mnjz6\" (UID: \"78f43ab9-6c12-4d67-9c33-b186ebcef93c\") " pod="openshift-marketplace/community-operators-mnjz6" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.015942 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78f43ab9-6c12-4d67-9c33-b186ebcef93c-catalog-content\") pod \"community-operators-mnjz6\" (UID: \"78f43ab9-6c12-4d67-9c33-b186ebcef93c\") " pod="openshift-marketplace/community-operators-mnjz6" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.075852 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq5rs\" (UniqueName: \"kubernetes.io/projected/78f43ab9-6c12-4d67-9c33-b186ebcef93c-kube-api-access-tq5rs\") pod \"community-operators-mnjz6\" (UID: \"78f43ab9-6c12-4d67-9c33-b186ebcef93c\") " pod="openshift-marketplace/community-operators-mnjz6" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.095836 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qgsc5"] Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.096818 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qgsc5" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.116346 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:26 crc kubenswrapper[4624]: E1008 14:25:26.116842 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:26.616815091 +0000 UTC m=+151.767750168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.162105 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qgsc5"] Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.188603 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mnjz6" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.218027 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e17b3f85-b77b-4639-9993-66db89499fcf-utilities\") pod \"certified-operators-qgsc5\" (UID: \"e17b3f85-b77b-4639-9993-66db89499fcf\") " pod="openshift-marketplace/certified-operators-qgsc5" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.218089 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e17b3f85-b77b-4639-9993-66db89499fcf-catalog-content\") pod \"certified-operators-qgsc5\" (UID: \"e17b3f85-b77b-4639-9993-66db89499fcf\") " pod="openshift-marketplace/certified-operators-qgsc5" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.218138 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwnrj\" (UniqueName: \"kubernetes.io/projected/e17b3f85-b77b-4639-9993-66db89499fcf-kube-api-access-mwnrj\") pod \"certified-operators-qgsc5\" (UID: \"e17b3f85-b77b-4639-9993-66db89499fcf\") " pod="openshift-marketplace/certified-operators-qgsc5" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.218209 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:26 crc kubenswrapper[4624]: E1008 14:25:26.218692 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:26.718671805 +0000 UTC m=+151.869606882 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.270724 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lbx9m"] Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.271802 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lbx9m" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.274842 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.298372 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lbx9m"] Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.300912 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.324140 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.324447 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/273358a6-107e-4e31-aaff-1f825924ef2d-utilities\") pod \"community-operators-lbx9m\" (UID: \"273358a6-107e-4e31-aaff-1f825924ef2d\") " pod="openshift-marketplace/community-operators-lbx9m" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.324483 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwzzb\" (UniqueName: \"kubernetes.io/projected/273358a6-107e-4e31-aaff-1f825924ef2d-kube-api-access-fwzzb\") pod \"community-operators-lbx9m\" (UID: \"273358a6-107e-4e31-aaff-1f825924ef2d\") " pod="openshift-marketplace/community-operators-lbx9m" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.324541 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e17b3f85-b77b-4639-9993-66db89499fcf-utilities\") pod \"certified-operators-qgsc5\" (UID: \"e17b3f85-b77b-4639-9993-66db89499fcf\") " pod="openshift-marketplace/certified-operators-qgsc5" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.324591 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e17b3f85-b77b-4639-9993-66db89499fcf-catalog-content\") pod \"certified-operators-qgsc5\" (UID: \"e17b3f85-b77b-4639-9993-66db89499fcf\") " pod="openshift-marketplace/certified-operators-qgsc5" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.324627 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/273358a6-107e-4e31-aaff-1f825924ef2d-catalog-content\") pod \"community-operators-lbx9m\" (UID: \"273358a6-107e-4e31-aaff-1f825924ef2d\") " pod="openshift-marketplace/community-operators-lbx9m" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.324684 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwnrj\" (UniqueName: \"kubernetes.io/projected/e17b3f85-b77b-4639-9993-66db89499fcf-kube-api-access-mwnrj\") pod \"certified-operators-qgsc5\" (UID: \"e17b3f85-b77b-4639-9993-66db89499fcf\") " pod="openshift-marketplace/certified-operators-qgsc5" Oct 08 14:25:26 crc kubenswrapper[4624]: E1008 14:25:26.324839 4624 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:26.824807144 +0000 UTC m=+151.975742221 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.325038 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.325301 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e17b3f85-b77b-4639-9993-66db89499fcf-catalog-content\") pod \"certified-operators-qgsc5\" (UID: \"e17b3f85-b77b-4639-9993-66db89499fcf\") " pod="openshift-marketplace/certified-operators-qgsc5" Oct 08 14:25:26 crc kubenswrapper[4624]: E1008 14:25:26.325838 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:26.825828391 +0000 UTC m=+151.976763468 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.325936 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e17b3f85-b77b-4639-9993-66db89499fcf-utilities\") pod \"certified-operators-qgsc5\" (UID: \"e17b3f85-b77b-4639-9993-66db89499fcf\") " pod="openshift-marketplace/certified-operators-qgsc5" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.400450 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwnrj\" (UniqueName: \"kubernetes.io/projected/e17b3f85-b77b-4639-9993-66db89499fcf-kube-api-access-mwnrj\") pod \"certified-operators-qgsc5\" (UID: \"e17b3f85-b77b-4639-9993-66db89499fcf\") " pod="openshift-marketplace/certified-operators-qgsc5" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.415521 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg" event={"ID":"98001614-6da0-4175-854a-d9af45077799","Type":"ContainerStarted","Data":"f448d24c4f28b42b4cc879c03d0945f5dca8c765a7ee06c2188c61153bb4a447"} Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.415986 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qgsc5" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.415826 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg" event={"ID":"98001614-6da0-4175-854a-d9af45077799","Type":"ContainerStarted","Data":"81880ff0a467bbfdf6eb7462f58eade146892b9689d6784de79c10c49a910410"} Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.439703 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.440247 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/273358a6-107e-4e31-aaff-1f825924ef2d-utilities\") pod \"community-operators-lbx9m\" (UID: \"273358a6-107e-4e31-aaff-1f825924ef2d\") " pod="openshift-marketplace/community-operators-lbx9m" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.440305 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwzzb\" (UniqueName: \"kubernetes.io/projected/273358a6-107e-4e31-aaff-1f825924ef2d-kube-api-access-fwzzb\") pod \"community-operators-lbx9m\" (UID: \"273358a6-107e-4e31-aaff-1f825924ef2d\") " pod="openshift-marketplace/community-operators-lbx9m" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.440397 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/273358a6-107e-4e31-aaff-1f825924ef2d-catalog-content\") pod \"community-operators-lbx9m\" (UID: \"273358a6-107e-4e31-aaff-1f825924ef2d\") " pod="openshift-marketplace/community-operators-lbx9m" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.441115 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/273358a6-107e-4e31-aaff-1f825924ef2d-catalog-content\") pod \"community-operators-lbx9m\" (UID: \"273358a6-107e-4e31-aaff-1f825924ef2d\") " pod="openshift-marketplace/community-operators-lbx9m" Oct 08 14:25:26 crc kubenswrapper[4624]: E1008 14:25:26.441745 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:26.941718472 +0000 UTC m=+152.092653549 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.445247 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/273358a6-107e-4e31-aaff-1f825924ef2d-utilities\") pod \"community-operators-lbx9m\" (UID: \"273358a6-107e-4e31-aaff-1f825924ef2d\") " pod="openshift-marketplace/community-operators-lbx9m" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.489865 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwzzb\" (UniqueName: \"kubernetes.io/projected/273358a6-107e-4e31-aaff-1f825924ef2d-kube-api-access-fwzzb\") pod \"community-operators-lbx9m\" (UID: \"273358a6-107e-4e31-aaff-1f825924ef2d\") " pod="openshift-marketplace/community-operators-lbx9m" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.500137 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg" podStartSLOduration=4.500117129 podStartE2EDuration="4.500117129s" podCreationTimestamp="2025-10-08 14:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:26.498725482 +0000 UTC m=+151.649660569" watchObservedRunningTime="2025-10-08 14:25:26.500117129 +0000 UTC m=+151.651052206" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.541348 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:26 crc kubenswrapper[4624]: E1008 14:25:26.543263 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-08 14:25:27.043238596 +0000 UTC m=+152.194173663 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.602134 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lbx9m" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.643277 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:26 crc kubenswrapper[4624]: E1008 14:25:26.643579 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:27.143532208 +0000 UTC m=+152.294467295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.643837 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:26 crc kubenswrapper[4624]: E1008 14:25:26.644305 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:27.144298669 +0000 UTC m=+152.295233746 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.718987 4624 patch_prober.go:28] interesting pod/router-default-5444994796-fbw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 14:25:26 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Oct 08 14:25:26 crc kubenswrapper[4624]: [+]process-running ok Oct 08 14:25:26 crc kubenswrapper[4624]: healthz check failed Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.719037 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fbw8l" podUID="d96a6e0b-5e82-48ad-b930-b7b82620830a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.745222 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:26 crc kubenswrapper[4624]: E1008 14:25:26.745595 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:27.245578227 +0000 UTC m=+152.396513304 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.850326 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:26 crc kubenswrapper[4624]: E1008 14:25:26.850739 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:27.350622646 +0000 UTC m=+152.501557723 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:26 crc kubenswrapper[4624]: I1008 14:25:26.951014 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:26 crc kubenswrapper[4624]: E1008 14:25:26.951760 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:27.45173879 +0000 UTC m=+152.602673877 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.052506 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:27 crc kubenswrapper[4624]: E1008 14:25:27.052804 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:27.552792692 +0000 UTC m=+152.703727769 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.059854 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wt4rs"] Oct 08 14:25:27 crc kubenswrapper[4624]: W1008 14:25:27.065787 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25349b6_d167_4846_884c_9c057b5c6491.slice/crio-1eef3d5baac2e952a81d4f42d3b1384d4789ffe4204dbb770df8f75238c8c150 WatchSource:0}: Error finding container 1eef3d5baac2e952a81d4f42d3b1384d4789ffe4204dbb770df8f75238c8c150: Status 404 returned error can't find the container with id 1eef3d5baac2e952a81d4f42d3b1384d4789ffe4204dbb770df8f75238c8c150 Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.092776 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.092816 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.149050 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5q99v" Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.153552 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:27 crc kubenswrapper[4624]: E1008 14:25:27.155045 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:27.655024146 +0000 UTC m=+152.805959223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.255154 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:27 crc kubenswrapper[4624]: E1008 14:25:27.256723 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:27.756707458 +0000 UTC m=+152.907642535 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.288802 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mnjz6"] Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.309405 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.310997 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.312175 4624 patch_prober.go:28] interesting pod/apiserver-76f77b778f-ljq2l container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.22:8443/livez\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.312211 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" podUID="3c901fd1-c23c-47f0-b44f-102a1965abd6" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.22:8443/livez\": dial tcp 10.217.0.22:8443: connect: connection refused" Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.346199 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qgsc5"] Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.357449 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") 
" Oct 08 14:25:27 crc kubenswrapper[4624]: E1008 14:25:27.358512 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:27.858496508 +0000 UTC m=+153.009431585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.459735 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:27 crc kubenswrapper[4624]: E1008 14:25:27.460154 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:27.960135014 +0000 UTC m=+153.111070091 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.477858 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebee26eb-b32e-459d-b4d2-7a36a325f08b" path="/var/lib/kubelet/pods/ebee26eb-b32e-459d-b4d2-7a36a325f08b/volumes" Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.478780 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qgsc5" event={"ID":"e17b3f85-b77b-4639-9993-66db89499fcf","Type":"ContainerStarted","Data":"2c68f34db854b7a2ef01fb3e19917d4f20887b6aba40de6f76134628d29c1a2e"} Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.478918 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wt4rs" event={"ID":"a25349b6-d167-4846-884c-9c057b5c6491","Type":"ContainerStarted","Data":"1eef3d5baac2e952a81d4f42d3b1384d4789ffe4204dbb770df8f75238c8c150"} Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.492806 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mnjz6" event={"ID":"78f43ab9-6c12-4d67-9c33-b186ebcef93c","Type":"ContainerStarted","Data":"4e64a5963ebbd37aee2b296ccebce9a68127691876dbba730ab62b0532a8bcf4"} Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.492843 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg" Oct 08 
14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.540292 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg" Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.563338 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:27 crc kubenswrapper[4624]: E1008 14:25:27.563518 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:28.063487718 +0000 UTC m=+153.214422795 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.563680 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:27 crc kubenswrapper[4624]: E1008 14:25:27.565335 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:28.065322138 +0000 UTC m=+153.216257275 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.665582 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:27 crc kubenswrapper[4624]: E1008 14:25:27.666040 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:28.16602179 +0000 UTC m=+153.316956867 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.713114 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.719506 4624 patch_prober.go:28] interesting pod/router-default-5444994796-fbw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 14:25:27 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Oct 08 14:25:27 crc kubenswrapper[4624]: [+]process-running ok Oct 08 14:25:27 crc kubenswrapper[4624]: healthz check failed Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.719569 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fbw8l" podUID="d96a6e0b-5e82-48ad-b930-b7b82620830a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.767396 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lbx9m"] Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.767652 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:27 crc kubenswrapper[4624]: E1008 14:25:27.768044 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:28.268030278 +0000 UTC m=+153.418965365 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:27 crc kubenswrapper[4624]: W1008 14:25:27.775314 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod273358a6_107e_4e31_aaff_1f825924ef2d.slice/crio-fe73e23ce89f2a44b9452c93b05e8131c0fdfe473a7cdce6d9bb644f6530039f WatchSource:0}: Error finding container fe73e23ce89f2a44b9452c93b05e8131c0fdfe473a7cdce6d9bb644f6530039f: Status 404 returned error can't find the container with id fe73e23ce89f2a44b9452c93b05e8131c0fdfe473a7cdce6d9bb644f6530039f Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.808975 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h9k57" Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.870187 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:27 crc kubenswrapper[4624]: E1008 14:25:27.871487 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:28.371469114 +0000 UTC m=+153.522404201 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.916425 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5npx6"] Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.917608 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5npx6" Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.934514 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.964271 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5npx6"] Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.971352 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtw52\" (UniqueName: \"kubernetes.io/projected/76cec727-339f-4269-b0ee-d4aec3b0d6e3-kube-api-access-dtw52\") pod \"redhat-marketplace-5npx6\" (UID: \"76cec727-339f-4269-b0ee-d4aec3b0d6e3\") " pod="openshift-marketplace/redhat-marketplace-5npx6" Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.971399 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.971460 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76cec727-339f-4269-b0ee-d4aec3b0d6e3-utilities\") pod \"redhat-marketplace-5npx6\" (UID: \"76cec727-339f-4269-b0ee-d4aec3b0d6e3\") " pod="openshift-marketplace/redhat-marketplace-5npx6" Oct 08 14:25:27 crc kubenswrapper[4624]: I1008 14:25:27.971503 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76cec727-339f-4269-b0ee-d4aec3b0d6e3-catalog-content\") pod \"redhat-marketplace-5npx6\" (UID: \"76cec727-339f-4269-b0ee-d4aec3b0d6e3\") " pod="openshift-marketplace/redhat-marketplace-5npx6" Oct 08 14:25:27 crc kubenswrapper[4624]: E1008 14:25:27.971798 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:28.471786027 +0000 UTC m=+153.622721094 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.075707 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.076017 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76cec727-339f-4269-b0ee-d4aec3b0d6e3-utilities\") pod \"redhat-marketplace-5npx6\" (UID: \"76cec727-339f-4269-b0ee-d4aec3b0d6e3\") " pod="openshift-marketplace/redhat-marketplace-5npx6" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.076089 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76cec727-339f-4269-b0ee-d4aec3b0d6e3-catalog-content\") pod \"redhat-marketplace-5npx6\" (UID: \"76cec727-339f-4269-b0ee-d4aec3b0d6e3\") " pod="openshift-marketplace/redhat-marketplace-5npx6" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.076121 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtw52\" (UniqueName: \"kubernetes.io/projected/76cec727-339f-4269-b0ee-d4aec3b0d6e3-kube-api-access-dtw52\") pod \"redhat-marketplace-5npx6\" (UID: \"76cec727-339f-4269-b0ee-d4aec3b0d6e3\") " pod="openshift-marketplace/redhat-marketplace-5npx6" Oct 08 14:25:28 crc kubenswrapper[4624]: E1008 14:25:28.076563 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:28.576544998 +0000 UTC m=+153.727480075 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.076844 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76cec727-339f-4269-b0ee-d4aec3b0d6e3-utilities\") pod \"redhat-marketplace-5npx6\" (UID: \"76cec727-339f-4269-b0ee-d4aec3b0d6e3\") " pod="openshift-marketplace/redhat-marketplace-5npx6" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.079020 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76cec727-339f-4269-b0ee-d4aec3b0d6e3-catalog-content\") pod \"redhat-marketplace-5npx6\" (UID: \"76cec727-339f-4269-b0ee-d4aec3b0d6e3\") " pod="openshift-marketplace/redhat-marketplace-5npx6" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.093326 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-x6fbb" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.099198 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-x6fbb" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.124661 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtw52\" (UniqueName: \"kubernetes.io/projected/76cec727-339f-4269-b0ee-d4aec3b0d6e3-kube-api-access-dtw52\") pod \"redhat-marketplace-5npx6\" (UID: \"76cec727-339f-4269-b0ee-d4aec3b0d6e3\") " pod="openshift-marketplace/redhat-marketplace-5npx6" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.162378 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.163051 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.169182 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.169267 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.176809 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32878d45-6648-45f2-a1ff-773403c738ab-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"32878d45-6648-45f2-a1ff-773403c738ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.176929 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32878d45-6648-45f2-a1ff-773403c738ab-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"32878d45-6648-45f2-a1ff-773403c738ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.176957 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:28 crc kubenswrapper[4624]: E1008 14:25:28.177219 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:28.677208 +0000 UTC m=+153.828143077 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.194353 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.276424 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sft5x"] Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.277558 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sft5x" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.277984 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:28 crc kubenswrapper[4624]: E1008 14:25:28.278174 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:28.778148949 +0000 UTC m=+153.929084026 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.278250 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32878d45-6648-45f2-a1ff-773403c738ab-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"32878d45-6648-45f2-a1ff-773403c738ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.278277 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.278305 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32878d45-6648-45f2-a1ff-773403c738ab-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"32878d45-6648-45f2-a1ff-773403c738ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.278388 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32878d45-6648-45f2-a1ff-773403c738ab-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"32878d45-6648-45f2-a1ff-773403c738ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 14:25:28 crc kubenswrapper[4624]: E1008 14:25:28.278669 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:28.778660883 +0000 UTC m=+153.929595960 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.324987 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32878d45-6648-45f2-a1ff-773403c738ab-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"32878d45-6648-45f2-a1ff-773403c738ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.327786 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5npx6" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.379003 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.379185 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7pn2\" (UniqueName: \"kubernetes.io/projected/a8da3440-b5d4-46cc-a1ea-d3e4b7f03269-kube-api-access-r7pn2\") pod \"redhat-marketplace-sft5x\" (UID: \"a8da3440-b5d4-46cc-a1ea-d3e4b7f03269\") " pod="openshift-marketplace/redhat-marketplace-sft5x" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.379232 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8da3440-b5d4-46cc-a1ea-d3e4b7f03269-utilities\") pod \"redhat-marketplace-sft5x\" (UID: \"a8da3440-b5d4-46cc-a1ea-d3e4b7f03269\") " pod="openshift-marketplace/redhat-marketplace-sft5x" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.379256 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8da3440-b5d4-46cc-a1ea-d3e4b7f03269-catalog-content\") pod \"redhat-marketplace-sft5x\" (UID: \"a8da3440-b5d4-46cc-a1ea-d3e4b7f03269\") " pod="openshift-marketplace/redhat-marketplace-sft5x" Oct 08 14:25:28 crc kubenswrapper[4624]: E1008 14:25:28.379381 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:28.879361126 +0000 UTC m=+154.030296203 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.382005 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76ddt" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.414806 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sft5x"] Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.480673 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7pn2\" (UniqueName: \"kubernetes.io/projected/a8da3440-b5d4-46cc-a1ea-d3e4b7f03269-kube-api-access-r7pn2\") pod \"redhat-marketplace-sft5x\" (UID: \"a8da3440-b5d4-46cc-a1ea-d3e4b7f03269\") " pod="openshift-marketplace/redhat-marketplace-sft5x" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.480787 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8da3440-b5d4-46cc-a1ea-d3e4b7f03269-utilities\") pod \"redhat-marketplace-sft5x\" (UID: \"a8da3440-b5d4-46cc-a1ea-d3e4b7f03269\") " pod="openshift-marketplace/redhat-marketplace-sft5x" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.480830 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8da3440-b5d4-46cc-a1ea-d3e4b7f03269-catalog-content\") pod \"redhat-marketplace-sft5x\" (UID: \"a8da3440-b5d4-46cc-a1ea-d3e4b7f03269\") " pod="openshift-marketplace/redhat-marketplace-sft5x" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.480944 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:28 crc kubenswrapper[4624]: E1008 14:25:28.481224 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:28.98121255 +0000 UTC m=+154.132147627 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.481461 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8da3440-b5d4-46cc-a1ea-d3e4b7f03269-utilities\") pod \"redhat-marketplace-sft5x\" (UID: \"a8da3440-b5d4-46cc-a1ea-d3e4b7f03269\") " pod="openshift-marketplace/redhat-marketplace-sft5x" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.483954 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8da3440-b5d4-46cc-a1ea-d3e4b7f03269-catalog-content\") pod \"redhat-marketplace-sft5x\" (UID: \"a8da3440-b5d4-46cc-a1ea-d3e4b7f03269\") " pod="openshift-marketplace/redhat-marketplace-sft5x" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.491318 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.510840 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qgsc5" event={"ID":"e17b3f85-b77b-4639-9993-66db89499fcf","Type":"ContainerStarted","Data":"dec605386a48556e6a1e10c2873b52ce22193e47b7a5df18b0abe60e0c435b3a"} Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.512140 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbx9m" event={"ID":"273358a6-107e-4e31-aaff-1f825924ef2d","Type":"ContainerStarted","Data":"fe73e23ce89f2a44b9452c93b05e8131c0fdfe473a7cdce6d9bb644f6530039f"} Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.518608 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7pn2\" (UniqueName: \"kubernetes.io/projected/a8da3440-b5d4-46cc-a1ea-d3e4b7f03269-kube-api-access-r7pn2\") pod \"redhat-marketplace-sft5x\" (UID: \"a8da3440-b5d4-46cc-a1ea-d3e4b7f03269\") " pod="openshift-marketplace/redhat-marketplace-sft5x" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.520304 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wt4rs" event={"ID":"a25349b6-d167-4846-884c-9c057b5c6491","Type":"ContainerStarted","Data":"d6a0b45644eda26abf376e436789f0438402b423bd1c28b1cba7cfa7c6ca2257"} Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.538324 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mnjz6" event={"ID":"78f43ab9-6c12-4d67-9c33-b186ebcef93c","Type":"ContainerStarted","Data":"134135c13aee222daf6649a7cec24fa2aacd2129947193dbe31112f637d6ee1a"} Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.584349 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:28 crc kubenswrapper[4624]: 
E1008 14:25:28.584568 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:29.084538713 +0000 UTC m=+154.235473790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.584645 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:28 crc kubenswrapper[4624]: E1008 14:25:28.585095 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:29.085082907 +0000 UTC m=+154.236018034 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.592734 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sft5x" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.614373 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.629813 4624 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-5dd84 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 08 14:25:28 crc kubenswrapper[4624]: [+]log ok Oct 08 14:25:28 crc kubenswrapper[4624]: [+]etcd ok Oct 08 14:25:28 crc kubenswrapper[4624]: [+]etcd-readiness ok Oct 08 14:25:28 crc kubenswrapper[4624]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 08 14:25:28 crc kubenswrapper[4624]: [-]informer-sync failed: reason withheld Oct 08 14:25:28 crc kubenswrapper[4624]: [+]poststarthook/generic-apiserver-start-informers ok Oct 08 14:25:28 crc kubenswrapper[4624]: [+]poststarthook/max-in-flight-filter ok Oct 08 14:25:28 crc kubenswrapper[4624]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 08 14:25:28 crc kubenswrapper[4624]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 08 14:25:28 crc kubenswrapper[4624]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 08 14:25:28 crc kubenswrapper[4624]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 08 14:25:28 crc kubenswrapper[4624]: [+]shutdown ok Oct 08 14:25:28 crc kubenswrapper[4624]: readyz check failed Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.629870 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" podUID="df49d2ff-640d-4cf1-812c-8b7275df6292" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.686094 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:28 crc kubenswrapper[4624]: E1008 14:25:28.687236 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:29.187189438 +0000 UTC m=+154.338124515 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.718088 4624 patch_prober.go:28] interesting pod/router-default-5444994796-fbw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 14:25:28 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Oct 08 14:25:28 crc kubenswrapper[4624]: [+]process-running ok Oct 08 14:25:28 crc kubenswrapper[4624]: healthz check failed Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.718134 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fbw8l" podUID="d96a6e0b-5e82-48ad-b930-b7b82620830a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.787780 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:28 crc kubenswrapper[4624]: E1008 14:25:28.788073 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:29.288061715 +0000 UTC m=+154.438996792 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.889224 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:28 crc kubenswrapper[4624]: E1008 14:25:28.889356 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:29.389337013 +0000 UTC m=+154.540272090 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.889554 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:28 crc kubenswrapper[4624]: E1008 14:25:28.889908 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:29.389901069 +0000 UTC m=+154.540836146 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.906980 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7mwws"] Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.914787 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7mwws" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.918217 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.990405 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.990649 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e21258b9-106f-4b15-aa2c-7e65598341c2-catalog-content\") pod \"redhat-operators-7mwws\" (UID: \"e21258b9-106f-4b15-aa2c-7e65598341c2\") " pod="openshift-marketplace/redhat-operators-7mwws" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.990716 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e21258b9-106f-4b15-aa2c-7e65598341c2-utilities\") pod \"redhat-operators-7mwws\" (UID: \"e21258b9-106f-4b15-aa2c-7e65598341c2\") " pod="openshift-marketplace/redhat-operators-7mwws" Oct 08 14:25:28 crc kubenswrapper[4624]: I1008 14:25:28.990759 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqf4z\" (UniqueName: \"kubernetes.io/projected/e21258b9-106f-4b15-aa2c-7e65598341c2-kube-api-access-fqf4z\") pod \"redhat-operators-7mwws\" (UID: \"e21258b9-106f-4b15-aa2c-7e65598341c2\") " pod="openshift-marketplace/redhat-operators-7mwws" Oct 08 14:25:28 crc kubenswrapper[4624]: E1008 14:25:28.990884 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:29.490868009 +0000 UTC m=+154.641803086 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.021890 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7mwws"] Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.093679 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e21258b9-106f-4b15-aa2c-7e65598341c2-catalog-content\") pod \"redhat-operators-7mwws\" (UID: \"e21258b9-106f-4b15-aa2c-7e65598341c2\") " pod="openshift-marketplace/redhat-operators-7mwws" Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.093961 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e21258b9-106f-4b15-aa2c-7e65598341c2-utilities\") pod \"redhat-operators-7mwws\" (UID: \"e21258b9-106f-4b15-aa2c-7e65598341c2\") " pod="openshift-marketplace/redhat-operators-7mwws" Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.094099 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.094194 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqf4z\" (UniqueName: \"kubernetes.io/projected/e21258b9-106f-4b15-aa2c-7e65598341c2-kube-api-access-fqf4z\") pod \"redhat-operators-7mwws\" (UID: \"e21258b9-106f-4b15-aa2c-7e65598341c2\") " pod="openshift-marketplace/redhat-operators-7mwws" Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.095136 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e21258b9-106f-4b15-aa2c-7e65598341c2-catalog-content\") pod \"redhat-operators-7mwws\" (UID: \"e21258b9-106f-4b15-aa2c-7e65598341c2\") " pod="openshift-marketplace/redhat-operators-7mwws" Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.095435 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e21258b9-106f-4b15-aa2c-7e65598341c2-utilities\") pod \"redhat-operators-7mwws\" (UID: \"e21258b9-106f-4b15-aa2c-7e65598341c2\") " pod="openshift-marketplace/redhat-operators-7mwws" Oct 08 14:25:29 crc kubenswrapper[4624]: E1008 14:25:29.095726 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:29.595715143 +0000 UTC m=+154.746650220 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.102072 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5npx6"] Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.174798 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqf4z\" (UniqueName: \"kubernetes.io/projected/e21258b9-106f-4b15-aa2c-7e65598341c2-kube-api-access-fqf4z\") pod \"redhat-operators-7mwws\" (UID: \"e21258b9-106f-4b15-aa2c-7e65598341c2\") " pod="openshift-marketplace/redhat-operators-7mwws" Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.194818 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:29 crc kubenswrapper[4624]: E1008 14:25:29.195380 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:29.695364227 +0000 UTC m=+154.846299304 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.203572 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.245021 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7mwws" Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.298234 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.298307 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9jhqk"] Oct 08 14:25:29 crc kubenswrapper[4624]: E1008 14:25:29.298539 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-08 14:25:29.798528136 +0000 UTC m=+154.949463213 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.299767 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9jhqk" Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.356464 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9jhqk"] Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.399388 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:29 crc kubenswrapper[4624]: E1008 14:25:29.399771 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:29.899747533 +0000 UTC m=+155.050682610 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.400112 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwvsh\" (UniqueName: \"kubernetes.io/projected/7cd6bd92-2cce-46a2-881d-57e97e9b00bc-kube-api-access-lwvsh\") pod \"redhat-operators-9jhqk\" (UID: \"7cd6bd92-2cce-46a2-881d-57e97e9b00bc\") " pod="openshift-marketplace/redhat-operators-9jhqk" Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.400177 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.400225 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cd6bd92-2cce-46a2-881d-57e97e9b00bc-utilities\") pod \"redhat-operators-9jhqk\" (UID: \"7cd6bd92-2cce-46a2-881d-57e97e9b00bc\") " pod="openshift-marketplace/redhat-operators-9jhqk" Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.400252 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cd6bd92-2cce-46a2-881d-57e97e9b00bc-catalog-content\") pod \"redhat-operators-9jhqk\" (UID: \"7cd6bd92-2cce-46a2-881d-57e97e9b00bc\") " pod="openshift-marketplace/redhat-operators-9jhqk" Oct 08 14:25:29 crc kubenswrapper[4624]: E1008 14:25:29.400556 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:29.900544794 +0000 UTC m=+155.051479871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.480935 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sft5x"] Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.501217 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.501395 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cd6bd92-2cce-46a2-881d-57e97e9b00bc-utilities\") pod \"redhat-operators-9jhqk\" (UID: \"7cd6bd92-2cce-46a2-881d-57e97e9b00bc\") " pod="openshift-marketplace/redhat-operators-9jhqk" Oct 08 14:25:29 crc kubenswrapper[4624]: E1008 14:25:29.501458 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:30.001427682 +0000 UTC m=+155.152362759 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.501489 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cd6bd92-2cce-46a2-881d-57e97e9b00bc-catalog-content\") pod \"redhat-operators-9jhqk\" (UID: \"7cd6bd92-2cce-46a2-881d-57e97e9b00bc\") " pod="openshift-marketplace/redhat-operators-9jhqk" Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.501539 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwvsh\" (UniqueName: \"kubernetes.io/projected/7cd6bd92-2cce-46a2-881d-57e97e9b00bc-kube-api-access-lwvsh\") pod \"redhat-operators-9jhqk\" (UID: \"7cd6bd92-2cce-46a2-881d-57e97e9b00bc\") " pod="openshift-marketplace/redhat-operators-9jhqk" Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.501563 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.501822 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cd6bd92-2cce-46a2-881d-57e97e9b00bc-utilities\") pod \"redhat-operators-9jhqk\" (UID: \"7cd6bd92-2cce-46a2-881d-57e97e9b00bc\") " pod="openshift-marketplace/redhat-operators-9jhqk" Oct 08 14:25:29 crc kubenswrapper[4624]: E1008 14:25:29.501832 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:30.001825812 +0000 UTC m=+155.152760889 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.502215 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cd6bd92-2cce-46a2-881d-57e97e9b00bc-catalog-content\") pod \"redhat-operators-9jhqk\" (UID: \"7cd6bd92-2cce-46a2-881d-57e97e9b00bc\") " pod="openshift-marketplace/redhat-operators-9jhqk" Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.546999 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwvsh\" (UniqueName: \"kubernetes.io/projected/7cd6bd92-2cce-46a2-881d-57e97e9b00bc-kube-api-access-lwvsh\") pod \"redhat-operators-9jhqk\" (UID: \"7cd6bd92-2cce-46a2-881d-57e97e9b00bc\") " pod="openshift-marketplace/redhat-operators-9jhqk" Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.558236 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sft5x" event={"ID":"a8da3440-b5d4-46cc-a1ea-d3e4b7f03269","Type":"ContainerStarted","Data":"bafb6ae460710927160355ec68a77fcf47c2dd770bbd830f6039c30aa2390451"} Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.560042 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbx9m" event={"ID":"273358a6-107e-4e31-aaff-1f825924ef2d","Type":"ContainerStarted","Data":"753c125f23dc2127485d70f25c7c8260a15723bc0f447ae28827be4255dae2d0"} Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.560849 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"32878d45-6648-45f2-a1ff-773403c738ab","Type":"ContainerStarted","Data":"1ea64497106ac0dc31545d587950c7fbaa1bde910fea7a2cafd52ec7e122baf7"} Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.567274 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5npx6" event={"ID":"76cec727-339f-4269-b0ee-d4aec3b0d6e3","Type":"ContainerStarted","Data":"e9a851a9fca22441aae4e7f54eef386eab5bdad7ad3f3f7c004943a681e5ccea"} Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.597326 4624 generic.go:334] "Generic (PLEG): container finished" podID="a25349b6-d167-4846-884c-9c057b5c6491" containerID="d6a0b45644eda26abf376e436789f0438402b423bd1c28b1cba7cfa7c6ca2257" exitCode=0 Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.597413 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wt4rs" event={"ID":"a25349b6-d167-4846-884c-9c057b5c6491","Type":"ContainerDied","Data":"d6a0b45644eda26abf376e436789f0438402b423bd1c28b1cba7cfa7c6ca2257"} Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.601857 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.602200 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:29 crc kubenswrapper[4624]: E1008 14:25:29.602343 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:30.10232022 +0000 UTC m=+155.253255287 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.602413 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:29 crc kubenswrapper[4624]: E1008 14:25:29.602687 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:30.102676869 +0000 UTC m=+155.253611946 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.613927 4624 generic.go:334] "Generic (PLEG): container finished" podID="78f43ab9-6c12-4d67-9c33-b186ebcef93c" containerID="134135c13aee222daf6649a7cec24fa2aacd2129947193dbe31112f637d6ee1a" exitCode=0 Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.614014 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mnjz6" event={"ID":"78f43ab9-6c12-4d67-9c33-b186ebcef93c","Type":"ContainerDied","Data":"134135c13aee222daf6649a7cec24fa2aacd2129947193dbe31112f637d6ee1a"} Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.635912 4624 generic.go:334] "Generic (PLEG): container finished" podID="e17b3f85-b77b-4639-9993-66db89499fcf" containerID="dec605386a48556e6a1e10c2873b52ce22193e47b7a5df18b0abe60e0c435b3a" exitCode=0 Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.636884 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qgsc5" event={"ID":"e17b3f85-b77b-4639-9993-66db89499fcf","Type":"ContainerDied","Data":"dec605386a48556e6a1e10c2873b52ce22193e47b7a5df18b0abe60e0c435b3a"} Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.658195 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9jhqk" Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.706687 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:29 crc kubenswrapper[4624]: E1008 14:25:29.707637 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:30.207619876 +0000 UTC m=+155.358554953 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.721272 4624 patch_prober.go:28] interesting pod/router-default-5444994796-fbw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 14:25:29 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Oct 08 14:25:29 crc kubenswrapper[4624]: [+]process-running ok Oct 08 14:25:29 crc kubenswrapper[4624]: healthz check failed Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.721324 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fbw8l" podUID="d96a6e0b-5e82-48ad-b930-b7b82620830a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.807758 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:29 crc kubenswrapper[4624]: E1008 14:25:29.808151 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:30.308135664 +0000 UTC m=+155.459070731 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.908321 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:29 crc kubenswrapper[4624]: E1008 14:25:29.908506 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:30.408480597 +0000 UTC m=+155.559415674 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:29 crc kubenswrapper[4624]: I1008 14:25:29.908604 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:29 crc kubenswrapper[4624]: E1008 14:25:29.908907 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:30.408898768 +0000 UTC m=+155.559833845 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.009421 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:30 crc kubenswrapper[4624]: E1008 14:25:30.009664 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:30.509603301 +0000 UTC m=+155.660538378 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.009786 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:30 crc kubenswrapper[4624]: E1008 14:25:30.010368 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:30.510356801 +0000 UTC m=+155.661291878 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.066915 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7mwws"] Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.076107 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.076156 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:25:30 crc kubenswrapper[4624]: W1008 14:25:30.076571 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode21258b9_106f_4b15_aa2c_7e65598341c2.slice/crio-7cb2831375c421f121c7940bd6a4d3cad38483d05aaa8707b0f1913c86ceb8d8 WatchSource:0}: Error finding container 7cb2831375c421f121c7940bd6a4d3cad38483d05aaa8707b0f1913c86ceb8d8: Status 404 returned error can't find the container with id 7cb2831375c421f121c7940bd6a4d3cad38483d05aaa8707b0f1913c86ceb8d8 Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.112165 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:30 crc kubenswrapper[4624]: E1008 14:25:30.112527 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:30.612501643 +0000 UTC m=+155.763436720 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.215417 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:30 crc kubenswrapper[4624]: E1008 14:25:30.215871 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:30.715850316 +0000 UTC m=+155.866785393 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.319153 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:30 crc kubenswrapper[4624]: E1008 14:25:30.319686 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:30.819669623 +0000 UTC m=+155.970604700 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.421114 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:30 crc kubenswrapper[4624]: E1008 14:25:30.421474 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:30.921459355 +0000 UTC m=+156.072394422 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.454329 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9jhqk"] Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.521883 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:30 crc kubenswrapper[4624]: E1008 14:25:30.522089 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:31.022058775 +0000 UTC m=+156.172993852 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.522279 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:30 crc kubenswrapper[4624]: E1008 14:25:30.522628 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:31.02260397 +0000 UTC m=+156.173539047 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.584464 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.585988 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.602990 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.605678 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.615707 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.623603 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.623854 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ac5549a-aa6e-4136-ba8f-72ea118d92e9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6ac5549a-aa6e-4136-ba8f-72ea118d92e9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.623939 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ac5549a-aa6e-4136-ba8f-72ea118d92e9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6ac5549a-aa6e-4136-ba8f-72ea118d92e9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 14:25:30 crc kubenswrapper[4624]: E1008 14:25:30.624150 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:31.124130325 +0000 UTC m=+156.275065402 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.666907 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mwws" event={"ID":"e21258b9-106f-4b15-aa2c-7e65598341c2","Type":"ContainerStarted","Data":"7cb2831375c421f121c7940bd6a4d3cad38483d05aaa8707b0f1913c86ceb8d8"} Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.677294 4624 generic.go:334] "Generic (PLEG): container finished" podID="273358a6-107e-4e31-aaff-1f825924ef2d" containerID="753c125f23dc2127485d70f25c7c8260a15723bc0f447ae28827be4255dae2d0" exitCode=0 Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.678108 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbx9m" event={"ID":"273358a6-107e-4e31-aaff-1f825924ef2d","Type":"ContainerDied","Data":"753c125f23dc2127485d70f25c7c8260a15723bc0f447ae28827be4255dae2d0"} Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.691001 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"32878d45-6648-45f2-a1ff-773403c738ab","Type":"ContainerStarted","Data":"e841704197eeae700b134eb32756d31773d373d5d174a73f52151a1cd0aedee9"} Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.692927 4624 generic.go:334] "Generic (PLEG): container finished" podID="76cec727-339f-4269-b0ee-d4aec3b0d6e3" containerID="43108ef7aef74ae691aad9aea7b72a7bb1b6cab74956679f58e0f55c63303309" exitCode=0 Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.692970 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5npx6" event={"ID":"76cec727-339f-4269-b0ee-d4aec3b0d6e3","Type":"ContainerDied","Data":"43108ef7aef74ae691aad9aea7b72a7bb1b6cab74956679f58e0f55c63303309"} Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.693591 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jhqk" event={"ID":"7cd6bd92-2cce-46a2-881d-57e97e9b00bc","Type":"ContainerStarted","Data":"5ed6019b779331875019f4f2ade036a3f5931593788cb8a19c2adff12efa294c"} Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.707932 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wmls6" event={"ID":"66542923-c777-4a99-a3a7-9ab975b8b0c3","Type":"ContainerStarted","Data":"bbb2286a02110704cfd0730c388abb88c0970df55c2f92d49fd9dae9218473be"} Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.727190 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ac5549a-aa6e-4136-ba8f-72ea118d92e9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6ac5549a-aa6e-4136-ba8f-72ea118d92e9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.727267 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ac5549a-aa6e-4136-ba8f-72ea118d92e9-kubelet-dir\") pod 
\"revision-pruner-8-crc\" (UID: \"6ac5549a-aa6e-4136-ba8f-72ea118d92e9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.727296 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:30 crc kubenswrapper[4624]: E1008 14:25:30.727628 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:31.227615682 +0000 UTC m=+156.378550759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.728698 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ac5549a-aa6e-4136-ba8f-72ea118d92e9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6ac5549a-aa6e-4136-ba8f-72ea118d92e9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.728850 4624 patch_prober.go:28] interesting pod/router-default-5444994796-fbw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 14:25:30 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Oct 08 14:25:30 crc kubenswrapper[4624]: [+]process-running ok Oct 08 14:25:30 crc kubenswrapper[4624]: healthz check failed Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.728879 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fbw8l" podUID="d96a6e0b-5e82-48ad-b930-b7b82620830a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.787013 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ac5549a-aa6e-4136-ba8f-72ea118d92e9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6ac5549a-aa6e-4136-ba8f-72ea118d92e9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.828582 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:30 crc kubenswrapper[4624]: E1008 14:25:30.828875 4624 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:31.328860759 +0000 UTC m=+156.479795836 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.919085 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 14:25:30 crc kubenswrapper[4624]: I1008 14:25:30.930075 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:30 crc kubenswrapper[4624]: E1008 14:25:30.930373 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:31.430360744 +0000 UTC m=+156.581295821 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.030962 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:31 crc kubenswrapper[4624]: E1008 14:25:31.031113 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:31.531085867 +0000 UTC m=+156.682020944 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.031184 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:31 crc kubenswrapper[4624]: E1008 14:25:31.031531 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:31.531523249 +0000 UTC m=+156.682458326 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.133727 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:31 crc kubenswrapper[4624]: E1008 14:25:31.134135 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:31.634119111 +0000 UTC m=+156.785054188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.235534 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:31 crc kubenswrapper[4624]: E1008 14:25:31.236070 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:31.736057867 +0000 UTC m=+156.886992944 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.338241 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:31 crc kubenswrapper[4624]: E1008 14:25:31.338687 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:31.83863001 +0000 UTC m=+156.989565087 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.372562 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.440417 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:31 crc kubenswrapper[4624]: E1008 14:25:31.440778 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:31.940757981 +0000 UTC m=+157.091693118 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.542109 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:31 crc kubenswrapper[4624]: E1008 14:25:31.542415 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:32.042400099 +0000 UTC m=+157.193335176 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.644421 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:31 crc kubenswrapper[4624]: E1008 14:25:31.644816 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:32.144804358 +0000 UTC m=+157.295739435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.716813 4624 patch_prober.go:28] interesting pod/router-default-5444994796-fbw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 14:25:31 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Oct 08 14:25:31 crc kubenswrapper[4624]: [+]process-running ok Oct 08 14:25:31 crc kubenswrapper[4624]: healthz check failed Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.717162 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fbw8l" podUID="d96a6e0b-5e82-48ad-b930-b7b82620830a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.721799 4624 generic.go:334] "Generic (PLEG): container finished" podID="a8da3440-b5d4-46cc-a1ea-d3e4b7f03269" containerID="7105753a4b4d5202c13a4ac6ba50af22f6e04665245c83b9ce4d6eec22c7cb3b" exitCode=0 Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.721871 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sft5x" event={"ID":"a8da3440-b5d4-46cc-a1ea-d3e4b7f03269","Type":"ContainerDied","Data":"7105753a4b4d5202c13a4ac6ba50af22f6e04665245c83b9ce4d6eec22c7cb3b"} Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.723562 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jhqk" event={"ID":"7cd6bd92-2cce-46a2-881d-57e97e9b00bc","Type":"ContainerStarted","Data":"72ed6bfa3173c450433e831ce177db22a9b58ac9b068d0433e974d0d6d41b110"} Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.727276 4624 generic.go:334] "Generic (PLEG): 
container finished" podID="e21258b9-106f-4b15-aa2c-7e65598341c2" containerID="da8f4f81c3b5e5e6e774bbceb0c03c560db3b333aed6b8ac5b3a67be7dbc7b06" exitCode=0 Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.727370 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mwws" event={"ID":"e21258b9-106f-4b15-aa2c-7e65598341c2","Type":"ContainerDied","Data":"da8f4f81c3b5e5e6e774bbceb0c03c560db3b333aed6b8ac5b3a67be7dbc7b06"} Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.728581 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6ac5549a-aa6e-4136-ba8f-72ea118d92e9","Type":"ContainerStarted","Data":"6043566d13ac4331b052b1a0a5f167f8db14c796d54d2f0d93f003a02661bc8e"} Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.742272 4624 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.746215 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:31 crc kubenswrapper[4624]: E1008 14:25:31.746376 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:32.246362004 +0000 UTC m=+157.397297081 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.747336 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:31 crc kubenswrapper[4624]: E1008 14:25:31.747673 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:32.247664579 +0000 UTC m=+157.398599656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.816847 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.816828205 podStartE2EDuration="3.816828205s" podCreationTimestamp="2025-10-08 14:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:31.815199201 +0000 UTC m=+156.966134278" watchObservedRunningTime="2025-10-08 14:25:31.816828205 +0000 UTC m=+156.967763292" Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.849297 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:31 crc kubenswrapper[4624]: E1008 14:25:31.849589 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:32.349540253 +0000 UTC m=+157.500475330 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.849685 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:31 crc kubenswrapper[4624]: E1008 14:25:31.850067 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:32.350058137 +0000 UTC m=+157.500993394 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.951005 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:31 crc kubenswrapper[4624]: E1008 14:25:31.951274 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:32.451226172 +0000 UTC m=+157.602161249 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:31 crc kubenswrapper[4624]: I1008 14:25:31.951476 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:31 crc kubenswrapper[4624]: E1008 14:25:31.951968 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:32.451949881 +0000 UTC m=+157.602884958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.053004 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:32 crc kubenswrapper[4624]: E1008 14:25:32.053246 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:32.553211829 +0000 UTC m=+157.704146916 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.053631 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:32 crc kubenswrapper[4624]: E1008 14:25:32.054073 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:32.554062782 +0000 UTC m=+157.704997859 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.101529 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5dd84" Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.155216 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:32 crc kubenswrapper[4624]: E1008 14:25:32.155411 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:32.655377081 +0000 UTC m=+157.806312158 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.156059 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:32 crc kubenswrapper[4624]: E1008 14:25:32.156634 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:32.656606294 +0000 UTC m=+157.807541521 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.257756 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:32 crc kubenswrapper[4624]: E1008 14:25:32.257965 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:32.757926004 +0000 UTC m=+157.908861071 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.258147 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:32 crc kubenswrapper[4624]: E1008 14:25:32.258504 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:32.758483179 +0000 UTC m=+157.909418256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.314762 4624 patch_prober.go:28] interesting pod/apiserver-76f77b778f-ljq2l container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 08 14:25:32 crc kubenswrapper[4624]: [+]log ok Oct 08 14:25:32 crc kubenswrapper[4624]: [+]etcd ok Oct 08 14:25:32 crc kubenswrapper[4624]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 08 14:25:32 crc kubenswrapper[4624]: [+]poststarthook/generic-apiserver-start-informers ok Oct 08 14:25:32 crc kubenswrapper[4624]: [+]poststarthook/max-in-flight-filter ok Oct 08 14:25:32 crc kubenswrapper[4624]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 08 14:25:32 crc kubenswrapper[4624]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 08 14:25:32 crc kubenswrapper[4624]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Oct 08 14:25:32 crc kubenswrapper[4624]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 08 14:25:32 crc kubenswrapper[4624]: [+]poststarthook/project.openshift.io-projectcache ok Oct 08 14:25:32 crc kubenswrapper[4624]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 08 14:25:32 crc kubenswrapper[4624]: [+]poststarthook/openshift.io-startinformers ok Oct 08 14:25:32 crc kubenswrapper[4624]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 08 14:25:32 crc kubenswrapper[4624]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 08 14:25:32 crc kubenswrapper[4624]: livez check failed Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.314902 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" podUID="3c901fd1-c23c-47f0-b44f-102a1965abd6" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.359375 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:32 crc kubenswrapper[4624]: E1008 14:25:32.359571 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:32.859545051 +0000 UTC m=+158.010480128 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.359901 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:32 crc kubenswrapper[4624]: E1008 14:25:32.360194 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:32.860181088 +0000 UTC m=+158.011116165 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.461002 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:32 crc kubenswrapper[4624]: E1008 14:25:32.461212 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:32.961180459 +0000 UTC m=+158.112115536 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.461340 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:32 crc kubenswrapper[4624]: E1008 14:25:32.461752 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:32.961743074 +0000 UTC m=+158.112678151 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.562500 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:32 crc kubenswrapper[4624]: E1008 14:25:32.562802 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 14:25:33.062773726 +0000 UTC m=+158.213708813 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.562928 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:32 crc kubenswrapper[4624]: E1008 14:25:32.563271 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 14:25:33.063256809 +0000 UTC m=+158.214191896 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qr5w8" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.619828 4624 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-10-08T14:25:31.742304975Z","Handler":null,"Name":""} Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.640808 4624 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.640866 4624 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.673169 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.681333 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.714704 4624 patch_prober.go:28] interesting pod/router-default-5444994796-fbw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 14:25:32 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Oct 08 14:25:32 crc kubenswrapper[4624]: [+]process-running ok Oct 08 14:25:32 crc kubenswrapper[4624]: healthz check failed Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.714763 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fbw8l" podUID="d96a6e0b-5e82-48ad-b930-b7b82620830a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.737515 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wmls6" event={"ID":"66542923-c777-4a99-a3a7-9ab975b8b0c3","Type":"ContainerStarted","Data":"517c75e9f6abfd54040d7bd021dd1d9095fb3dbf036396df29bb98e2ca59a45e"} Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.737557 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wmls6" event={"ID":"66542923-c777-4a99-a3a7-9ab975b8b0c3","Type":"ContainerStarted","Data":"7fbcf1ac2ce992b204e7c84eb26ed1a64eb0cfbff45a4df5737b6f5e83067f92"} Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.739494 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6ac5549a-aa6e-4136-ba8f-72ea118d92e9","Type":"ContainerStarted","Data":"c53050811d2f7caab6f046ef6fb738336353f87df98c5b9aaa64a4fc88bf5149"} Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.744340 4624 generic.go:334] "Generic (PLEG): container finished" podID="32878d45-6648-45f2-a1ff-773403c738ab" containerID="e841704197eeae700b134eb32756d31773d373d5d174a73f52151a1cd0aedee9" exitCode=0 Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.744446 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"32878d45-6648-45f2-a1ff-773403c738ab","Type":"ContainerDied","Data":"e841704197eeae700b134eb32756d31773d373d5d174a73f52151a1cd0aedee9"} Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.748057 4624 generic.go:334] "Generic (PLEG): container finished" podID="7cd6bd92-2cce-46a2-881d-57e97e9b00bc" containerID="72ed6bfa3173c450433e831ce177db22a9b58ac9b068d0433e974d0d6d41b110" exitCode=0 Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.748570 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jhqk" event={"ID":"7cd6bd92-2cce-46a2-881d-57e97e9b00bc","Type":"ContainerDied","Data":"72ed6bfa3173c450433e831ce177db22a9b58ac9b068d0433e974d0d6d41b110"} Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.766011 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.76598709 podStartE2EDuration="2.76598709s" podCreationTimestamp="2025-10-08 14:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:32.754764439 +0000 UTC m=+157.905699526" watchObservedRunningTime="2025-10-08 14:25:32.76598709 
+0000 UTC m=+157.916922167" Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.776441 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.786884 4624 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.786922 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:32 crc kubenswrapper[4624]: I1008 14:25:32.824161 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qr5w8\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:33 crc kubenswrapper[4624]: I1008 14:25:33.057868 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:33 crc kubenswrapper[4624]: I1008 14:25:33.337457 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qr5w8"] Oct 08 14:25:33 crc kubenswrapper[4624]: W1008 14:25:33.354445 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc439ba4c_3583_4d35_b586_c97f525345a6.slice/crio-bb2a4bc4a71a0b56fa89976397b0f831d1c14726d67b7bb7d2e37b983c46f4bb WatchSource:0}: Error finding container bb2a4bc4a71a0b56fa89976397b0f831d1c14726d67b7bb7d2e37b983c46f4bb: Status 404 returned error can't find the container with id bb2a4bc4a71a0b56fa89976397b0f831d1c14726d67b7bb7d2e37b983c46f4bb Oct 08 14:25:33 crc kubenswrapper[4624]: I1008 14:25:33.380542 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-px79m" Oct 08 14:25:33 crc kubenswrapper[4624]: I1008 14:25:33.489571 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Oct 08 14:25:33 crc kubenswrapper[4624]: I1008 14:25:33.715301 4624 patch_prober.go:28] interesting pod/router-default-5444994796-fbw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 14:25:33 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Oct 08 14:25:33 crc kubenswrapper[4624]: [+]process-running ok Oct 08 14:25:33 crc kubenswrapper[4624]: healthz check failed Oct 08 14:25:33 crc kubenswrapper[4624]: I1008 14:25:33.715348 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fbw8l" podUID="d96a6e0b-5e82-48ad-b930-b7b82620830a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 14:25:33 crc kubenswrapper[4624]: I1008 14:25:33.795965 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" event={"ID":"c439ba4c-3583-4d35-b586-c97f525345a6","Type":"ContainerStarted","Data":"0ece3a8f03bad1ddf9c09bae2d8371c3df08b9002ff3c8dbb5a250769bc4ff18"} Oct 08 14:25:33 crc kubenswrapper[4624]: I1008 14:25:33.796007 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" event={"ID":"c439ba4c-3583-4d35-b586-c97f525345a6","Type":"ContainerStarted","Data":"bb2a4bc4a71a0b56fa89976397b0f831d1c14726d67b7bb7d2e37b983c46f4bb"} Oct 08 14:25:33 crc kubenswrapper[4624]: I1008 14:25:33.796024 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:33 crc kubenswrapper[4624]: I1008 14:25:33.800441 4624 generic.go:334] "Generic (PLEG): container finished" podID="6ac5549a-aa6e-4136-ba8f-72ea118d92e9" containerID="c53050811d2f7caab6f046ef6fb738336353f87df98c5b9aaa64a4fc88bf5149" exitCode=0 Oct 08 14:25:33 crc kubenswrapper[4624]: I1008 14:25:33.800702 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6ac5549a-aa6e-4136-ba8f-72ea118d92e9","Type":"ContainerDied","Data":"c53050811d2f7caab6f046ef6fb738336353f87df98c5b9aaa64a4fc88bf5149"} Oct 08 14:25:33 crc kubenswrapper[4624]: I1008 
14:25:33.825502 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" podStartSLOduration=137.825439275 podStartE2EDuration="2m17.825439275s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:33.821059988 +0000 UTC m=+158.971995075" watchObservedRunningTime="2025-10-08 14:25:33.825439275 +0000 UTC m=+158.976374352" Oct 08 14:25:34 crc kubenswrapper[4624]: I1008 14:25:34.494212 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 14:25:34 crc kubenswrapper[4624]: I1008 14:25:34.499488 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32878d45-6648-45f2-a1ff-773403c738ab-kubelet-dir\") pod \"32878d45-6648-45f2-a1ff-773403c738ab\" (UID: \"32878d45-6648-45f2-a1ff-773403c738ab\") " Oct 08 14:25:34 crc kubenswrapper[4624]: I1008 14:25:34.499570 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32878d45-6648-45f2-a1ff-773403c738ab-kube-api-access\") pod \"32878d45-6648-45f2-a1ff-773403c738ab\" (UID: \"32878d45-6648-45f2-a1ff-773403c738ab\") " Oct 08 14:25:34 crc kubenswrapper[4624]: I1008 14:25:34.500445 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32878d45-6648-45f2-a1ff-773403c738ab-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "32878d45-6648-45f2-a1ff-773403c738ab" (UID: "32878d45-6648-45f2-a1ff-773403c738ab"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:25:34 crc kubenswrapper[4624]: I1008 14:25:34.527617 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-wmls6" podStartSLOduration=19.52758843 podStartE2EDuration="19.52758843s" podCreationTimestamp="2025-10-08 14:25:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:25:33.860664111 +0000 UTC m=+159.011599188" watchObservedRunningTime="2025-10-08 14:25:34.52758843 +0000 UTC m=+159.678523497" Oct 08 14:25:34 crc kubenswrapper[4624]: I1008 14:25:34.529044 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32878d45-6648-45f2-a1ff-773403c738ab-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "32878d45-6648-45f2-a1ff-773403c738ab" (UID: "32878d45-6648-45f2-a1ff-773403c738ab"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:25:34 crc kubenswrapper[4624]: I1008 14:25:34.600875 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32878d45-6648-45f2-a1ff-773403c738ab-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 08 14:25:34 crc kubenswrapper[4624]: I1008 14:25:34.600911 4624 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32878d45-6648-45f2-a1ff-773403c738ab-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 08 14:25:34 crc kubenswrapper[4624]: I1008 14:25:34.716104 4624 patch_prober.go:28] interesting pod/router-default-5444994796-fbw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 14:25:34 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Oct 08 14:25:34 crc kubenswrapper[4624]: [+]process-running ok Oct 08 14:25:34 crc kubenswrapper[4624]: healthz check failed Oct 08 14:25:34 crc kubenswrapper[4624]: I1008 14:25:34.716197 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fbw8l" podUID="d96a6e0b-5e82-48ad-b930-b7b82620830a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 14:25:34 crc kubenswrapper[4624]: I1008 14:25:34.891405 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 14:25:34 crc kubenswrapper[4624]: I1008 14:25:34.896387 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"32878d45-6648-45f2-a1ff-773403c738ab","Type":"ContainerDied","Data":"1ea64497106ac0dc31545d587950c7fbaa1bde910fea7a2cafd52ec7e122baf7"} Oct 08 14:25:34 crc kubenswrapper[4624]: I1008 14:25:34.896501 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ea64497106ac0dc31545d587950c7fbaa1bde910fea7a2cafd52ec7e122baf7" Oct 08 14:25:35 crc kubenswrapper[4624]: I1008 14:25:35.300137 4624 patch_prober.go:28] interesting pod/console-f9d7485db-rgn2q container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Oct 08 14:25:35 crc kubenswrapper[4624]: I1008 14:25:35.300178 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-rgn2q" podUID="82676f42-aabb-4cee-b836-790a48dd9a2e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Oct 08 14:25:35 crc kubenswrapper[4624]: I1008 14:25:35.401164 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:25:35 crc kubenswrapper[4624]: I1008 14:25:35.436519 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 14:25:35 crc kubenswrapper[4624]: I1008 14:25:35.539188 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ac5549a-aa6e-4136-ba8f-72ea118d92e9-kubelet-dir\") pod \"6ac5549a-aa6e-4136-ba8f-72ea118d92e9\" (UID: \"6ac5549a-aa6e-4136-ba8f-72ea118d92e9\") " Oct 08 14:25:35 crc kubenswrapper[4624]: I1008 14:25:35.539321 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ac5549a-aa6e-4136-ba8f-72ea118d92e9-kube-api-access\") pod \"6ac5549a-aa6e-4136-ba8f-72ea118d92e9\" (UID: \"6ac5549a-aa6e-4136-ba8f-72ea118d92e9\") " Oct 08 14:25:35 crc kubenswrapper[4624]: I1008 14:25:35.543637 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ac5549a-aa6e-4136-ba8f-72ea118d92e9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6ac5549a-aa6e-4136-ba8f-72ea118d92e9" (UID: "6ac5549a-aa6e-4136-ba8f-72ea118d92e9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:25:35 crc kubenswrapper[4624]: I1008 14:25:35.543714 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ac5549a-aa6e-4136-ba8f-72ea118d92e9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6ac5549a-aa6e-4136-ba8f-72ea118d92e9" (UID: "6ac5549a-aa6e-4136-ba8f-72ea118d92e9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:25:35 crc kubenswrapper[4624]: I1008 14:25:35.551008 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-kl87s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Oct 08 14:25:35 crc kubenswrapper[4624]: I1008 14:25:35.551052 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kl87s" podUID="60fc8c95-75b8-4032-b407-c0b21022da37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Oct 08 14:25:35 crc kubenswrapper[4624]: I1008 14:25:35.551590 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-kl87s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Oct 08 14:25:35 crc kubenswrapper[4624]: I1008 14:25:35.551611 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-kl87s" podUID="60fc8c95-75b8-4032-b407-c0b21022da37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Oct 08 14:25:35 crc kubenswrapper[4624]: I1008 14:25:35.641106 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ac5549a-aa6e-4136-ba8f-72ea118d92e9-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 08 14:25:35 crc kubenswrapper[4624]: I1008 14:25:35.641156 4624 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ac5549a-aa6e-4136-ba8f-72ea118d92e9-kubelet-dir\") on node 
\"crc\" DevicePath \"\"" Oct 08 14:25:35 crc kubenswrapper[4624]: I1008 14:25:35.715904 4624 patch_prober.go:28] interesting pod/router-default-5444994796-fbw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 14:25:35 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Oct 08 14:25:35 crc kubenswrapper[4624]: [+]process-running ok Oct 08 14:25:35 crc kubenswrapper[4624]: healthz check failed Oct 08 14:25:35 crc kubenswrapper[4624]: I1008 14:25:35.716001 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fbw8l" podUID="d96a6e0b-5e82-48ad-b930-b7b82620830a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 14:25:35 crc kubenswrapper[4624]: I1008 14:25:35.961827 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6ac5549a-aa6e-4136-ba8f-72ea118d92e9","Type":"ContainerDied","Data":"6043566d13ac4331b052b1a0a5f167f8db14c796d54d2f0d93f003a02661bc8e"} Oct 08 14:25:35 crc kubenswrapper[4624]: I1008 14:25:35.961868 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6043566d13ac4331b052b1a0a5f167f8db14c796d54d2f0d93f003a02661bc8e" Oct 08 14:25:35 crc kubenswrapper[4624]: I1008 14:25:35.961920 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 14:25:36 crc kubenswrapper[4624]: I1008 14:25:36.716057 4624 patch_prober.go:28] interesting pod/router-default-5444994796-fbw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 14:25:36 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Oct 08 14:25:36 crc kubenswrapper[4624]: [+]process-running ok Oct 08 14:25:36 crc kubenswrapper[4624]: healthz check failed Oct 08 14:25:36 crc kubenswrapper[4624]: I1008 14:25:36.716109 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fbw8l" podUID="d96a6e0b-5e82-48ad-b930-b7b82620830a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 14:25:37 crc kubenswrapper[4624]: I1008 14:25:37.309678 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:37 crc kubenswrapper[4624]: I1008 14:25:37.316827 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-ljq2l" Oct 08 14:25:37 crc kubenswrapper[4624]: I1008 14:25:37.731433 4624 patch_prober.go:28] interesting pod/router-default-5444994796-fbw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 14:25:37 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Oct 08 14:25:37 crc kubenswrapper[4624]: [+]process-running ok Oct 08 14:25:37 crc kubenswrapper[4624]: healthz check failed Oct 08 14:25:37 crc kubenswrapper[4624]: I1008 14:25:37.731975 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fbw8l" podUID="d96a6e0b-5e82-48ad-b930-b7b82620830a" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 14:25:38 crc kubenswrapper[4624]: I1008 14:25:38.182528 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs\") pod \"network-metrics-daemon-qrmz6\" (UID: \"8abf38af-8df3-49f9-9817-b4740d2a8b4a\") " pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:25:38 crc kubenswrapper[4624]: I1008 14:25:38.202723 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8abf38af-8df3-49f9-9817-b4740d2a8b4a-metrics-certs\") pod \"network-metrics-daemon-qrmz6\" (UID: \"8abf38af-8df3-49f9-9817-b4740d2a8b4a\") " pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:25:38 crc kubenswrapper[4624]: I1008 14:25:38.277468 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qrmz6" Oct 08 14:25:38 crc kubenswrapper[4624]: I1008 14:25:38.715252 4624 patch_prober.go:28] interesting pod/router-default-5444994796-fbw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 14:25:38 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Oct 08 14:25:38 crc kubenswrapper[4624]: [+]process-running ok Oct 08 14:25:38 crc kubenswrapper[4624]: healthz check failed Oct 08 14:25:38 crc kubenswrapper[4624]: I1008 14:25:38.715311 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fbw8l" podUID="d96a6e0b-5e82-48ad-b930-b7b82620830a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 14:25:39 crc kubenswrapper[4624]: I1008 14:25:39.720490 4624 patch_prober.go:28] interesting pod/router-default-5444994796-fbw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 14:25:39 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Oct 08 14:25:39 crc kubenswrapper[4624]: [+]process-running ok Oct 08 14:25:39 crc kubenswrapper[4624]: healthz check failed Oct 08 14:25:39 crc kubenswrapper[4624]: I1008 14:25:39.720559 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fbw8l" podUID="d96a6e0b-5e82-48ad-b930-b7b82620830a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 14:25:40 crc kubenswrapper[4624]: I1008 14:25:40.715172 4624 patch_prober.go:28] interesting pod/router-default-5444994796-fbw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 14:25:40 crc kubenswrapper[4624]: [+]has-synced ok Oct 08 14:25:40 crc kubenswrapper[4624]: [+]process-running ok Oct 08 14:25:40 crc kubenswrapper[4624]: healthz check failed Oct 08 14:25:40 crc kubenswrapper[4624]: I1008 14:25:40.715469 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fbw8l" podUID="d96a6e0b-5e82-48ad-b930-b7b82620830a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 14:25:41 crc 
kubenswrapper[4624]: I1008 14:25:41.721658 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:41 crc kubenswrapper[4624]: I1008 14:25:41.725620 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-fbw8l" Oct 08 14:25:45 crc kubenswrapper[4624]: I1008 14:25:45.300977 4624 patch_prober.go:28] interesting pod/console-f9d7485db-rgn2q container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Oct 08 14:25:45 crc kubenswrapper[4624]: I1008 14:25:45.301029 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-rgn2q" podUID="82676f42-aabb-4cee-b836-790a48dd9a2e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Oct 08 14:25:45 crc kubenswrapper[4624]: I1008 14:25:45.551268 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-kl87s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Oct 08 14:25:45 crc kubenswrapper[4624]: I1008 14:25:45.551322 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kl87s" podUID="60fc8c95-75b8-4032-b407-c0b21022da37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Oct 08 14:25:45 crc kubenswrapper[4624]: I1008 14:25:45.551627 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-kl87s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Oct 08 14:25:45 crc kubenswrapper[4624]: I1008 14:25:45.551671 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-kl87s" podUID="60fc8c95-75b8-4032-b407-c0b21022da37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Oct 08 14:25:45 crc kubenswrapper[4624]: I1008 14:25:45.551696 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-kl87s" Oct 08 14:25:45 crc kubenswrapper[4624]: I1008 14:25:45.552256 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-kl87s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Oct 08 14:25:45 crc kubenswrapper[4624]: I1008 14:25:45.552289 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kl87s" podUID="60fc8c95-75b8-4032-b407-c0b21022da37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Oct 08 14:25:45 crc kubenswrapper[4624]: I1008 14:25:45.552223 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" 
containerStatusID={"Type":"cri-o","ID":"7e3f13fdf273d65ef7cd399bfd88b1c97f51bac5df1166b1194e7078b7d7771d"} pod="openshift-console/downloads-7954f5f757-kl87s" containerMessage="Container download-server failed liveness probe, will be restarted" Oct 08 14:25:45 crc kubenswrapper[4624]: I1008 14:25:45.552324 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-kl87s" podUID="60fc8c95-75b8-4032-b407-c0b21022da37" containerName="download-server" containerID="cri-o://7e3f13fdf273d65ef7cd399bfd88b1c97f51bac5df1166b1194e7078b7d7771d" gracePeriod=2 Oct 08 14:25:46 crc kubenswrapper[4624]: I1008 14:25:46.092450 4624 generic.go:334] "Generic (PLEG): container finished" podID="60fc8c95-75b8-4032-b407-c0b21022da37" containerID="7e3f13fdf273d65ef7cd399bfd88b1c97f51bac5df1166b1194e7078b7d7771d" exitCode=0 Oct 08 14:25:46 crc kubenswrapper[4624]: I1008 14:25:46.092505 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-kl87s" event={"ID":"60fc8c95-75b8-4032-b407-c0b21022da37","Type":"ContainerDied","Data":"7e3f13fdf273d65ef7cd399bfd88b1c97f51bac5df1166b1194e7078b7d7771d"} Oct 08 14:25:53 crc kubenswrapper[4624]: I1008 14:25:53.073103 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:25:55 crc kubenswrapper[4624]: I1008 14:25:55.319571 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:55 crc kubenswrapper[4624]: I1008 14:25:55.324043 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:25:55 crc kubenswrapper[4624]: I1008 14:25:55.551684 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-kl87s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Oct 08 14:25:55 crc kubenswrapper[4624]: I1008 14:25:55.552099 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kl87s" podUID="60fc8c95-75b8-4032-b407-c0b21022da37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Oct 08 14:25:58 crc kubenswrapper[4624]: I1008 14:25:58.242159 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4xxrs" Oct 08 14:26:00 crc kubenswrapper[4624]: I1008 14:26:00.076668 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:26:00 crc kubenswrapper[4624]: I1008 14:26:00.076725 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:26:03 crc kubenswrapper[4624]: I1008 14:26:03.713350 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 14:26:03 crc kubenswrapper[4624]: I1008 14:26:03.815769 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qrmz6"] Oct 08 14:26:04 crc kubenswrapper[4624]: E1008 14:26:04.858134 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 08 14:26:04 crc kubenswrapper[4624]: E1008 14:26:04.858321 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79htk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-wt4rs_openshift-marketplace(a25349b6-d167-4846-884c-9c057b5c6491): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 08 14:26:04 crc kubenswrapper[4624]: E1008 14:26:04.859856 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-wt4rs" podUID="a25349b6-d167-4846-884c-9c057b5c6491" Oct 08 14:26:05 crc kubenswrapper[4624]: I1008 14:26:05.550706 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-kl87s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Oct 08 14:26:05 crc kubenswrapper[4624]: I1008 14:26:05.550757 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kl87s" podUID="60fc8c95-75b8-4032-b407-c0b21022da37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection 
refused" Oct 08 14:26:07 crc kubenswrapper[4624]: E1008 14:26:07.371993 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-wt4rs" podUID="a25349b6-d167-4846-884c-9c057b5c6491" Oct 08 14:26:11 crc kubenswrapper[4624]: W1008 14:26:11.401469 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8abf38af_8df3_49f9_9817_b4740d2a8b4a.slice/crio-72347f712935ac58fe6473800bbaa8140d5280fd746de06d4d2f5aa7b41a0f78 WatchSource:0}: Error finding container 72347f712935ac58fe6473800bbaa8140d5280fd746de06d4d2f5aa7b41a0f78: Status 404 returned error can't find the container with id 72347f712935ac58fe6473800bbaa8140d5280fd746de06d4d2f5aa7b41a0f78 Oct 08 14:26:11 crc kubenswrapper[4624]: E1008 14:26:11.467603 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 08 14:26:11 crc kubenswrapper[4624]: E1008 14:26:11.467766 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fqf4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7mwws_openshift-marketplace(e21258b9-106f-4b15-aa2c-7e65598341c2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 08 14:26:11 crc kubenswrapper[4624]: E1008 14:26:11.469733 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-7mwws" 
podUID="e21258b9-106f-4b15-aa2c-7e65598341c2" Oct 08 14:26:11 crc kubenswrapper[4624]: E1008 14:26:11.511629 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 08 14:26:11 crc kubenswrapper[4624]: E1008 14:26:11.511811 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwvsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9jhqk_openshift-marketplace(7cd6bd92-2cce-46a2-881d-57e97e9b00bc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 08 14:26:11 crc kubenswrapper[4624]: E1008 14:26:11.513071 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-9jhqk" podUID="7cd6bd92-2cce-46a2-881d-57e97e9b00bc" Oct 08 14:26:11 crc kubenswrapper[4624]: E1008 14:26:11.514013 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 08 14:26:11 crc kubenswrapper[4624]: E1008 14:26:11.514137 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mwnrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qgsc5_openshift-marketplace(e17b3f85-b77b-4639-9993-66db89499fcf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 08 14:26:11 crc kubenswrapper[4624]: E1008 14:26:11.515293 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qgsc5" podUID="e17b3f85-b77b-4639-9993-66db89499fcf" Oct 08 14:26:11 crc kubenswrapper[4624]: E1008 14:26:11.549652 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 08 14:26:11 crc kubenswrapper[4624]: E1008 14:26:11.549848 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tq5rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mnjz6_openshift-marketplace(78f43ab9-6c12-4d67-9c33-b186ebcef93c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 08 14:26:11 crc kubenswrapper[4624]: E1008 14:26:11.551786 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-mnjz6" podUID="78f43ab9-6c12-4d67-9c33-b186ebcef93c" Oct 08 14:26:12 crc kubenswrapper[4624]: I1008 14:26:12.231794 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-kl87s" event={"ID":"60fc8c95-75b8-4032-b407-c0b21022da37","Type":"ContainerStarted","Data":"fa96195c5b58bad5a9d27264e0ac9ff6eb3998f85e11ec960f601eacfd804080"} Oct 08 14:26:12 crc kubenswrapper[4624]: I1008 14:26:12.232127 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-kl87s" Oct 08 14:26:12 crc kubenswrapper[4624]: I1008 14:26:12.232218 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-kl87s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Oct 08 14:26:12 crc kubenswrapper[4624]: I1008 14:26:12.232255 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kl87s" podUID="60fc8c95-75b8-4032-b407-c0b21022da37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Oct 08 14:26:12 crc kubenswrapper[4624]: I1008 14:26:12.233547 4624 generic.go:334] "Generic (PLEG): container finished" podID="273358a6-107e-4e31-aaff-1f825924ef2d" containerID="7decd4496383902cb7274660e8e7c248fab0afe8387c03491a4654870c2dbc46" exitCode=0 Oct 08 14:26:12 crc kubenswrapper[4624]: I1008 14:26:12.233602 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-lbx9m" event={"ID":"273358a6-107e-4e31-aaff-1f825924ef2d","Type":"ContainerDied","Data":"7decd4496383902cb7274660e8e7c248fab0afe8387c03491a4654870c2dbc46"} Oct 08 14:26:12 crc kubenswrapper[4624]: I1008 14:26:12.235176 4624 generic.go:334] "Generic (PLEG): container finished" podID="76cec727-339f-4269-b0ee-d4aec3b0d6e3" containerID="89713d2099d025df5f4ce48c27336a77c88579334df72ef7397684e9651e2f5b" exitCode=0 Oct 08 14:26:12 crc kubenswrapper[4624]: I1008 14:26:12.235236 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5npx6" event={"ID":"76cec727-339f-4269-b0ee-d4aec3b0d6e3","Type":"ContainerDied","Data":"89713d2099d025df5f4ce48c27336a77c88579334df72ef7397684e9651e2f5b"} Oct 08 14:26:12 crc kubenswrapper[4624]: I1008 14:26:12.240022 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qrmz6" event={"ID":"8abf38af-8df3-49f9-9817-b4740d2a8b4a","Type":"ContainerStarted","Data":"81ea1cf903d9e2dcb6cc65d71ca34491b8fa92c389805c4bd397f4ad8d719f94"} Oct 08 14:26:12 crc kubenswrapper[4624]: I1008 14:26:12.240045 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qrmz6" event={"ID":"8abf38af-8df3-49f9-9817-b4740d2a8b4a","Type":"ContainerStarted","Data":"7927bfbdf5ab3edffafd2875daa5743577e7f1681c8d0aa0a7124cb82b9d943f"} Oct 08 14:26:12 crc kubenswrapper[4624]: I1008 14:26:12.240057 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qrmz6" event={"ID":"8abf38af-8df3-49f9-9817-b4740d2a8b4a","Type":"ContainerStarted","Data":"72347f712935ac58fe6473800bbaa8140d5280fd746de06d4d2f5aa7b41a0f78"} Oct 08 14:26:12 crc kubenswrapper[4624]: I1008 14:26:12.245012 4624 generic.go:334] "Generic (PLEG): container finished" podID="a8da3440-b5d4-46cc-a1ea-d3e4b7f03269" containerID="1ebc565f44a44e3fbce70b17f79a4b490287fb4b1c70d5404fecb2347312fbf4" exitCode=0 Oct 08 14:26:12 crc kubenswrapper[4624]: I1008 14:26:12.245833 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sft5x" event={"ID":"a8da3440-b5d4-46cc-a1ea-d3e4b7f03269","Type":"ContainerDied","Data":"1ebc565f44a44e3fbce70b17f79a4b490287fb4b1c70d5404fecb2347312fbf4"} Oct 08 14:26:12 crc kubenswrapper[4624]: E1008 14:26:12.247969 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-mnjz6" podUID="78f43ab9-6c12-4d67-9c33-b186ebcef93c" Oct 08 14:26:12 crc kubenswrapper[4624]: E1008 14:26:12.248140 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qgsc5" podUID="e17b3f85-b77b-4639-9993-66db89499fcf" Oct 08 14:26:12 crc kubenswrapper[4624]: E1008 14:26:12.255842 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9jhqk" podUID="7cd6bd92-2cce-46a2-881d-57e97e9b00bc" Oct 08 14:26:12 crc kubenswrapper[4624]: I1008 
14:26:12.337903 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-qrmz6" podStartSLOduration=176.337885813 podStartE2EDuration="2m56.337885813s" podCreationTimestamp="2025-10-08 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:26:12.33672432 +0000 UTC m=+197.487659407" watchObservedRunningTime="2025-10-08 14:26:12.337885813 +0000 UTC m=+197.488820890" Oct 08 14:26:13 crc kubenswrapper[4624]: I1008 14:26:13.252046 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sft5x" event={"ID":"a8da3440-b5d4-46cc-a1ea-d3e4b7f03269","Type":"ContainerStarted","Data":"a7581979e5e935c28d058fbd03efea4593c12d5f35c58b7baffcd82215f48370"} Oct 08 14:26:13 crc kubenswrapper[4624]: I1008 14:26:13.254247 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbx9m" event={"ID":"273358a6-107e-4e31-aaff-1f825924ef2d","Type":"ContainerStarted","Data":"bafb8e1b3fc9ea46ef7e0f65bd021d2b02bb2dffdf9eb515fe5308b90ba6870f"} Oct 08 14:26:13 crc kubenswrapper[4624]: I1008 14:26:13.258238 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5npx6" event={"ID":"76cec727-339f-4269-b0ee-d4aec3b0d6e3","Type":"ContainerStarted","Data":"99efd5989e34a9dae3b4f6772f064452cf3a28e5cc8f639f66944e52f5c2f7c7"} Oct 08 14:26:13 crc kubenswrapper[4624]: I1008 14:26:13.259147 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-kl87s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Oct 08 14:26:13 crc kubenswrapper[4624]: I1008 14:26:13.259191 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kl87s" podUID="60fc8c95-75b8-4032-b407-c0b21022da37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Oct 08 14:26:13 crc kubenswrapper[4624]: I1008 14:26:13.275507 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sft5x" podStartSLOduration=4.327131824 podStartE2EDuration="45.272343354s" podCreationTimestamp="2025-10-08 14:25:28 +0000 UTC" firstStartedPulling="2025-10-08 14:25:31.723558742 +0000 UTC m=+156.874493809" lastFinishedPulling="2025-10-08 14:26:12.668770262 +0000 UTC m=+197.819705339" observedRunningTime="2025-10-08 14:26:13.27118104 +0000 UTC m=+198.422116117" watchObservedRunningTime="2025-10-08 14:26:13.272343354 +0000 UTC m=+198.423278431" Oct 08 14:26:14 crc kubenswrapper[4624]: I1008 14:26:14.283885 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lbx9m" podStartSLOduration=6.141774707 podStartE2EDuration="48.283838956s" podCreationTimestamp="2025-10-08 14:25:26 +0000 UTC" firstStartedPulling="2025-10-08 14:25:30.679048178 +0000 UTC m=+155.829983255" lastFinishedPulling="2025-10-08 14:26:12.821112417 +0000 UTC m=+197.972047504" observedRunningTime="2025-10-08 14:26:13.287835193 +0000 UTC m=+198.438770270" watchObservedRunningTime="2025-10-08 14:26:14.283838956 +0000 UTC m=+199.434774033" Oct 08 14:26:14 crc kubenswrapper[4624]: I1008 14:26:14.285049 4624 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5npx6" podStartSLOduration=6.198762758 podStartE2EDuration="47.285043581s" podCreationTimestamp="2025-10-08 14:25:27 +0000 UTC" firstStartedPulling="2025-10-08 14:25:31.730312773 +0000 UTC m=+156.881247850" lastFinishedPulling="2025-10-08 14:26:12.816593596 +0000 UTC m=+197.967528673" observedRunningTime="2025-10-08 14:26:14.282979712 +0000 UTC m=+199.433914789" watchObservedRunningTime="2025-10-08 14:26:14.285043581 +0000 UTC m=+199.435978668" Oct 08 14:26:15 crc kubenswrapper[4624]: I1008 14:26:15.551075 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-kl87s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Oct 08 14:26:15 crc kubenswrapper[4624]: I1008 14:26:15.551111 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-kl87s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Oct 08 14:26:15 crc kubenswrapper[4624]: I1008 14:26:15.551801 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-kl87s" podUID="60fc8c95-75b8-4032-b407-c0b21022da37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Oct 08 14:26:15 crc kubenswrapper[4624]: I1008 14:26:15.551866 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-kl87s" podUID="60fc8c95-75b8-4032-b407-c0b21022da37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Oct 08 14:26:16 crc kubenswrapper[4624]: I1008 14:26:16.604101 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lbx9m" Oct 08 14:26:16 crc kubenswrapper[4624]: I1008 14:26:16.604153 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lbx9m" Oct 08 14:26:16 crc kubenswrapper[4624]: I1008 14:26:16.992558 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lbx9m" Oct 08 14:26:18 crc kubenswrapper[4624]: I1008 14:26:18.328536 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5npx6" Oct 08 14:26:18 crc kubenswrapper[4624]: I1008 14:26:18.328680 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5npx6" Oct 08 14:26:18 crc kubenswrapper[4624]: I1008 14:26:18.363883 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5npx6" Oct 08 14:26:18 crc kubenswrapper[4624]: I1008 14:26:18.593549 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sft5x" Oct 08 14:26:18 crc kubenswrapper[4624]: I1008 14:26:18.593688 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sft5x" Oct 08 14:26:18 crc kubenswrapper[4624]: I1008 14:26:18.635261 4624 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sft5x" Oct 08 14:26:19 crc kubenswrapper[4624]: I1008 14:26:19.324214 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sft5x" Oct 08 14:26:19 crc kubenswrapper[4624]: I1008 14:26:19.329221 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5npx6" Oct 08 14:26:20 crc kubenswrapper[4624]: I1008 14:26:20.475224 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sft5x"] Oct 08 14:26:22 crc kubenswrapper[4624]: I1008 14:26:22.304411 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sft5x" podUID="a8da3440-b5d4-46cc-a1ea-d3e4b7f03269" containerName="registry-server" containerID="cri-o://a7581979e5e935c28d058fbd03efea4593c12d5f35c58b7baffcd82215f48370" gracePeriod=2 Oct 08 14:26:22 crc kubenswrapper[4624]: I1008 14:26:22.962512 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sft5x" Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.070065 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8da3440-b5d4-46cc-a1ea-d3e4b7f03269-utilities\") pod \"a8da3440-b5d4-46cc-a1ea-d3e4b7f03269\" (UID: \"a8da3440-b5d4-46cc-a1ea-d3e4b7f03269\") " Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.070136 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7pn2\" (UniqueName: \"kubernetes.io/projected/a8da3440-b5d4-46cc-a1ea-d3e4b7f03269-kube-api-access-r7pn2\") pod \"a8da3440-b5d4-46cc-a1ea-d3e4b7f03269\" (UID: \"a8da3440-b5d4-46cc-a1ea-d3e4b7f03269\") " Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.070198 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8da3440-b5d4-46cc-a1ea-d3e4b7f03269-catalog-content\") pod \"a8da3440-b5d4-46cc-a1ea-d3e4b7f03269\" (UID: \"a8da3440-b5d4-46cc-a1ea-d3e4b7f03269\") " Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.073850 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8da3440-b5d4-46cc-a1ea-d3e4b7f03269-utilities" (OuterVolumeSpecName: "utilities") pod "a8da3440-b5d4-46cc-a1ea-d3e4b7f03269" (UID: "a8da3440-b5d4-46cc-a1ea-d3e4b7f03269"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.079903 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8da3440-b5d4-46cc-a1ea-d3e4b7f03269-kube-api-access-r7pn2" (OuterVolumeSpecName: "kube-api-access-r7pn2") pod "a8da3440-b5d4-46cc-a1ea-d3e4b7f03269" (UID: "a8da3440-b5d4-46cc-a1ea-d3e4b7f03269"). InnerVolumeSpecName "kube-api-access-r7pn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.084583 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8da3440-b5d4-46cc-a1ea-d3e4b7f03269-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8da3440-b5d4-46cc-a1ea-d3e4b7f03269" (UID: "a8da3440-b5d4-46cc-a1ea-d3e4b7f03269"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.171909 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7pn2\" (UniqueName: \"kubernetes.io/projected/a8da3440-b5d4-46cc-a1ea-d3e4b7f03269-kube-api-access-r7pn2\") on node \"crc\" DevicePath \"\"" Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.172182 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8da3440-b5d4-46cc-a1ea-d3e4b7f03269-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.172254 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8da3440-b5d4-46cc-a1ea-d3e4b7f03269-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.311984 4624 generic.go:334] "Generic (PLEG): container finished" podID="a25349b6-d167-4846-884c-9c057b5c6491" containerID="f3069e8e9af689137eab6f5b86ee3b5dcbc7ac3078096d37cc6b681f33f4b563" exitCode=0 Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.312051 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wt4rs" event={"ID":"a25349b6-d167-4846-884c-9c057b5c6491","Type":"ContainerDied","Data":"f3069e8e9af689137eab6f5b86ee3b5dcbc7ac3078096d37cc6b681f33f4b563"} Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.316702 4624 generic.go:334] "Generic (PLEG): container finished" podID="a8da3440-b5d4-46cc-a1ea-d3e4b7f03269" containerID="a7581979e5e935c28d058fbd03efea4593c12d5f35c58b7baffcd82215f48370" exitCode=0 Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.316826 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sft5x" event={"ID":"a8da3440-b5d4-46cc-a1ea-d3e4b7f03269","Type":"ContainerDied","Data":"a7581979e5e935c28d058fbd03efea4593c12d5f35c58b7baffcd82215f48370"} Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.317088 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sft5x" event={"ID":"a8da3440-b5d4-46cc-a1ea-d3e4b7f03269","Type":"ContainerDied","Data":"bafb6ae460710927160355ec68a77fcf47c2dd770bbd830f6039c30aa2390451"} Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.317188 4624 scope.go:117] "RemoveContainer" containerID="a7581979e5e935c28d058fbd03efea4593c12d5f35c58b7baffcd82215f48370" Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.316923 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sft5x" Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.358326 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sft5x"] Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.361980 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sft5x"] Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.365118 4624 scope.go:117] "RemoveContainer" containerID="1ebc565f44a44e3fbce70b17f79a4b490287fb4b1c70d5404fecb2347312fbf4" Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.380896 4624 scope.go:117] "RemoveContainer" containerID="7105753a4b4d5202c13a4ac6ba50af22f6e04665245c83b9ce4d6eec22c7cb3b" Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.395089 4624 scope.go:117] "RemoveContainer" containerID="a7581979e5e935c28d058fbd03efea4593c12d5f35c58b7baffcd82215f48370" Oct 08 14:26:23 crc kubenswrapper[4624]: E1008 14:26:23.395756 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7581979e5e935c28d058fbd03efea4593c12d5f35c58b7baffcd82215f48370\": container with ID starting with a7581979e5e935c28d058fbd03efea4593c12d5f35c58b7baffcd82215f48370 not found: ID does not exist" containerID="a7581979e5e935c28d058fbd03efea4593c12d5f35c58b7baffcd82215f48370" Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.395802 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7581979e5e935c28d058fbd03efea4593c12d5f35c58b7baffcd82215f48370"} err="failed to get container status \"a7581979e5e935c28d058fbd03efea4593c12d5f35c58b7baffcd82215f48370\": rpc error: code = NotFound desc = could not find container \"a7581979e5e935c28d058fbd03efea4593c12d5f35c58b7baffcd82215f48370\": container with ID starting with a7581979e5e935c28d058fbd03efea4593c12d5f35c58b7baffcd82215f48370 not found: ID does not exist" Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.395833 4624 scope.go:117] "RemoveContainer" containerID="1ebc565f44a44e3fbce70b17f79a4b490287fb4b1c70d5404fecb2347312fbf4" Oct 08 14:26:23 crc kubenswrapper[4624]: E1008 14:26:23.396265 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ebc565f44a44e3fbce70b17f79a4b490287fb4b1c70d5404fecb2347312fbf4\": container with ID starting with 1ebc565f44a44e3fbce70b17f79a4b490287fb4b1c70d5404fecb2347312fbf4 not found: ID does not exist" containerID="1ebc565f44a44e3fbce70b17f79a4b490287fb4b1c70d5404fecb2347312fbf4" Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.396291 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ebc565f44a44e3fbce70b17f79a4b490287fb4b1c70d5404fecb2347312fbf4"} err="failed to get container status \"1ebc565f44a44e3fbce70b17f79a4b490287fb4b1c70d5404fecb2347312fbf4\": rpc error: code = NotFound desc = could not find container \"1ebc565f44a44e3fbce70b17f79a4b490287fb4b1c70d5404fecb2347312fbf4\": container with ID starting with 1ebc565f44a44e3fbce70b17f79a4b490287fb4b1c70d5404fecb2347312fbf4 not found: ID does not exist" Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.396308 4624 scope.go:117] "RemoveContainer" containerID="7105753a4b4d5202c13a4ac6ba50af22f6e04665245c83b9ce4d6eec22c7cb3b" Oct 08 14:26:23 crc kubenswrapper[4624]: E1008 14:26:23.396767 4624 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7105753a4b4d5202c13a4ac6ba50af22f6e04665245c83b9ce4d6eec22c7cb3b\": container with ID starting with 7105753a4b4d5202c13a4ac6ba50af22f6e04665245c83b9ce4d6eec22c7cb3b not found: ID does not exist" containerID="7105753a4b4d5202c13a4ac6ba50af22f6e04665245c83b9ce4d6eec22c7cb3b" Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.396823 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7105753a4b4d5202c13a4ac6ba50af22f6e04665245c83b9ce4d6eec22c7cb3b"} err="failed to get container status \"7105753a4b4d5202c13a4ac6ba50af22f6e04665245c83b9ce4d6eec22c7cb3b\": rpc error: code = NotFound desc = could not find container \"7105753a4b4d5202c13a4ac6ba50af22f6e04665245c83b9ce4d6eec22c7cb3b\": container with ID starting with 7105753a4b4d5202c13a4ac6ba50af22f6e04665245c83b9ce4d6eec22c7cb3b not found: ID does not exist" Oct 08 14:26:23 crc kubenswrapper[4624]: I1008 14:26:23.474656 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8da3440-b5d4-46cc-a1ea-d3e4b7f03269" path="/var/lib/kubelet/pods/a8da3440-b5d4-46cc-a1ea-d3e4b7f03269/volumes" Oct 08 14:26:24 crc kubenswrapper[4624]: I1008 14:26:24.324602 4624 generic.go:334] "Generic (PLEG): container finished" podID="7cd6bd92-2cce-46a2-881d-57e97e9b00bc" containerID="d6aca3e51a725ec0f94f50068eac27b17f305f9df92090ddb01c8ddbe55a0b06" exitCode=0 Oct 08 14:26:24 crc kubenswrapper[4624]: I1008 14:26:24.324662 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jhqk" event={"ID":"7cd6bd92-2cce-46a2-881d-57e97e9b00bc","Type":"ContainerDied","Data":"d6aca3e51a725ec0f94f50068eac27b17f305f9df92090ddb01c8ddbe55a0b06"} Oct 08 14:26:24 crc kubenswrapper[4624]: I1008 14:26:24.330394 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wt4rs" event={"ID":"a25349b6-d167-4846-884c-9c057b5c6491","Type":"ContainerStarted","Data":"857becc4ab790c5f1a90b273946c23a88c930d2b2e9643589dc4452c0205f8c7"} Oct 08 14:26:24 crc kubenswrapper[4624]: I1008 14:26:24.365552 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wt4rs" podStartSLOduration=5.182222037 podStartE2EDuration="59.365525934s" podCreationTimestamp="2025-10-08 14:25:25 +0000 UTC" firstStartedPulling="2025-10-08 14:25:29.60157514 +0000 UTC m=+154.752510217" lastFinishedPulling="2025-10-08 14:26:23.784879037 +0000 UTC m=+208.935814114" observedRunningTime="2025-10-08 14:26:24.36329761 +0000 UTC m=+209.514232707" watchObservedRunningTime="2025-10-08 14:26:24.365525934 +0000 UTC m=+209.516461011" Oct 08 14:26:25 crc kubenswrapper[4624]: I1008 14:26:25.338760 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jhqk" event={"ID":"7cd6bd92-2cce-46a2-881d-57e97e9b00bc","Type":"ContainerStarted","Data":"01a6a6891df03e20b7abc5953d18a5d523691f4872e4f814c581a7e38aba3941"} Oct 08 14:26:25 crc kubenswrapper[4624]: I1008 14:26:25.340166 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mwws" event={"ID":"e21258b9-106f-4b15-aa2c-7e65598341c2","Type":"ContainerStarted","Data":"2d8293d83c301a9b80190af6cf1c78fcadc2b004216bece85fe910aebc555a58"} Oct 08 14:26:25 crc kubenswrapper[4624]: I1008 14:26:25.379064 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9jhqk" 
podStartSLOduration=4.39627798 podStartE2EDuration="56.379044935s" podCreationTimestamp="2025-10-08 14:25:29 +0000 UTC" firstStartedPulling="2025-10-08 14:25:32.750106964 +0000 UTC m=+157.901042041" lastFinishedPulling="2025-10-08 14:26:24.732873909 +0000 UTC m=+209.883808996" observedRunningTime="2025-10-08 14:26:25.357682346 +0000 UTC m=+210.508617443" watchObservedRunningTime="2025-10-08 14:26:25.379044935 +0000 UTC m=+210.529980002" Oct 08 14:26:25 crc kubenswrapper[4624]: I1008 14:26:25.555720 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-kl87s" Oct 08 14:26:26 crc kubenswrapper[4624]: I1008 14:26:26.014395 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wt4rs" Oct 08 14:26:26 crc kubenswrapper[4624]: I1008 14:26:26.014449 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wt4rs" Oct 08 14:26:26 crc kubenswrapper[4624]: I1008 14:26:26.051799 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wt4rs" Oct 08 14:26:26 crc kubenswrapper[4624]: I1008 14:26:26.348869 4624 generic.go:334] "Generic (PLEG): container finished" podID="e21258b9-106f-4b15-aa2c-7e65598341c2" containerID="2d8293d83c301a9b80190af6cf1c78fcadc2b004216bece85fe910aebc555a58" exitCode=0 Oct 08 14:26:26 crc kubenswrapper[4624]: I1008 14:26:26.348937 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mwws" event={"ID":"e21258b9-106f-4b15-aa2c-7e65598341c2","Type":"ContainerDied","Data":"2d8293d83c301a9b80190af6cf1c78fcadc2b004216bece85fe910aebc555a58"} Oct 08 14:26:26 crc kubenswrapper[4624]: I1008 14:26:26.642250 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lbx9m" Oct 08 14:26:27 crc kubenswrapper[4624]: E1008 14:26:27.234390 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78f43ab9_6c12_4d67_9c33_b186ebcef93c.slice/crio-7f6422f13f60736704c9faa9db148924fbe216d9bd09ebbbe224b817830d6d20.scope\": RecentStats: unable to find data in memory cache]" Oct 08 14:26:27 crc kubenswrapper[4624]: I1008 14:26:27.357544 4624 generic.go:334] "Generic (PLEG): container finished" podID="78f43ab9-6c12-4d67-9c33-b186ebcef93c" containerID="7f6422f13f60736704c9faa9db148924fbe216d9bd09ebbbe224b817830d6d20" exitCode=0 Oct 08 14:26:27 crc kubenswrapper[4624]: I1008 14:26:27.357593 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mnjz6" event={"ID":"78f43ab9-6c12-4d67-9c33-b186ebcef93c","Type":"ContainerDied","Data":"7f6422f13f60736704c9faa9db148924fbe216d9bd09ebbbe224b817830d6d20"} Oct 08 14:26:28 crc kubenswrapper[4624]: I1008 14:26:28.365770 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mwws" event={"ID":"e21258b9-106f-4b15-aa2c-7e65598341c2","Type":"ContainerStarted","Data":"4250685d2388500ae6c7cb5a736021d82d74c900b0e112026edaf6d10c99182d"} Oct 08 14:26:28 crc kubenswrapper[4624]: I1008 14:26:28.389152 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7mwws" podStartSLOduration=6.088205921 podStartE2EDuration="1m0.389130277s" 
podCreationTimestamp="2025-10-08 14:25:28 +0000 UTC" firstStartedPulling="2025-10-08 14:25:32.750263918 +0000 UTC m=+157.901198995" lastFinishedPulling="2025-10-08 14:26:27.051188264 +0000 UTC m=+212.202123351" observedRunningTime="2025-10-08 14:26:28.386357737 +0000 UTC m=+213.537292824" watchObservedRunningTime="2025-10-08 14:26:28.389130277 +0000 UTC m=+213.540065354" Oct 08 14:26:29 crc kubenswrapper[4624]: I1008 14:26:29.072351 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lbx9m"] Oct 08 14:26:29 crc kubenswrapper[4624]: I1008 14:26:29.072568 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lbx9m" podUID="273358a6-107e-4e31-aaff-1f825924ef2d" containerName="registry-server" containerID="cri-o://bafb8e1b3fc9ea46ef7e0f65bd021d2b02bb2dffdf9eb515fe5308b90ba6870f" gracePeriod=2 Oct 08 14:26:29 crc kubenswrapper[4624]: I1008 14:26:29.246052 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7mwws" Oct 08 14:26:29 crc kubenswrapper[4624]: I1008 14:26:29.246095 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7mwws" Oct 08 14:26:29 crc kubenswrapper[4624]: I1008 14:26:29.658879 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9jhqk" Oct 08 14:26:29 crc kubenswrapper[4624]: I1008 14:26:29.659979 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9jhqk" Oct 08 14:26:29 crc kubenswrapper[4624]: I1008 14:26:29.695925 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9jhqk" Oct 08 14:26:30 crc kubenswrapper[4624]: I1008 14:26:30.076309 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:26:30 crc kubenswrapper[4624]: I1008 14:26:30.076364 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:26:30 crc kubenswrapper[4624]: I1008 14:26:30.076403 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:26:30 crc kubenswrapper[4624]: I1008 14:26:30.076913 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 14:26:30 crc kubenswrapper[4624]: I1008 14:26:30.076995 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" 
containerID="cri-o://6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e" gracePeriod=600 Oct 08 14:26:30 crc kubenswrapper[4624]: I1008 14:26:30.280427 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7mwws" podUID="e21258b9-106f-4b15-aa2c-7e65598341c2" containerName="registry-server" probeResult="failure" output=< Oct 08 14:26:30 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 14:26:30 crc kubenswrapper[4624]: > Oct 08 14:26:30 crc kubenswrapper[4624]: I1008 14:26:30.377231 4624 generic.go:334] "Generic (PLEG): container finished" podID="273358a6-107e-4e31-aaff-1f825924ef2d" containerID="bafb8e1b3fc9ea46ef7e0f65bd021d2b02bb2dffdf9eb515fe5308b90ba6870f" exitCode=0 Oct 08 14:26:30 crc kubenswrapper[4624]: I1008 14:26:30.377312 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbx9m" event={"ID":"273358a6-107e-4e31-aaff-1f825924ef2d","Type":"ContainerDied","Data":"bafb8e1b3fc9ea46ef7e0f65bd021d2b02bb2dffdf9eb515fe5308b90ba6870f"} Oct 08 14:26:30 crc kubenswrapper[4624]: I1008 14:26:30.413788 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9jhqk" Oct 08 14:26:30 crc kubenswrapper[4624]: I1008 14:26:30.962765 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lbx9m" Oct 08 14:26:31 crc kubenswrapper[4624]: I1008 14:26:31.101997 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/273358a6-107e-4e31-aaff-1f825924ef2d-utilities\") pod \"273358a6-107e-4e31-aaff-1f825924ef2d\" (UID: \"273358a6-107e-4e31-aaff-1f825924ef2d\") " Oct 08 14:26:31 crc kubenswrapper[4624]: I1008 14:26:31.102424 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwzzb\" (UniqueName: \"kubernetes.io/projected/273358a6-107e-4e31-aaff-1f825924ef2d-kube-api-access-fwzzb\") pod \"273358a6-107e-4e31-aaff-1f825924ef2d\" (UID: \"273358a6-107e-4e31-aaff-1f825924ef2d\") " Oct 08 14:26:31 crc kubenswrapper[4624]: I1008 14:26:31.102479 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/273358a6-107e-4e31-aaff-1f825924ef2d-catalog-content\") pod \"273358a6-107e-4e31-aaff-1f825924ef2d\" (UID: \"273358a6-107e-4e31-aaff-1f825924ef2d\") " Oct 08 14:26:31 crc kubenswrapper[4624]: I1008 14:26:31.102714 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/273358a6-107e-4e31-aaff-1f825924ef2d-utilities" (OuterVolumeSpecName: "utilities") pod "273358a6-107e-4e31-aaff-1f825924ef2d" (UID: "273358a6-107e-4e31-aaff-1f825924ef2d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:26:31 crc kubenswrapper[4624]: I1008 14:26:31.109852 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/273358a6-107e-4e31-aaff-1f825924ef2d-kube-api-access-fwzzb" (OuterVolumeSpecName: "kube-api-access-fwzzb") pod "273358a6-107e-4e31-aaff-1f825924ef2d" (UID: "273358a6-107e-4e31-aaff-1f825924ef2d"). InnerVolumeSpecName "kube-api-access-fwzzb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:26:31 crc kubenswrapper[4624]: I1008 14:26:31.151897 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/273358a6-107e-4e31-aaff-1f825924ef2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "273358a6-107e-4e31-aaff-1f825924ef2d" (UID: "273358a6-107e-4e31-aaff-1f825924ef2d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:26:31 crc kubenswrapper[4624]: I1008 14:26:31.205009 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/273358a6-107e-4e31-aaff-1f825924ef2d-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:26:31 crc kubenswrapper[4624]: I1008 14:26:31.205047 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwzzb\" (UniqueName: \"kubernetes.io/projected/273358a6-107e-4e31-aaff-1f825924ef2d-kube-api-access-fwzzb\") on node \"crc\" DevicePath \"\"" Oct 08 14:26:31 crc kubenswrapper[4624]: I1008 14:26:31.205059 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/273358a6-107e-4e31-aaff-1f825924ef2d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:26:31 crc kubenswrapper[4624]: I1008 14:26:31.384001 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbx9m" event={"ID":"273358a6-107e-4e31-aaff-1f825924ef2d","Type":"ContainerDied","Data":"fe73e23ce89f2a44b9452c93b05e8131c0fdfe473a7cdce6d9bb644f6530039f"} Oct 08 14:26:31 crc kubenswrapper[4624]: I1008 14:26:31.384043 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lbx9m" Oct 08 14:26:31 crc kubenswrapper[4624]: I1008 14:26:31.384061 4624 scope.go:117] "RemoveContainer" containerID="bafb8e1b3fc9ea46ef7e0f65bd021d2b02bb2dffdf9eb515fe5308b90ba6870f" Oct 08 14:26:31 crc kubenswrapper[4624]: I1008 14:26:31.389370 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e" exitCode=0 Oct 08 14:26:31 crc kubenswrapper[4624]: I1008 14:26:31.389442 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e"} Oct 08 14:26:31 crc kubenswrapper[4624]: I1008 14:26:31.413216 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lbx9m"] Oct 08 14:26:31 crc kubenswrapper[4624]: I1008 14:26:31.417195 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lbx9m"] Oct 08 14:26:31 crc kubenswrapper[4624]: I1008 14:26:31.472434 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="273358a6-107e-4e31-aaff-1f825924ef2d" path="/var/lib/kubelet/pods/273358a6-107e-4e31-aaff-1f825924ef2d/volumes" Oct 08 14:26:31 crc kubenswrapper[4624]: I1008 14:26:31.476539 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9jhqk"] Oct 08 14:26:32 crc kubenswrapper[4624]: I1008 14:26:32.597436 4624 scope.go:117] "RemoveContainer" containerID="7decd4496383902cb7274660e8e7c248fab0afe8387c03491a4654870c2dbc46" Oct 08 
14:26:33 crc kubenswrapper[4624]: I1008 14:26:33.400774 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9jhqk" podUID="7cd6bd92-2cce-46a2-881d-57e97e9b00bc" containerName="registry-server" containerID="cri-o://01a6a6891df03e20b7abc5953d18a5d523691f4872e4f814c581a7e38aba3941" gracePeriod=2 Oct 08 14:26:34 crc kubenswrapper[4624]: I1008 14:26:34.099257 4624 scope.go:117] "RemoveContainer" containerID="753c125f23dc2127485d70f25c7c8260a15723bc0f447ae28827be4255dae2d0" Oct 08 14:26:35 crc kubenswrapper[4624]: I1008 14:26:35.413223 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"d91f9aebe012c6850541ed64b094784a1cd737f36d0fb5aa87455cec8f6bdb54"} Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.064141 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wt4rs" Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.333187 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9jhqk" Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.419446 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mnjz6" event={"ID":"78f43ab9-6c12-4d67-9c33-b186ebcef93c","Type":"ContainerStarted","Data":"5cb57667a0c497f3babb4f09d750db93dab0d1fefc7355aec543824d85d9329a"} Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.422753 4624 generic.go:334] "Generic (PLEG): container finished" podID="e17b3f85-b77b-4639-9993-66db89499fcf" containerID="630bada73773c1413064690c957ec3da7f39d7decbfcbd904a5ea3a9a6a5b2fa" exitCode=0 Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.422826 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qgsc5" event={"ID":"e17b3f85-b77b-4639-9993-66db89499fcf","Type":"ContainerDied","Data":"630bada73773c1413064690c957ec3da7f39d7decbfcbd904a5ea3a9a6a5b2fa"} Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.425914 4624 generic.go:334] "Generic (PLEG): container finished" podID="7cd6bd92-2cce-46a2-881d-57e97e9b00bc" containerID="01a6a6891df03e20b7abc5953d18a5d523691f4872e4f814c581a7e38aba3941" exitCode=0 Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.426197 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9jhqk" Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.426225 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jhqk" event={"ID":"7cd6bd92-2cce-46a2-881d-57e97e9b00bc","Type":"ContainerDied","Data":"01a6a6891df03e20b7abc5953d18a5d523691f4872e4f814c581a7e38aba3941"} Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.426269 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jhqk" event={"ID":"7cd6bd92-2cce-46a2-881d-57e97e9b00bc","Type":"ContainerDied","Data":"5ed6019b779331875019f4f2ade036a3f5931593788cb8a19c2adff12efa294c"} Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.426297 4624 scope.go:117] "RemoveContainer" containerID="01a6a6891df03e20b7abc5953d18a5d523691f4872e4f814c581a7e38aba3941" Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.445133 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mnjz6" podStartSLOduration=6.9693644809999995 podStartE2EDuration="1m11.4451148s" podCreationTimestamp="2025-10-08 14:25:25 +0000 UTC" firstStartedPulling="2025-10-08 14:25:29.62358018 +0000 UTC m=+154.774515257" lastFinishedPulling="2025-10-08 14:26:34.099330499 +0000 UTC m=+219.250265576" observedRunningTime="2025-10-08 14:26:36.440502476 +0000 UTC m=+221.591437563" watchObservedRunningTime="2025-10-08 14:26:36.4451148 +0000 UTC m=+221.596049877" Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.448524 4624 scope.go:117] "RemoveContainer" containerID="d6aca3e51a725ec0f94f50068eac27b17f305f9df92090ddb01c8ddbe55a0b06" Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.464597 4624 scope.go:117] "RemoveContainer" containerID="72ed6bfa3173c450433e831ce177db22a9b58ac9b068d0433e974d0d6d41b110" Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.469518 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cd6bd92-2cce-46a2-881d-57e97e9b00bc-utilities\") pod \"7cd6bd92-2cce-46a2-881d-57e97e9b00bc\" (UID: \"7cd6bd92-2cce-46a2-881d-57e97e9b00bc\") " Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.469584 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwvsh\" (UniqueName: \"kubernetes.io/projected/7cd6bd92-2cce-46a2-881d-57e97e9b00bc-kube-api-access-lwvsh\") pod \"7cd6bd92-2cce-46a2-881d-57e97e9b00bc\" (UID: \"7cd6bd92-2cce-46a2-881d-57e97e9b00bc\") " Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.469628 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cd6bd92-2cce-46a2-881d-57e97e9b00bc-catalog-content\") pod \"7cd6bd92-2cce-46a2-881d-57e97e9b00bc\" (UID: \"7cd6bd92-2cce-46a2-881d-57e97e9b00bc\") " Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.474878 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cd6bd92-2cce-46a2-881d-57e97e9b00bc-utilities" (OuterVolumeSpecName: "utilities") pod "7cd6bd92-2cce-46a2-881d-57e97e9b00bc" (UID: "7cd6bd92-2cce-46a2-881d-57e97e9b00bc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.480171 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cd6bd92-2cce-46a2-881d-57e97e9b00bc-kube-api-access-lwvsh" (OuterVolumeSpecName: "kube-api-access-lwvsh") pod "7cd6bd92-2cce-46a2-881d-57e97e9b00bc" (UID: "7cd6bd92-2cce-46a2-881d-57e97e9b00bc"). InnerVolumeSpecName "kube-api-access-lwvsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.486766 4624 scope.go:117] "RemoveContainer" containerID="01a6a6891df03e20b7abc5953d18a5d523691f4872e4f814c581a7e38aba3941" Oct 08 14:26:36 crc kubenswrapper[4624]: E1008 14:26:36.488557 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01a6a6891df03e20b7abc5953d18a5d523691f4872e4f814c581a7e38aba3941\": container with ID starting with 01a6a6891df03e20b7abc5953d18a5d523691f4872e4f814c581a7e38aba3941 not found: ID does not exist" containerID="01a6a6891df03e20b7abc5953d18a5d523691f4872e4f814c581a7e38aba3941" Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.488583 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01a6a6891df03e20b7abc5953d18a5d523691f4872e4f814c581a7e38aba3941"} err="failed to get container status \"01a6a6891df03e20b7abc5953d18a5d523691f4872e4f814c581a7e38aba3941\": rpc error: code = NotFound desc = could not find container \"01a6a6891df03e20b7abc5953d18a5d523691f4872e4f814c581a7e38aba3941\": container with ID starting with 01a6a6891df03e20b7abc5953d18a5d523691f4872e4f814c581a7e38aba3941 not found: ID does not exist" Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.488602 4624 scope.go:117] "RemoveContainer" containerID="d6aca3e51a725ec0f94f50068eac27b17f305f9df92090ddb01c8ddbe55a0b06" Oct 08 14:26:36 crc kubenswrapper[4624]: E1008 14:26:36.488853 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6aca3e51a725ec0f94f50068eac27b17f305f9df92090ddb01c8ddbe55a0b06\": container with ID starting with d6aca3e51a725ec0f94f50068eac27b17f305f9df92090ddb01c8ddbe55a0b06 not found: ID does not exist" containerID="d6aca3e51a725ec0f94f50068eac27b17f305f9df92090ddb01c8ddbe55a0b06" Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.488871 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6aca3e51a725ec0f94f50068eac27b17f305f9df92090ddb01c8ddbe55a0b06"} err="failed to get container status \"d6aca3e51a725ec0f94f50068eac27b17f305f9df92090ddb01c8ddbe55a0b06\": rpc error: code = NotFound desc = could not find container \"d6aca3e51a725ec0f94f50068eac27b17f305f9df92090ddb01c8ddbe55a0b06\": container with ID starting with d6aca3e51a725ec0f94f50068eac27b17f305f9df92090ddb01c8ddbe55a0b06 not found: ID does not exist" Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.489020 4624 scope.go:117] "RemoveContainer" containerID="72ed6bfa3173c450433e831ce177db22a9b58ac9b068d0433e974d0d6d41b110" Oct 08 14:26:36 crc kubenswrapper[4624]: E1008 14:26:36.489312 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72ed6bfa3173c450433e831ce177db22a9b58ac9b068d0433e974d0d6d41b110\": container with ID starting with 72ed6bfa3173c450433e831ce177db22a9b58ac9b068d0433e974d0d6d41b110 not found: ID does not 
exist" containerID="72ed6bfa3173c450433e831ce177db22a9b58ac9b068d0433e974d0d6d41b110" Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.489360 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72ed6bfa3173c450433e831ce177db22a9b58ac9b068d0433e974d0d6d41b110"} err="failed to get container status \"72ed6bfa3173c450433e831ce177db22a9b58ac9b068d0433e974d0d6d41b110\": rpc error: code = NotFound desc = could not find container \"72ed6bfa3173c450433e831ce177db22a9b58ac9b068d0433e974d0d6d41b110\": container with ID starting with 72ed6bfa3173c450433e831ce177db22a9b58ac9b068d0433e974d0d6d41b110 not found: ID does not exist" Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.571677 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cd6bd92-2cce-46a2-881d-57e97e9b00bc-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.571712 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwvsh\" (UniqueName: \"kubernetes.io/projected/7cd6bd92-2cce-46a2-881d-57e97e9b00bc-kube-api-access-lwvsh\") on node \"crc\" DevicePath \"\"" Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.581580 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cd6bd92-2cce-46a2-881d-57e97e9b00bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7cd6bd92-2cce-46a2-881d-57e97e9b00bc" (UID: "7cd6bd92-2cce-46a2-881d-57e97e9b00bc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.672425 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cd6bd92-2cce-46a2-881d-57e97e9b00bc-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.752940 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9jhqk"] Oct 08 14:26:36 crc kubenswrapper[4624]: I1008 14:26:36.773805 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9jhqk"] Oct 08 14:26:37 crc kubenswrapper[4624]: I1008 14:26:37.434218 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qgsc5" event={"ID":"e17b3f85-b77b-4639-9993-66db89499fcf","Type":"ContainerStarted","Data":"98622fa4911dfcd24111a80a42c721118846f07075488456df8d2c2d7d2cebc6"} Oct 08 14:26:37 crc kubenswrapper[4624]: I1008 14:26:37.457071 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qgsc5" podStartSLOduration=4.026423073 podStartE2EDuration="1m11.457052786s" podCreationTimestamp="2025-10-08 14:25:26 +0000 UTC" firstStartedPulling="2025-10-08 14:25:29.643190987 +0000 UTC m=+154.794126054" lastFinishedPulling="2025-10-08 14:26:37.07382069 +0000 UTC m=+222.224755767" observedRunningTime="2025-10-08 14:26:37.454189853 +0000 UTC m=+222.605124930" watchObservedRunningTime="2025-10-08 14:26:37.457052786 +0000 UTC m=+222.607987863" Oct 08 14:26:37 crc kubenswrapper[4624]: I1008 14:26:37.471174 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cd6bd92-2cce-46a2-881d-57e97e9b00bc" path="/var/lib/kubelet/pods/7cd6bd92-2cce-46a2-881d-57e97e9b00bc/volumes" Oct 08 14:26:39 crc kubenswrapper[4624]: I1008 14:26:39.297573 4624 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7mwws" Oct 08 14:26:39 crc kubenswrapper[4624]: I1008 14:26:39.341229 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7mwws" Oct 08 14:26:46 crc kubenswrapper[4624]: I1008 14:26:46.190131 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mnjz6" Oct 08 14:26:46 crc kubenswrapper[4624]: I1008 14:26:46.190683 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mnjz6" Oct 08 14:26:46 crc kubenswrapper[4624]: I1008 14:26:46.226564 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mnjz6" Oct 08 14:26:46 crc kubenswrapper[4624]: I1008 14:26:46.403151 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b8zl8"] Oct 08 14:26:46 crc kubenswrapper[4624]: I1008 14:26:46.416985 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qgsc5" Oct 08 14:26:46 crc kubenswrapper[4624]: I1008 14:26:46.417030 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qgsc5" Oct 08 14:26:46 crc kubenswrapper[4624]: I1008 14:26:46.562732 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qgsc5" Oct 08 14:26:46 crc kubenswrapper[4624]: I1008 14:26:46.592240 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mnjz6" Oct 08 14:26:46 crc kubenswrapper[4624]: I1008 14:26:46.642067 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qgsc5" Oct 08 14:26:50 crc kubenswrapper[4624]: I1008 14:26:50.473390 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qgsc5"] Oct 08 14:26:50 crc kubenswrapper[4624]: I1008 14:26:50.473954 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qgsc5" podUID="e17b3f85-b77b-4639-9993-66db89499fcf" containerName="registry-server" containerID="cri-o://98622fa4911dfcd24111a80a42c721118846f07075488456df8d2c2d7d2cebc6" gracePeriod=2 Oct 08 14:26:50 crc kubenswrapper[4624]: I1008 14:26:50.804832 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qgsc5" Oct 08 14:26:50 crc kubenswrapper[4624]: I1008 14:26:50.950608 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e17b3f85-b77b-4639-9993-66db89499fcf-utilities\") pod \"e17b3f85-b77b-4639-9993-66db89499fcf\" (UID: \"e17b3f85-b77b-4639-9993-66db89499fcf\") " Oct 08 14:26:50 crc kubenswrapper[4624]: I1008 14:26:50.950728 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e17b3f85-b77b-4639-9993-66db89499fcf-catalog-content\") pod \"e17b3f85-b77b-4639-9993-66db89499fcf\" (UID: \"e17b3f85-b77b-4639-9993-66db89499fcf\") " Oct 08 14:26:50 crc kubenswrapper[4624]: I1008 14:26:50.951514 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e17b3f85-b77b-4639-9993-66db89499fcf-utilities" (OuterVolumeSpecName: "utilities") pod "e17b3f85-b77b-4639-9993-66db89499fcf" (UID: "e17b3f85-b77b-4639-9993-66db89499fcf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:26:50 crc kubenswrapper[4624]: I1008 14:26:50.951557 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwnrj\" (UniqueName: \"kubernetes.io/projected/e17b3f85-b77b-4639-9993-66db89499fcf-kube-api-access-mwnrj\") pod \"e17b3f85-b77b-4639-9993-66db89499fcf\" (UID: \"e17b3f85-b77b-4639-9993-66db89499fcf\") " Oct 08 14:26:50 crc kubenswrapper[4624]: I1008 14:26:50.951964 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e17b3f85-b77b-4639-9993-66db89499fcf-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:26:50 crc kubenswrapper[4624]: I1008 14:26:50.963107 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e17b3f85-b77b-4639-9993-66db89499fcf-kube-api-access-mwnrj" (OuterVolumeSpecName: "kube-api-access-mwnrj") pod "e17b3f85-b77b-4639-9993-66db89499fcf" (UID: "e17b3f85-b77b-4639-9993-66db89499fcf"). InnerVolumeSpecName "kube-api-access-mwnrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:26:50 crc kubenswrapper[4624]: I1008 14:26:50.994893 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e17b3f85-b77b-4639-9993-66db89499fcf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e17b3f85-b77b-4639-9993-66db89499fcf" (UID: "e17b3f85-b77b-4639-9993-66db89499fcf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:26:51 crc kubenswrapper[4624]: I1008 14:26:51.052857 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwnrj\" (UniqueName: \"kubernetes.io/projected/e17b3f85-b77b-4639-9993-66db89499fcf-kube-api-access-mwnrj\") on node \"crc\" DevicePath \"\"" Oct 08 14:26:51 crc kubenswrapper[4624]: I1008 14:26:51.052888 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e17b3f85-b77b-4639-9993-66db89499fcf-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:26:51 crc kubenswrapper[4624]: I1008 14:26:51.502269 4624 generic.go:334] "Generic (PLEG): container finished" podID="e17b3f85-b77b-4639-9993-66db89499fcf" containerID="98622fa4911dfcd24111a80a42c721118846f07075488456df8d2c2d7d2cebc6" exitCode=0 Oct 08 14:26:51 crc kubenswrapper[4624]: I1008 14:26:51.502309 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qgsc5" event={"ID":"e17b3f85-b77b-4639-9993-66db89499fcf","Type":"ContainerDied","Data":"98622fa4911dfcd24111a80a42c721118846f07075488456df8d2c2d7d2cebc6"} Oct 08 14:26:51 crc kubenswrapper[4624]: I1008 14:26:51.502336 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qgsc5" event={"ID":"e17b3f85-b77b-4639-9993-66db89499fcf","Type":"ContainerDied","Data":"2c68f34db854b7a2ef01fb3e19917d4f20887b6aba40de6f76134628d29c1a2e"} Oct 08 14:26:51 crc kubenswrapper[4624]: I1008 14:26:51.502352 4624 scope.go:117] "RemoveContainer" containerID="98622fa4911dfcd24111a80a42c721118846f07075488456df8d2c2d7d2cebc6" Oct 08 14:26:51 crc kubenswrapper[4624]: I1008 14:26:51.502445 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qgsc5" Oct 08 14:26:51 crc kubenswrapper[4624]: I1008 14:26:51.522060 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qgsc5"] Oct 08 14:26:51 crc kubenswrapper[4624]: I1008 14:26:51.524862 4624 scope.go:117] "RemoveContainer" containerID="630bada73773c1413064690c957ec3da7f39d7decbfcbd904a5ea3a9a6a5b2fa" Oct 08 14:26:51 crc kubenswrapper[4624]: I1008 14:26:51.526974 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qgsc5"] Oct 08 14:26:51 crc kubenswrapper[4624]: I1008 14:26:51.545325 4624 scope.go:117] "RemoveContainer" containerID="dec605386a48556e6a1e10c2873b52ce22193e47b7a5df18b0abe60e0c435b3a" Oct 08 14:26:51 crc kubenswrapper[4624]: I1008 14:26:51.558141 4624 scope.go:117] "RemoveContainer" containerID="98622fa4911dfcd24111a80a42c721118846f07075488456df8d2c2d7d2cebc6" Oct 08 14:26:51 crc kubenswrapper[4624]: E1008 14:26:51.559778 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98622fa4911dfcd24111a80a42c721118846f07075488456df8d2c2d7d2cebc6\": container with ID starting with 98622fa4911dfcd24111a80a42c721118846f07075488456df8d2c2d7d2cebc6 not found: ID does not exist" containerID="98622fa4911dfcd24111a80a42c721118846f07075488456df8d2c2d7d2cebc6" Oct 08 14:26:51 crc kubenswrapper[4624]: I1008 14:26:51.559846 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98622fa4911dfcd24111a80a42c721118846f07075488456df8d2c2d7d2cebc6"} err="failed to get container status \"98622fa4911dfcd24111a80a42c721118846f07075488456df8d2c2d7d2cebc6\": rpc error: code = NotFound desc = could not find container \"98622fa4911dfcd24111a80a42c721118846f07075488456df8d2c2d7d2cebc6\": container with ID starting with 98622fa4911dfcd24111a80a42c721118846f07075488456df8d2c2d7d2cebc6 not found: ID does not exist" Oct 08 14:26:51 crc kubenswrapper[4624]: I1008 14:26:51.559895 4624 scope.go:117] "RemoveContainer" containerID="630bada73773c1413064690c957ec3da7f39d7decbfcbd904a5ea3a9a6a5b2fa" Oct 08 14:26:51 crc kubenswrapper[4624]: E1008 14:26:51.560219 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"630bada73773c1413064690c957ec3da7f39d7decbfcbd904a5ea3a9a6a5b2fa\": container with ID starting with 630bada73773c1413064690c957ec3da7f39d7decbfcbd904a5ea3a9a6a5b2fa not found: ID does not exist" containerID="630bada73773c1413064690c957ec3da7f39d7decbfcbd904a5ea3a9a6a5b2fa" Oct 08 14:26:51 crc kubenswrapper[4624]: I1008 14:26:51.560248 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"630bada73773c1413064690c957ec3da7f39d7decbfcbd904a5ea3a9a6a5b2fa"} err="failed to get container status \"630bada73773c1413064690c957ec3da7f39d7decbfcbd904a5ea3a9a6a5b2fa\": rpc error: code = NotFound desc = could not find container \"630bada73773c1413064690c957ec3da7f39d7decbfcbd904a5ea3a9a6a5b2fa\": container with ID starting with 630bada73773c1413064690c957ec3da7f39d7decbfcbd904a5ea3a9a6a5b2fa not found: ID does not exist" Oct 08 14:26:51 crc kubenswrapper[4624]: I1008 14:26:51.560269 4624 scope.go:117] "RemoveContainer" containerID="dec605386a48556e6a1e10c2873b52ce22193e47b7a5df18b0abe60e0c435b3a" Oct 08 14:26:51 crc kubenswrapper[4624]: E1008 14:26:51.560576 4624 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"dec605386a48556e6a1e10c2873b52ce22193e47b7a5df18b0abe60e0c435b3a\": container with ID starting with dec605386a48556e6a1e10c2873b52ce22193e47b7a5df18b0abe60e0c435b3a not found: ID does not exist" containerID="dec605386a48556e6a1e10c2873b52ce22193e47b7a5df18b0abe60e0c435b3a" Oct 08 14:26:51 crc kubenswrapper[4624]: I1008 14:26:51.560616 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dec605386a48556e6a1e10c2873b52ce22193e47b7a5df18b0abe60e0c435b3a"} err="failed to get container status \"dec605386a48556e6a1e10c2873b52ce22193e47b7a5df18b0abe60e0c435b3a\": rpc error: code = NotFound desc = could not find container \"dec605386a48556e6a1e10c2873b52ce22193e47b7a5df18b0abe60e0c435b3a\": container with ID starting with dec605386a48556e6a1e10c2873b52ce22193e47b7a5df18b0abe60e0c435b3a not found: ID does not exist" Oct 08 14:26:53 crc kubenswrapper[4624]: I1008 14:26:53.472104 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e17b3f85-b77b-4639-9993-66db89499fcf" path="/var/lib/kubelet/pods/e17b3f85-b77b-4639-9993-66db89499fcf/volumes" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.445159 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" podUID="03313d2c-4c05-4a55-a595-cf633b935c29" containerName="oauth-openshift" containerID="cri-o://f8af31dd925ac51c15f3afa768863811cdb7f74eeeabc95479ee817003853517" gracePeriod=15 Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.592644 4624 generic.go:334] "Generic (PLEG): container finished" podID="03313d2c-4c05-4a55-a595-cf633b935c29" containerID="f8af31dd925ac51c15f3afa768863811cdb7f74eeeabc95479ee817003853517" exitCode=0 Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.592684 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" event={"ID":"03313d2c-4c05-4a55-a595-cf633b935c29","Type":"ContainerDied","Data":"f8af31dd925ac51c15f3afa768863811cdb7f74eeeabc95479ee817003853517"} Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.776646 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.812660 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-74fd85d944-p6dsv"] Oct 08 14:27:11 crc kubenswrapper[4624]: E1008 14:27:11.812905 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17b3f85-b77b-4639-9993-66db89499fcf" containerName="registry-server" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.812923 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17b3f85-b77b-4639-9993-66db89499fcf" containerName="registry-server" Oct 08 14:27:11 crc kubenswrapper[4624]: E1008 14:27:11.812938 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd6bd92-2cce-46a2-881d-57e97e9b00bc" containerName="extract-utilities" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.812945 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd6bd92-2cce-46a2-881d-57e97e9b00bc" containerName="extract-utilities" Oct 08 14:27:11 crc kubenswrapper[4624]: E1008 14:27:11.812954 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17b3f85-b77b-4639-9993-66db89499fcf" containerName="extract-utilities" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.812961 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17b3f85-b77b-4639-9993-66db89499fcf" containerName="extract-utilities" Oct 08 14:27:11 crc kubenswrapper[4624]: E1008 14:27:11.812977 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8da3440-b5d4-46cc-a1ea-d3e4b7f03269" containerName="extract-utilities" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.812985 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8da3440-b5d4-46cc-a1ea-d3e4b7f03269" containerName="extract-utilities" Oct 08 14:27:11 crc kubenswrapper[4624]: E1008 14:27:11.812994 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="273358a6-107e-4e31-aaff-1f825924ef2d" containerName="extract-content" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.813001 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="273358a6-107e-4e31-aaff-1f825924ef2d" containerName="extract-content" Oct 08 14:27:11 crc kubenswrapper[4624]: E1008 14:27:11.813009 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd6bd92-2cce-46a2-881d-57e97e9b00bc" containerName="registry-server" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.813016 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd6bd92-2cce-46a2-881d-57e97e9b00bc" containerName="registry-server" Oct 08 14:27:11 crc kubenswrapper[4624]: E1008 14:27:11.813026 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17b3f85-b77b-4639-9993-66db89499fcf" containerName="extract-content" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.813033 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17b3f85-b77b-4639-9993-66db89499fcf" containerName="extract-content" Oct 08 14:27:11 crc kubenswrapper[4624]: E1008 14:27:11.813091 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac5549a-aa6e-4136-ba8f-72ea118d92e9" containerName="pruner" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.813100 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac5549a-aa6e-4136-ba8f-72ea118d92e9" containerName="pruner" Oct 08 14:27:11 crc kubenswrapper[4624]: E1008 14:27:11.813112 4624 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="273358a6-107e-4e31-aaff-1f825924ef2d" containerName="extract-utilities" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.813120 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="273358a6-107e-4e31-aaff-1f825924ef2d" containerName="extract-utilities" Oct 08 14:27:11 crc kubenswrapper[4624]: E1008 14:27:11.813168 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32878d45-6648-45f2-a1ff-773403c738ab" containerName="pruner" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.813177 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="32878d45-6648-45f2-a1ff-773403c738ab" containerName="pruner" Oct 08 14:27:11 crc kubenswrapper[4624]: E1008 14:27:11.813185 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="273358a6-107e-4e31-aaff-1f825924ef2d" containerName="registry-server" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.813192 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="273358a6-107e-4e31-aaff-1f825924ef2d" containerName="registry-server" Oct 08 14:27:11 crc kubenswrapper[4624]: E1008 14:27:11.813207 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03313d2c-4c05-4a55-a595-cf633b935c29" containerName="oauth-openshift" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.813248 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="03313d2c-4c05-4a55-a595-cf633b935c29" containerName="oauth-openshift" Oct 08 14:27:11 crc kubenswrapper[4624]: E1008 14:27:11.813260 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8da3440-b5d4-46cc-a1ea-d3e4b7f03269" containerName="registry-server" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.813268 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8da3440-b5d4-46cc-a1ea-d3e4b7f03269" containerName="registry-server" Oct 08 14:27:11 crc kubenswrapper[4624]: E1008 14:27:11.813279 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8da3440-b5d4-46cc-a1ea-d3e4b7f03269" containerName="extract-content" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.813288 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8da3440-b5d4-46cc-a1ea-d3e4b7f03269" containerName="extract-content" Oct 08 14:27:11 crc kubenswrapper[4624]: E1008 14:27:11.813353 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd6bd92-2cce-46a2-881d-57e97e9b00bc" containerName="extract-content" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.813363 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd6bd92-2cce-46a2-881d-57e97e9b00bc" containerName="extract-content" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.813607 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cd6bd92-2cce-46a2-881d-57e97e9b00bc" containerName="registry-server" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.813650 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="273358a6-107e-4e31-aaff-1f825924ef2d" containerName="registry-server" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.813663 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="32878d45-6648-45f2-a1ff-773403c738ab" containerName="pruner" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.813673 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="03313d2c-4c05-4a55-a595-cf633b935c29" containerName="oauth-openshift" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.813682 4624 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6ac5549a-aa6e-4136-ba8f-72ea118d92e9" containerName="pruner" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.813692 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8da3440-b5d4-46cc-a1ea-d3e4b7f03269" containerName="registry-server" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.813700 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e17b3f85-b77b-4639-9993-66db89499fcf" containerName="registry-server" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.814226 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.817575 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-74fd85d944-p6dsv"] Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.916087 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-cliconfig\") pod \"03313d2c-4c05-4a55-a595-cf633b935c29\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.916448 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-template-error\") pod \"03313d2c-4c05-4a55-a595-cf633b935c29\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.916476 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-audit-policies\") pod \"03313d2c-4c05-4a55-a595-cf633b935c29\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.916516 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-service-ca\") pod \"03313d2c-4c05-4a55-a595-cf633b935c29\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.916562 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-serving-cert\") pod \"03313d2c-4c05-4a55-a595-cf633b935c29\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.916588 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-session\") pod \"03313d2c-4c05-4a55-a595-cf633b935c29\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.916623 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-router-certs\") pod \"03313d2c-4c05-4a55-a595-cf633b935c29\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " Oct 08 
14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.916875 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "03313d2c-4c05-4a55-a595-cf633b935c29" (UID: "03313d2c-4c05-4a55-a595-cf633b935c29"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.917534 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-template-provider-selection\") pod \"03313d2c-4c05-4a55-a595-cf633b935c29\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.917584 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-template-login\") pod \"03313d2c-4c05-4a55-a595-cf633b935c29\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.917614 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-trusted-ca-bundle\") pod \"03313d2c-4c05-4a55-a595-cf633b935c29\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.917669 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vggrv\" (UniqueName: \"kubernetes.io/projected/03313d2c-4c05-4a55-a595-cf633b935c29-kube-api-access-vggrv\") pod \"03313d2c-4c05-4a55-a595-cf633b935c29\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.917693 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-idp-0-file-data\") pod \"03313d2c-4c05-4a55-a595-cf633b935c29\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.917721 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03313d2c-4c05-4a55-a595-cf633b935c29-audit-dir\") pod \"03313d2c-4c05-4a55-a595-cf633b935c29\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.917746 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-ocp-branding-template\") pod \"03313d2c-4c05-4a55-a595-cf633b935c29\" (UID: \"03313d2c-4c05-4a55-a595-cf633b935c29\") " Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.917703 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "03313d2c-4c05-4a55-a595-cf633b935c29" (UID: "03313d2c-4c05-4a55-a595-cf633b935c29"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.917913 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.917910 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "03313d2c-4c05-4a55-a595-cf633b935c29" (UID: "03313d2c-4c05-4a55-a595-cf633b935c29"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.917951 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-user-template-login\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.917971 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03313d2c-4c05-4a55-a595-cf633b935c29-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "03313d2c-4c05-4a55-a595-cf633b935c29" (UID: "03313d2c-4c05-4a55-a595-cf633b935c29"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.917981 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.918010 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.918050 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.918093 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-user-template-error\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.918127 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.918145 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phttm\" (UniqueName: \"kubernetes.io/projected/819938a9-95b4-4925-84a5-6039470ee46b-kube-api-access-phttm\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.918161 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/819938a9-95b4-4925-84a5-6039470ee46b-audit-policies\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.918181 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-system-session\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.918196 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.918231 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.918250 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.918280 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/819938a9-95b4-4925-84a5-6039470ee46b-audit-dir\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.918318 4624 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03313d2c-4c05-4a55-a595-cf633b935c29-audit-dir\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.918328 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.918337 4624 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.918346 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.920377 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "03313d2c-4c05-4a55-a595-cf633b935c29" (UID: 
"03313d2c-4c05-4a55-a595-cf633b935c29"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.922753 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03313d2c-4c05-4a55-a595-cf633b935c29-kube-api-access-vggrv" (OuterVolumeSpecName: "kube-api-access-vggrv") pod "03313d2c-4c05-4a55-a595-cf633b935c29" (UID: "03313d2c-4c05-4a55-a595-cf633b935c29"). InnerVolumeSpecName "kube-api-access-vggrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.922844 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "03313d2c-4c05-4a55-a595-cf633b935c29" (UID: "03313d2c-4c05-4a55-a595-cf633b935c29"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.923117 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "03313d2c-4c05-4a55-a595-cf633b935c29" (UID: "03313d2c-4c05-4a55-a595-cf633b935c29"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.932728 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "03313d2c-4c05-4a55-a595-cf633b935c29" (UID: "03313d2c-4c05-4a55-a595-cf633b935c29"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.933050 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "03313d2c-4c05-4a55-a595-cf633b935c29" (UID: "03313d2c-4c05-4a55-a595-cf633b935c29"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.933254 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "03313d2c-4c05-4a55-a595-cf633b935c29" (UID: "03313d2c-4c05-4a55-a595-cf633b935c29"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.933389 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "03313d2c-4c05-4a55-a595-cf633b935c29" (UID: "03313d2c-4c05-4a55-a595-cf633b935c29"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.933426 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "03313d2c-4c05-4a55-a595-cf633b935c29" (UID: "03313d2c-4c05-4a55-a595-cf633b935c29"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:27:11 crc kubenswrapper[4624]: I1008 14:27:11.933544 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "03313d2c-4c05-4a55-a595-cf633b935c29" (UID: "03313d2c-4c05-4a55-a595-cf633b935c29"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.018975 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019046 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-user-template-login\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019074 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019099 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019138 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019176 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-user-template-error\") pod 
\"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019204 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019227 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phttm\" (UniqueName: \"kubernetes.io/projected/819938a9-95b4-4925-84a5-6039470ee46b-kube-api-access-phttm\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019253 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/819938a9-95b4-4925-84a5-6039470ee46b-audit-policies\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019277 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-system-session\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019300 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019323 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019347 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019380 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/819938a9-95b4-4925-84a5-6039470ee46b-audit-dir\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: 
\"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019441 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019456 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019471 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019486 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vggrv\" (UniqueName: \"kubernetes.io/projected/03313d2c-4c05-4a55-a595-cf633b935c29-kube-api-access-vggrv\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019498 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019510 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019521 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019533 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019662 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019681 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03313d2c-4c05-4a55-a595-cf633b935c29-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.019718 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/819938a9-95b4-4925-84a5-6039470ee46b-audit-dir\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc 
kubenswrapper[4624]: I1008 14:27:12.020759 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.020766 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.020766 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/819938a9-95b4-4925-84a5-6039470ee46b-audit-policies\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.021501 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.022686 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-user-template-login\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.023325 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.023556 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-system-session\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.023882 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.023957 4624 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.025451 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-user-template-error\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.025740 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.026523 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/819938a9-95b4-4925-84a5-6039470ee46b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.036479 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phttm\" (UniqueName: \"kubernetes.io/projected/819938a9-95b4-4925-84a5-6039470ee46b-kube-api-access-phttm\") pod \"oauth-openshift-74fd85d944-p6dsv\" (UID: \"819938a9-95b4-4925-84a5-6039470ee46b\") " pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.142247 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.514245 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-74fd85d944-p6dsv"] Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.598862 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" event={"ID":"03313d2c-4c05-4a55-a595-cf633b935c29","Type":"ContainerDied","Data":"6671e382f0c4638250ee59f7118f3be4da783463a2b94fd589aad111ccad249b"} Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.599132 4624 scope.go:117] "RemoveContainer" containerID="f8af31dd925ac51c15f3afa768863811cdb7f74eeeabc95479ee817003853517" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.599090 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b8zl8" Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.601089 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" event={"ID":"819938a9-95b4-4925-84a5-6039470ee46b","Type":"ContainerStarted","Data":"5e454c95334108c99d202249fe37a6743f4bb4b080fee611447e4aa920587953"} Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.630782 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b8zl8"] Oct 08 14:27:12 crc kubenswrapper[4624]: I1008 14:27:12.634242 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b8zl8"] Oct 08 14:27:13 crc kubenswrapper[4624]: I1008 14:27:13.472448 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03313d2c-4c05-4a55-a595-cf633b935c29" path="/var/lib/kubelet/pods/03313d2c-4c05-4a55-a595-cf633b935c29/volumes" Oct 08 14:27:13 crc kubenswrapper[4624]: I1008 14:27:13.607368 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" event={"ID":"819938a9-95b4-4925-84a5-6039470ee46b","Type":"ContainerStarted","Data":"b7ee8bca8fd5172e618f8ab91d483da02458b03998f44f1469e69d46120a432d"} Oct 08 14:27:13 crc kubenswrapper[4624]: I1008 14:27:13.608287 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:13 crc kubenswrapper[4624]: I1008 14:27:13.612568 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" Oct 08 14:27:13 crc kubenswrapper[4624]: I1008 14:27:13.626939 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-74fd85d944-p6dsv" podStartSLOduration=27.626921351 podStartE2EDuration="27.626921351s" podCreationTimestamp="2025-10-08 14:26:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:27:13.624243513 +0000 UTC m=+258.775178600" watchObservedRunningTime="2025-10-08 14:27:13.626921351 +0000 UTC m=+258.777856428" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.047916 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wt4rs"] Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.051021 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wt4rs" podUID="a25349b6-d167-4846-884c-9c057b5c6491" containerName="registry-server" containerID="cri-o://857becc4ab790c5f1a90b273946c23a88c930d2b2e9643589dc4452c0205f8c7" gracePeriod=30 Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.064865 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mnjz6"] Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.065153 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mnjz6" podUID="78f43ab9-6c12-4d67-9c33-b186ebcef93c" containerName="registry-server" containerID="cri-o://5cb57667a0c497f3babb4f09d750db93dab0d1fefc7355aec543824d85d9329a" gracePeriod=30 Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.087448 4624 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x6fbb"] Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.087732 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-x6fbb" podUID="ae5a1b04-50e9-45ad-abb5-4a71d58f2a31" containerName="marketplace-operator" containerID="cri-o://fb3f7b70848b0f580520d456475cf7cd9559a0de6dce62cd54e624f314ad0444" gracePeriod=30 Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.098347 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5npx6"] Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.098644 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5npx6" podUID="76cec727-339f-4269-b0ee-d4aec3b0d6e3" containerName="registry-server" containerID="cri-o://99efd5989e34a9dae3b4f6772f064452cf3a28e5cc8f639f66944e52f5c2f7c7" gracePeriod=30 Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.111048 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7mwws"] Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.111365 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7mwws" podUID="e21258b9-106f-4b15-aa2c-7e65598341c2" containerName="registry-server" containerID="cri-o://4250685d2388500ae6c7cb5a736021d82d74c900b0e112026edaf6d10c99182d" gracePeriod=30 Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.128311 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cwbr6"] Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.132852 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cwbr6" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.141874 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cwbr6"] Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.235654 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jltvr\" (UniqueName: \"kubernetes.io/projected/348b94bb-ae57-4f52-8592-53abc49b97d0-kube-api-access-jltvr\") pod \"marketplace-operator-79b997595-cwbr6\" (UID: \"348b94bb-ae57-4f52-8592-53abc49b97d0\") " pod="openshift-marketplace/marketplace-operator-79b997595-cwbr6" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.236224 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/348b94bb-ae57-4f52-8592-53abc49b97d0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cwbr6\" (UID: \"348b94bb-ae57-4f52-8592-53abc49b97d0\") " pod="openshift-marketplace/marketplace-operator-79b997595-cwbr6" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.236260 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/348b94bb-ae57-4f52-8592-53abc49b97d0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cwbr6\" (UID: \"348b94bb-ae57-4f52-8592-53abc49b97d0\") " pod="openshift-marketplace/marketplace-operator-79b997595-cwbr6" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.338169 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jltvr\" (UniqueName: \"kubernetes.io/projected/348b94bb-ae57-4f52-8592-53abc49b97d0-kube-api-access-jltvr\") pod \"marketplace-operator-79b997595-cwbr6\" (UID: \"348b94bb-ae57-4f52-8592-53abc49b97d0\") " pod="openshift-marketplace/marketplace-operator-79b997595-cwbr6" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.338254 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/348b94bb-ae57-4f52-8592-53abc49b97d0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cwbr6\" (UID: \"348b94bb-ae57-4f52-8592-53abc49b97d0\") " pod="openshift-marketplace/marketplace-operator-79b997595-cwbr6" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.338317 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/348b94bb-ae57-4f52-8592-53abc49b97d0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cwbr6\" (UID: \"348b94bb-ae57-4f52-8592-53abc49b97d0\") " pod="openshift-marketplace/marketplace-operator-79b997595-cwbr6" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.340501 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/348b94bb-ae57-4f52-8592-53abc49b97d0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cwbr6\" (UID: \"348b94bb-ae57-4f52-8592-53abc49b97d0\") " pod="openshift-marketplace/marketplace-operator-79b997595-cwbr6" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.350521 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/348b94bb-ae57-4f52-8592-53abc49b97d0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cwbr6\" (UID: \"348b94bb-ae57-4f52-8592-53abc49b97d0\") " pod="openshift-marketplace/marketplace-operator-79b997595-cwbr6" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.363506 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jltvr\" (UniqueName: \"kubernetes.io/projected/348b94bb-ae57-4f52-8592-53abc49b97d0-kube-api-access-jltvr\") pod \"marketplace-operator-79b997595-cwbr6\" (UID: \"348b94bb-ae57-4f52-8592-53abc49b97d0\") " pod="openshift-marketplace/marketplace-operator-79b997595-cwbr6" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.410319 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cwbr6" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.478314 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mnjz6" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.546987 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78f43ab9-6c12-4d67-9c33-b186ebcef93c-catalog-content\") pod \"78f43ab9-6c12-4d67-9c33-b186ebcef93c\" (UID: \"78f43ab9-6c12-4d67-9c33-b186ebcef93c\") " Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.547070 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq5rs\" (UniqueName: \"kubernetes.io/projected/78f43ab9-6c12-4d67-9c33-b186ebcef93c-kube-api-access-tq5rs\") pod \"78f43ab9-6c12-4d67-9c33-b186ebcef93c\" (UID: \"78f43ab9-6c12-4d67-9c33-b186ebcef93c\") " Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.547111 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78f43ab9-6c12-4d67-9c33-b186ebcef93c-utilities\") pod \"78f43ab9-6c12-4d67-9c33-b186ebcef93c\" (UID: \"78f43ab9-6c12-4d67-9c33-b186ebcef93c\") " Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.549554 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78f43ab9-6c12-4d67-9c33-b186ebcef93c-utilities" (OuterVolumeSpecName: "utilities") pod "78f43ab9-6c12-4d67-9c33-b186ebcef93c" (UID: "78f43ab9-6c12-4d67-9c33-b186ebcef93c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.553461 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78f43ab9-6c12-4d67-9c33-b186ebcef93c-kube-api-access-tq5rs" (OuterVolumeSpecName: "kube-api-access-tq5rs") pod "78f43ab9-6c12-4d67-9c33-b186ebcef93c" (UID: "78f43ab9-6c12-4d67-9c33-b186ebcef93c"). InnerVolumeSpecName "kube-api-access-tq5rs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.648659 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq5rs\" (UniqueName: \"kubernetes.io/projected/78f43ab9-6c12-4d67-9c33-b186ebcef93c-kube-api-access-tq5rs\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.648961 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78f43ab9-6c12-4d67-9c33-b186ebcef93c-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.654132 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78f43ab9-6c12-4d67-9c33-b186ebcef93c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78f43ab9-6c12-4d67-9c33-b186ebcef93c" (UID: "78f43ab9-6c12-4d67-9c33-b186ebcef93c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.682537 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wt4rs" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.690819 4624 generic.go:334] "Generic (PLEG): container finished" podID="76cec727-339f-4269-b0ee-d4aec3b0d6e3" containerID="99efd5989e34a9dae3b4f6772f064452cf3a28e5cc8f639f66944e52f5c2f7c7" exitCode=0 Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.690857 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5npx6" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.690913 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5npx6" event={"ID":"76cec727-339f-4269-b0ee-d4aec3b0d6e3","Type":"ContainerDied","Data":"99efd5989e34a9dae3b4f6772f064452cf3a28e5cc8f639f66944e52f5c2f7c7"} Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.690935 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5npx6" event={"ID":"76cec727-339f-4269-b0ee-d4aec3b0d6e3","Type":"ContainerDied","Data":"e9a851a9fca22441aae4e7f54eef386eab5bdad7ad3f3f7c004943a681e5ccea"} Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.690950 4624 scope.go:117] "RemoveContainer" containerID="99efd5989e34a9dae3b4f6772f064452cf3a28e5cc8f639f66944e52f5c2f7c7" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.703680 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-x6fbb" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.711340 4624 generic.go:334] "Generic (PLEG): container finished" podID="a25349b6-d167-4846-884c-9c057b5c6491" containerID="857becc4ab790c5f1a90b273946c23a88c930d2b2e9643589dc4452c0205f8c7" exitCode=0 Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.711423 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wt4rs" event={"ID":"a25349b6-d167-4846-884c-9c057b5c6491","Type":"ContainerDied","Data":"857becc4ab790c5f1a90b273946c23a88c930d2b2e9643589dc4452c0205f8c7"} Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.711447 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wt4rs" event={"ID":"a25349b6-d167-4846-884c-9c057b5c6491","Type":"ContainerDied","Data":"1eef3d5baac2e952a81d4f42d3b1384d4789ffe4204dbb770df8f75238c8c150"} Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.711522 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wt4rs" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.716764 4624 generic.go:334] "Generic (PLEG): container finished" podID="78f43ab9-6c12-4d67-9c33-b186ebcef93c" containerID="5cb57667a0c497f3babb4f09d750db93dab0d1fefc7355aec543824d85d9329a" exitCode=0 Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.716821 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mnjz6" event={"ID":"78f43ab9-6c12-4d67-9c33-b186ebcef93c","Type":"ContainerDied","Data":"5cb57667a0c497f3babb4f09d750db93dab0d1fefc7355aec543824d85d9329a"} Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.716848 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mnjz6" event={"ID":"78f43ab9-6c12-4d67-9c33-b186ebcef93c","Type":"ContainerDied","Data":"4e64a5963ebbd37aee2b296ccebce9a68127691876dbba730ab62b0532a8bcf4"} Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.716904 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mnjz6" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.726268 4624 scope.go:117] "RemoveContainer" containerID="89713d2099d025df5f4ce48c27336a77c88579334df72ef7397684e9651e2f5b" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.748143 4624 generic.go:334] "Generic (PLEG): container finished" podID="e21258b9-106f-4b15-aa2c-7e65598341c2" containerID="4250685d2388500ae6c7cb5a736021d82d74c900b0e112026edaf6d10c99182d" exitCode=0 Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.748219 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mwws" event={"ID":"e21258b9-106f-4b15-aa2c-7e65598341c2","Type":"ContainerDied","Data":"4250685d2388500ae6c7cb5a736021d82d74c900b0e112026edaf6d10c99182d"} Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.749404 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25349b6-d167-4846-884c-9c057b5c6491-catalog-content\") pod \"a25349b6-d167-4846-884c-9c057b5c6491\" (UID: \"a25349b6-d167-4846-884c-9c057b5c6491\") " Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.749503 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25349b6-d167-4846-884c-9c057b5c6491-utilities\") pod \"a25349b6-d167-4846-884c-9c057b5c6491\" (UID: \"a25349b6-d167-4846-884c-9c057b5c6491\") " Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.749551 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79htk\" (UniqueName: \"kubernetes.io/projected/a25349b6-d167-4846-884c-9c057b5c6491-kube-api-access-79htk\") pod \"a25349b6-d167-4846-884c-9c057b5c6491\" (UID: \"a25349b6-d167-4846-884c-9c057b5c6491\") " Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.749595 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76cec727-339f-4269-b0ee-d4aec3b0d6e3-utilities\") pod \"76cec727-339f-4269-b0ee-d4aec3b0d6e3\" (UID: \"76cec727-339f-4269-b0ee-d4aec3b0d6e3\") " Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.749669 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76cec727-339f-4269-b0ee-d4aec3b0d6e3-catalog-content\") pod \"76cec727-339f-4269-b0ee-d4aec3b0d6e3\" (UID: \"76cec727-339f-4269-b0ee-d4aec3b0d6e3\") " Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.749689 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtw52\" (UniqueName: \"kubernetes.io/projected/76cec727-339f-4269-b0ee-d4aec3b0d6e3-kube-api-access-dtw52\") pod \"76cec727-339f-4269-b0ee-d4aec3b0d6e3\" (UID: \"76cec727-339f-4269-b0ee-d4aec3b0d6e3\") " Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.749986 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78f43ab9-6c12-4d67-9c33-b186ebcef93c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.752433 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25349b6-d167-4846-884c-9c057b5c6491-utilities" (OuterVolumeSpecName: "utilities") pod "a25349b6-d167-4846-884c-9c057b5c6491" (UID: 
"a25349b6-d167-4846-884c-9c057b5c6491"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.755199 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76cec727-339f-4269-b0ee-d4aec3b0d6e3-utilities" (OuterVolumeSpecName: "utilities") pod "76cec727-339f-4269-b0ee-d4aec3b0d6e3" (UID: "76cec727-339f-4269-b0ee-d4aec3b0d6e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.755542 4624 generic.go:334] "Generic (PLEG): container finished" podID="ae5a1b04-50e9-45ad-abb5-4a71d58f2a31" containerID="fb3f7b70848b0f580520d456475cf7cd9559a0de6dce62cd54e624f314ad0444" exitCode=0 Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.755620 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-x6fbb" event={"ID":"ae5a1b04-50e9-45ad-abb5-4a71d58f2a31","Type":"ContainerDied","Data":"fb3f7b70848b0f580520d456475cf7cd9559a0de6dce62cd54e624f314ad0444"} Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.755677 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-x6fbb" event={"ID":"ae5a1b04-50e9-45ad-abb5-4a71d58f2a31","Type":"ContainerDied","Data":"7310196460ddaeafe9680b4e823e427d26b9408d171ecba97ebc415b7d1cd556"} Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.755903 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-x6fbb" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.758213 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76cec727-339f-4269-b0ee-d4aec3b0d6e3-kube-api-access-dtw52" (OuterVolumeSpecName: "kube-api-access-dtw52") pod "76cec727-339f-4269-b0ee-d4aec3b0d6e3" (UID: "76cec727-339f-4269-b0ee-d4aec3b0d6e3"). InnerVolumeSpecName "kube-api-access-dtw52". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.759720 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7mwws" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.775169 4624 scope.go:117] "RemoveContainer" containerID="43108ef7aef74ae691aad9aea7b72a7bb1b6cab74956679f58e0f55c63303309" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.796171 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25349b6-d167-4846-884c-9c057b5c6491-kube-api-access-79htk" (OuterVolumeSpecName: "kube-api-access-79htk") pod "a25349b6-d167-4846-884c-9c057b5c6491" (UID: "a25349b6-d167-4846-884c-9c057b5c6491"). InnerVolumeSpecName "kube-api-access-79htk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.797213 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76cec727-339f-4269-b0ee-d4aec3b0d6e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76cec727-339f-4269-b0ee-d4aec3b0d6e3" (UID: "76cec727-339f-4269-b0ee-d4aec3b0d6e3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.809057 4624 scope.go:117] "RemoveContainer" containerID="99efd5989e34a9dae3b4f6772f064452cf3a28e5cc8f639f66944e52f5c2f7c7" Oct 08 14:27:30 crc kubenswrapper[4624]: E1008 14:27:30.809524 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99efd5989e34a9dae3b4f6772f064452cf3a28e5cc8f639f66944e52f5c2f7c7\": container with ID starting with 99efd5989e34a9dae3b4f6772f064452cf3a28e5cc8f639f66944e52f5c2f7c7 not found: ID does not exist" containerID="99efd5989e34a9dae3b4f6772f064452cf3a28e5cc8f639f66944e52f5c2f7c7" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.809561 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99efd5989e34a9dae3b4f6772f064452cf3a28e5cc8f639f66944e52f5c2f7c7"} err="failed to get container status \"99efd5989e34a9dae3b4f6772f064452cf3a28e5cc8f639f66944e52f5c2f7c7\": rpc error: code = NotFound desc = could not find container \"99efd5989e34a9dae3b4f6772f064452cf3a28e5cc8f639f66944e52f5c2f7c7\": container with ID starting with 99efd5989e34a9dae3b4f6772f064452cf3a28e5cc8f639f66944e52f5c2f7c7 not found: ID does not exist" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.809588 4624 scope.go:117] "RemoveContainer" containerID="89713d2099d025df5f4ce48c27336a77c88579334df72ef7397684e9651e2f5b" Oct 08 14:27:30 crc kubenswrapper[4624]: E1008 14:27:30.809928 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89713d2099d025df5f4ce48c27336a77c88579334df72ef7397684e9651e2f5b\": container with ID starting with 89713d2099d025df5f4ce48c27336a77c88579334df72ef7397684e9651e2f5b not found: ID does not exist" containerID="89713d2099d025df5f4ce48c27336a77c88579334df72ef7397684e9651e2f5b" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.809996 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89713d2099d025df5f4ce48c27336a77c88579334df72ef7397684e9651e2f5b"} err="failed to get container status \"89713d2099d025df5f4ce48c27336a77c88579334df72ef7397684e9651e2f5b\": rpc error: code = NotFound desc = could not find container \"89713d2099d025df5f4ce48c27336a77c88579334df72ef7397684e9651e2f5b\": container with ID starting with 89713d2099d025df5f4ce48c27336a77c88579334df72ef7397684e9651e2f5b not found: ID does not exist" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.810021 4624 scope.go:117] "RemoveContainer" containerID="43108ef7aef74ae691aad9aea7b72a7bb1b6cab74956679f58e0f55c63303309" Oct 08 14:27:30 crc kubenswrapper[4624]: E1008 14:27:30.814867 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43108ef7aef74ae691aad9aea7b72a7bb1b6cab74956679f58e0f55c63303309\": container with ID starting with 43108ef7aef74ae691aad9aea7b72a7bb1b6cab74956679f58e0f55c63303309 not found: ID does not exist" containerID="43108ef7aef74ae691aad9aea7b72a7bb1b6cab74956679f58e0f55c63303309" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.814909 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43108ef7aef74ae691aad9aea7b72a7bb1b6cab74956679f58e0f55c63303309"} err="failed to get container status \"43108ef7aef74ae691aad9aea7b72a7bb1b6cab74956679f58e0f55c63303309\": rpc error: code = NotFound desc = could not 
find container \"43108ef7aef74ae691aad9aea7b72a7bb1b6cab74956679f58e0f55c63303309\": container with ID starting with 43108ef7aef74ae691aad9aea7b72a7bb1b6cab74956679f58e0f55c63303309 not found: ID does not exist" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.814934 4624 scope.go:117] "RemoveContainer" containerID="857becc4ab790c5f1a90b273946c23a88c930d2b2e9643589dc4452c0205f8c7" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.815980 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mnjz6"] Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.843437 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mnjz6"] Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.853484 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqf4z\" (UniqueName: \"kubernetes.io/projected/e21258b9-106f-4b15-aa2c-7e65598341c2-kube-api-access-fqf4z\") pod \"e21258b9-106f-4b15-aa2c-7e65598341c2\" (UID: \"e21258b9-106f-4b15-aa2c-7e65598341c2\") " Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.853557 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrfqg\" (UniqueName: \"kubernetes.io/projected/ae5a1b04-50e9-45ad-abb5-4a71d58f2a31-kube-api-access-vrfqg\") pod \"ae5a1b04-50e9-45ad-abb5-4a71d58f2a31\" (UID: \"ae5a1b04-50e9-45ad-abb5-4a71d58f2a31\") " Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.853615 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e21258b9-106f-4b15-aa2c-7e65598341c2-utilities\") pod \"e21258b9-106f-4b15-aa2c-7e65598341c2\" (UID: \"e21258b9-106f-4b15-aa2c-7e65598341c2\") " Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.853696 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e21258b9-106f-4b15-aa2c-7e65598341c2-catalog-content\") pod \"e21258b9-106f-4b15-aa2c-7e65598341c2\" (UID: \"e21258b9-106f-4b15-aa2c-7e65598341c2\") " Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.853822 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ae5a1b04-50e9-45ad-abb5-4a71d58f2a31-marketplace-operator-metrics\") pod \"ae5a1b04-50e9-45ad-abb5-4a71d58f2a31\" (UID: \"ae5a1b04-50e9-45ad-abb5-4a71d58f2a31\") " Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.853847 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae5a1b04-50e9-45ad-abb5-4a71d58f2a31-marketplace-trusted-ca\") pod \"ae5a1b04-50e9-45ad-abb5-4a71d58f2a31\" (UID: \"ae5a1b04-50e9-45ad-abb5-4a71d58f2a31\") " Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.854219 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76cec727-339f-4269-b0ee-d4aec3b0d6e3-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.854238 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtw52\" (UniqueName: \"kubernetes.io/projected/76cec727-339f-4269-b0ee-d4aec3b0d6e3-kube-api-access-dtw52\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.854253 4624 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25349b6-d167-4846-884c-9c057b5c6491-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.854268 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79htk\" (UniqueName: \"kubernetes.io/projected/a25349b6-d167-4846-884c-9c057b5c6491-kube-api-access-79htk\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.854279 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76cec727-339f-4269-b0ee-d4aec3b0d6e3-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.855038 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae5a1b04-50e9-45ad-abb5-4a71d58f2a31-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "ae5a1b04-50e9-45ad-abb5-4a71d58f2a31" (UID: "ae5a1b04-50e9-45ad-abb5-4a71d58f2a31"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.857019 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e21258b9-106f-4b15-aa2c-7e65598341c2-utilities" (OuterVolumeSpecName: "utilities") pod "e21258b9-106f-4b15-aa2c-7e65598341c2" (UID: "e21258b9-106f-4b15-aa2c-7e65598341c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.866887 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25349b6-d167-4846-884c-9c057b5c6491-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a25349b6-d167-4846-884c-9c057b5c6491" (UID: "a25349b6-d167-4846-884c-9c057b5c6491"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.880697 4624 scope.go:117] "RemoveContainer" containerID="f3069e8e9af689137eab6f5b86ee3b5dcbc7ac3078096d37cc6b681f33f4b563" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.896509 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e21258b9-106f-4b15-aa2c-7e65598341c2-kube-api-access-fqf4z" (OuterVolumeSpecName: "kube-api-access-fqf4z") pod "e21258b9-106f-4b15-aa2c-7e65598341c2" (UID: "e21258b9-106f-4b15-aa2c-7e65598341c2"). InnerVolumeSpecName "kube-api-access-fqf4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.903972 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae5a1b04-50e9-45ad-abb5-4a71d58f2a31-kube-api-access-vrfqg" (OuterVolumeSpecName: "kube-api-access-vrfqg") pod "ae5a1b04-50e9-45ad-abb5-4a71d58f2a31" (UID: "ae5a1b04-50e9-45ad-abb5-4a71d58f2a31"). InnerVolumeSpecName "kube-api-access-vrfqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.904428 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae5a1b04-50e9-45ad-abb5-4a71d58f2a31-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "ae5a1b04-50e9-45ad-abb5-4a71d58f2a31" (UID: "ae5a1b04-50e9-45ad-abb5-4a71d58f2a31"). 
InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.920159 4624 scope.go:117] "RemoveContainer" containerID="d6a0b45644eda26abf376e436789f0438402b423bd1c28b1cba7cfa7c6ca2257" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.941920 4624 scope.go:117] "RemoveContainer" containerID="857becc4ab790c5f1a90b273946c23a88c930d2b2e9643589dc4452c0205f8c7" Oct 08 14:27:30 crc kubenswrapper[4624]: E1008 14:27:30.954554 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"857becc4ab790c5f1a90b273946c23a88c930d2b2e9643589dc4452c0205f8c7\": container with ID starting with 857becc4ab790c5f1a90b273946c23a88c930d2b2e9643589dc4452c0205f8c7 not found: ID does not exist" containerID="857becc4ab790c5f1a90b273946c23a88c930d2b2e9643589dc4452c0205f8c7" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.954606 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"857becc4ab790c5f1a90b273946c23a88c930d2b2e9643589dc4452c0205f8c7"} err="failed to get container status \"857becc4ab790c5f1a90b273946c23a88c930d2b2e9643589dc4452c0205f8c7\": rpc error: code = NotFound desc = could not find container \"857becc4ab790c5f1a90b273946c23a88c930d2b2e9643589dc4452c0205f8c7\": container with ID starting with 857becc4ab790c5f1a90b273946c23a88c930d2b2e9643589dc4452c0205f8c7 not found: ID does not exist" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.954649 4624 scope.go:117] "RemoveContainer" containerID="f3069e8e9af689137eab6f5b86ee3b5dcbc7ac3078096d37cc6b681f33f4b563" Oct 08 14:27:30 crc kubenswrapper[4624]: E1008 14:27:30.955246 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3069e8e9af689137eab6f5b86ee3b5dcbc7ac3078096d37cc6b681f33f4b563\": container with ID starting with f3069e8e9af689137eab6f5b86ee3b5dcbc7ac3078096d37cc6b681f33f4b563 not found: ID does not exist" containerID="f3069e8e9af689137eab6f5b86ee3b5dcbc7ac3078096d37cc6b681f33f4b563" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.955378 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3069e8e9af689137eab6f5b86ee3b5dcbc7ac3078096d37cc6b681f33f4b563"} err="failed to get container status \"f3069e8e9af689137eab6f5b86ee3b5dcbc7ac3078096d37cc6b681f33f4b563\": rpc error: code = NotFound desc = could not find container \"f3069e8e9af689137eab6f5b86ee3b5dcbc7ac3078096d37cc6b681f33f4b563\": container with ID starting with f3069e8e9af689137eab6f5b86ee3b5dcbc7ac3078096d37cc6b681f33f4b563 not found: ID does not exist" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.955491 4624 scope.go:117] "RemoveContainer" containerID="d6a0b45644eda26abf376e436789f0438402b423bd1c28b1cba7cfa7c6ca2257" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.955728 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e21258b9-106f-4b15-aa2c-7e65598341c2-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.955760 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25349b6-d167-4846-884c-9c057b5c6491-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.955796 4624 
reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ae5a1b04-50e9-45ad-abb5-4a71d58f2a31-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.955808 4624 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae5a1b04-50e9-45ad-abb5-4a71d58f2a31-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.955816 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqf4z\" (UniqueName: \"kubernetes.io/projected/e21258b9-106f-4b15-aa2c-7e65598341c2-kube-api-access-fqf4z\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.955825 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrfqg\" (UniqueName: \"kubernetes.io/projected/ae5a1b04-50e9-45ad-abb5-4a71d58f2a31-kube-api-access-vrfqg\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:30 crc kubenswrapper[4624]: E1008 14:27:30.956245 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6a0b45644eda26abf376e436789f0438402b423bd1c28b1cba7cfa7c6ca2257\": container with ID starting with d6a0b45644eda26abf376e436789f0438402b423bd1c28b1cba7cfa7c6ca2257 not found: ID does not exist" containerID="d6a0b45644eda26abf376e436789f0438402b423bd1c28b1cba7cfa7c6ca2257" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.956341 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6a0b45644eda26abf376e436789f0438402b423bd1c28b1cba7cfa7c6ca2257"} err="failed to get container status \"d6a0b45644eda26abf376e436789f0438402b423bd1c28b1cba7cfa7c6ca2257\": rpc error: code = NotFound desc = could not find container \"d6a0b45644eda26abf376e436789f0438402b423bd1c28b1cba7cfa7c6ca2257\": container with ID starting with d6a0b45644eda26abf376e436789f0438402b423bd1c28b1cba7cfa7c6ca2257 not found: ID does not exist" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.956437 4624 scope.go:117] "RemoveContainer" containerID="5cb57667a0c497f3babb4f09d750db93dab0d1fefc7355aec543824d85d9329a" Oct 08 14:27:30 crc kubenswrapper[4624]: I1008 14:27:30.984793 4624 scope.go:117] "RemoveContainer" containerID="7f6422f13f60736704c9faa9db148924fbe216d9bd09ebbbe224b817830d6d20" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.015879 4624 scope.go:117] "RemoveContainer" containerID="134135c13aee222daf6649a7cec24fa2aacd2129947193dbe31112f637d6ee1a" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.022879 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cwbr6"] Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.044768 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wt4rs"] Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.050403 4624 scope.go:117] "RemoveContainer" containerID="5cb57667a0c497f3babb4f09d750db93dab0d1fefc7355aec543824d85d9329a" Oct 08 14:27:31 crc kubenswrapper[4624]: E1008 14:27:31.051451 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cb57667a0c497f3babb4f09d750db93dab0d1fefc7355aec543824d85d9329a\": container with ID starting with 
5cb57667a0c497f3babb4f09d750db93dab0d1fefc7355aec543824d85d9329a not found: ID does not exist" containerID="5cb57667a0c497f3babb4f09d750db93dab0d1fefc7355aec543824d85d9329a" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.051563 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cb57667a0c497f3babb4f09d750db93dab0d1fefc7355aec543824d85d9329a"} err="failed to get container status \"5cb57667a0c497f3babb4f09d750db93dab0d1fefc7355aec543824d85d9329a\": rpc error: code = NotFound desc = could not find container \"5cb57667a0c497f3babb4f09d750db93dab0d1fefc7355aec543824d85d9329a\": container with ID starting with 5cb57667a0c497f3babb4f09d750db93dab0d1fefc7355aec543824d85d9329a not found: ID does not exist" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.051703 4624 scope.go:117] "RemoveContainer" containerID="7f6422f13f60736704c9faa9db148924fbe216d9bd09ebbbe224b817830d6d20" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.055556 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wt4rs"] Oct 08 14:27:31 crc kubenswrapper[4624]: E1008 14:27:31.058900 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f6422f13f60736704c9faa9db148924fbe216d9bd09ebbbe224b817830d6d20\": container with ID starting with 7f6422f13f60736704c9faa9db148924fbe216d9bd09ebbbe224b817830d6d20 not found: ID does not exist" containerID="7f6422f13f60736704c9faa9db148924fbe216d9bd09ebbbe224b817830d6d20" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.059177 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f6422f13f60736704c9faa9db148924fbe216d9bd09ebbbe224b817830d6d20"} err="failed to get container status \"7f6422f13f60736704c9faa9db148924fbe216d9bd09ebbbe224b817830d6d20\": rpc error: code = NotFound desc = could not find container \"7f6422f13f60736704c9faa9db148924fbe216d9bd09ebbbe224b817830d6d20\": container with ID starting with 7f6422f13f60736704c9faa9db148924fbe216d9bd09ebbbe224b817830d6d20 not found: ID does not exist" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.059264 4624 scope.go:117] "RemoveContainer" containerID="134135c13aee222daf6649a7cec24fa2aacd2129947193dbe31112f637d6ee1a" Oct 08 14:27:31 crc kubenswrapper[4624]: E1008 14:27:31.059722 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"134135c13aee222daf6649a7cec24fa2aacd2129947193dbe31112f637d6ee1a\": container with ID starting with 134135c13aee222daf6649a7cec24fa2aacd2129947193dbe31112f637d6ee1a not found: ID does not exist" containerID="134135c13aee222daf6649a7cec24fa2aacd2129947193dbe31112f637d6ee1a" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.059823 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"134135c13aee222daf6649a7cec24fa2aacd2129947193dbe31112f637d6ee1a"} err="failed to get container status \"134135c13aee222daf6649a7cec24fa2aacd2129947193dbe31112f637d6ee1a\": rpc error: code = NotFound desc = could not find container \"134135c13aee222daf6649a7cec24fa2aacd2129947193dbe31112f637d6ee1a\": container with ID starting with 134135c13aee222daf6649a7cec24fa2aacd2129947193dbe31112f637d6ee1a not found: ID does not exist" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.059920 4624 scope.go:117] "RemoveContainer" 
containerID="fb3f7b70848b0f580520d456475cf7cd9559a0de6dce62cd54e624f314ad0444" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.075789 4624 scope.go:117] "RemoveContainer" containerID="fb3f7b70848b0f580520d456475cf7cd9559a0de6dce62cd54e624f314ad0444" Oct 08 14:27:31 crc kubenswrapper[4624]: E1008 14:27:31.076076 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb3f7b70848b0f580520d456475cf7cd9559a0de6dce62cd54e624f314ad0444\": container with ID starting with fb3f7b70848b0f580520d456475cf7cd9559a0de6dce62cd54e624f314ad0444 not found: ID does not exist" containerID="fb3f7b70848b0f580520d456475cf7cd9559a0de6dce62cd54e624f314ad0444" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.076107 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb3f7b70848b0f580520d456475cf7cd9559a0de6dce62cd54e624f314ad0444"} err="failed to get container status \"fb3f7b70848b0f580520d456475cf7cd9559a0de6dce62cd54e624f314ad0444\": rpc error: code = NotFound desc = could not find container \"fb3f7b70848b0f580520d456475cf7cd9559a0de6dce62cd54e624f314ad0444\": container with ID starting with fb3f7b70848b0f580520d456475cf7cd9559a0de6dce62cd54e624f314ad0444 not found: ID does not exist" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.085244 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e21258b9-106f-4b15-aa2c-7e65598341c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e21258b9-106f-4b15-aa2c-7e65598341c2" (UID: "e21258b9-106f-4b15-aa2c-7e65598341c2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.094543 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x6fbb"] Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.100053 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x6fbb"] Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.159475 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e21258b9-106f-4b15-aa2c-7e65598341c2-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.472231 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78f43ab9-6c12-4d67-9c33-b186ebcef93c" path="/var/lib/kubelet/pods/78f43ab9-6c12-4d67-9c33-b186ebcef93c/volumes" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.472975 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a25349b6-d167-4846-884c-9c057b5c6491" path="/var/lib/kubelet/pods/a25349b6-d167-4846-884c-9c057b5c6491/volumes" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.473707 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae5a1b04-50e9-45ad-abb5-4a71d58f2a31" path="/var/lib/kubelet/pods/ae5a1b04-50e9-45ad-abb5-4a71d58f2a31/volumes" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.667578 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dk74n"] Oct 08 14:27:31 crc kubenswrapper[4624]: E1008 14:27:31.668118 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25349b6-d167-4846-884c-9c057b5c6491" containerName="registry-server" Oct 08 14:27:31 crc 
kubenswrapper[4624]: I1008 14:27:31.668133 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25349b6-d167-4846-884c-9c057b5c6491" containerName="registry-server" Oct 08 14:27:31 crc kubenswrapper[4624]: E1008 14:27:31.668143 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e21258b9-106f-4b15-aa2c-7e65598341c2" containerName="extract-utilities" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.668150 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e21258b9-106f-4b15-aa2c-7e65598341c2" containerName="extract-utilities" Oct 08 14:27:31 crc kubenswrapper[4624]: E1008 14:27:31.668157 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76cec727-339f-4269-b0ee-d4aec3b0d6e3" containerName="extract-content" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.668183 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="76cec727-339f-4269-b0ee-d4aec3b0d6e3" containerName="extract-content" Oct 08 14:27:31 crc kubenswrapper[4624]: E1008 14:27:31.668199 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76cec727-339f-4269-b0ee-d4aec3b0d6e3" containerName="registry-server" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.668208 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="76cec727-339f-4269-b0ee-d4aec3b0d6e3" containerName="registry-server" Oct 08 14:27:31 crc kubenswrapper[4624]: E1008 14:27:31.668217 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25349b6-d167-4846-884c-9c057b5c6491" containerName="extract-content" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.668224 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25349b6-d167-4846-884c-9c057b5c6491" containerName="extract-content" Oct 08 14:27:31 crc kubenswrapper[4624]: E1008 14:27:31.668237 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e21258b9-106f-4b15-aa2c-7e65598341c2" containerName="registry-server" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.668245 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e21258b9-106f-4b15-aa2c-7e65598341c2" containerName="registry-server" Oct 08 14:27:31 crc kubenswrapper[4624]: E1008 14:27:31.668256 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25349b6-d167-4846-884c-9c057b5c6491" containerName="extract-utilities" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.668264 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25349b6-d167-4846-884c-9c057b5c6491" containerName="extract-utilities" Oct 08 14:27:31 crc kubenswrapper[4624]: E1008 14:27:31.668273 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78f43ab9-6c12-4d67-9c33-b186ebcef93c" containerName="extract-utilities" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.668281 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="78f43ab9-6c12-4d67-9c33-b186ebcef93c" containerName="extract-utilities" Oct 08 14:27:31 crc kubenswrapper[4624]: E1008 14:27:31.668293 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e21258b9-106f-4b15-aa2c-7e65598341c2" containerName="extract-content" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.668300 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e21258b9-106f-4b15-aa2c-7e65598341c2" containerName="extract-content" Oct 08 14:27:31 crc kubenswrapper[4624]: E1008 14:27:31.668315 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78f43ab9-6c12-4d67-9c33-b186ebcef93c" 
containerName="registry-server" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.668322 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="78f43ab9-6c12-4d67-9c33-b186ebcef93c" containerName="registry-server" Oct 08 14:27:31 crc kubenswrapper[4624]: E1008 14:27:31.668333 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76cec727-339f-4269-b0ee-d4aec3b0d6e3" containerName="extract-utilities" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.668340 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="76cec727-339f-4269-b0ee-d4aec3b0d6e3" containerName="extract-utilities" Oct 08 14:27:31 crc kubenswrapper[4624]: E1008 14:27:31.668348 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae5a1b04-50e9-45ad-abb5-4a71d58f2a31" containerName="marketplace-operator" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.668355 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae5a1b04-50e9-45ad-abb5-4a71d58f2a31" containerName="marketplace-operator" Oct 08 14:27:31 crc kubenswrapper[4624]: E1008 14:27:31.668364 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78f43ab9-6c12-4d67-9c33-b186ebcef93c" containerName="extract-content" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.668371 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="78f43ab9-6c12-4d67-9c33-b186ebcef93c" containerName="extract-content" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.668495 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e21258b9-106f-4b15-aa2c-7e65598341c2" containerName="registry-server" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.668510 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="76cec727-339f-4269-b0ee-d4aec3b0d6e3" containerName="registry-server" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.668519 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25349b6-d167-4846-884c-9c057b5c6491" containerName="registry-server" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.668532 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="78f43ab9-6c12-4d67-9c33-b186ebcef93c" containerName="registry-server" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.668541 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae5a1b04-50e9-45ad-abb5-4a71d58f2a31" containerName="marketplace-operator" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.669247 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dk74n" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.679237 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.724523 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dk74n"] Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.761985 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mwws" event={"ID":"e21258b9-106f-4b15-aa2c-7e65598341c2","Type":"ContainerDied","Data":"7cb2831375c421f121c7940bd6a4d3cad38483d05aaa8707b0f1913c86ceb8d8"} Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.762044 4624 scope.go:117] "RemoveContainer" containerID="4250685d2388500ae6c7cb5a736021d82d74c900b0e112026edaf6d10c99182d" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.761999 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7mwws" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.765519 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72543833-c00b-453e-af36-5b6f32dd3d71-catalog-content\") pod \"certified-operators-dk74n\" (UID: \"72543833-c00b-453e-af36-5b6f32dd3d71\") " pod="openshift-marketplace/certified-operators-dk74n" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.765591 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72543833-c00b-453e-af36-5b6f32dd3d71-utilities\") pod \"certified-operators-dk74n\" (UID: \"72543833-c00b-453e-af36-5b6f32dd3d71\") " pod="openshift-marketplace/certified-operators-dk74n" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.765614 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h472k\" (UniqueName: \"kubernetes.io/projected/72543833-c00b-453e-af36-5b6f32dd3d71-kube-api-access-h472k\") pod \"certified-operators-dk74n\" (UID: \"72543833-c00b-453e-af36-5b6f32dd3d71\") " pod="openshift-marketplace/certified-operators-dk74n" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.766489 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5npx6" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.774459 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cwbr6" event={"ID":"348b94bb-ae57-4f52-8592-53abc49b97d0","Type":"ContainerStarted","Data":"0886f26eca79cbf2359bf7e1c37dcfbfa3335b2f915d66c327aad894af804bb1"} Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.774506 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cwbr6" event={"ID":"348b94bb-ae57-4f52-8592-53abc49b97d0","Type":"ContainerStarted","Data":"0c835d7da841565400b29c8e34ffe3c217a58a2c6945410d3a915e79912ba3ff"} Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.776411 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cwbr6" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.776597 4624 scope.go:117] "RemoveContainer" containerID="2d8293d83c301a9b80190af6cf1c78fcadc2b004216bece85fe910aebc555a58" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.780399 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cwbr6" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.795800 4624 scope.go:117] "RemoveContainer" containerID="da8f4f81c3b5e5e6e774bbceb0c03c560db3b333aed6b8ac5b3a67be7dbc7b06" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.795987 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7mwws"] Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.805006 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7mwws"] Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.820913 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5npx6"] Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.827420 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5npx6"] Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.832165 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cwbr6" podStartSLOduration=1.832150784 podStartE2EDuration="1.832150784s" podCreationTimestamp="2025-10-08 14:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:27:31.829292164 +0000 UTC m=+276.980227231" watchObservedRunningTime="2025-10-08 14:27:31.832150784 +0000 UTC m=+276.983085861" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.867217 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72543833-c00b-453e-af36-5b6f32dd3d71-utilities\") pod \"certified-operators-dk74n\" (UID: \"72543833-c00b-453e-af36-5b6f32dd3d71\") " pod="openshift-marketplace/certified-operators-dk74n" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.867265 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h472k\" (UniqueName: \"kubernetes.io/projected/72543833-c00b-453e-af36-5b6f32dd3d71-kube-api-access-h472k\") pod \"certified-operators-dk74n\" (UID: \"72543833-c00b-453e-af36-5b6f32dd3d71\") " 
pod="openshift-marketplace/certified-operators-dk74n" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.867330 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72543833-c00b-453e-af36-5b6f32dd3d71-catalog-content\") pod \"certified-operators-dk74n\" (UID: \"72543833-c00b-453e-af36-5b6f32dd3d71\") " pod="openshift-marketplace/certified-operators-dk74n" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.868469 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72543833-c00b-453e-af36-5b6f32dd3d71-utilities\") pod \"certified-operators-dk74n\" (UID: \"72543833-c00b-453e-af36-5b6f32dd3d71\") " pod="openshift-marketplace/certified-operators-dk74n" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.868538 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72543833-c00b-453e-af36-5b6f32dd3d71-catalog-content\") pod \"certified-operators-dk74n\" (UID: \"72543833-c00b-453e-af36-5b6f32dd3d71\") " pod="openshift-marketplace/certified-operators-dk74n" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.900742 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h472k\" (UniqueName: \"kubernetes.io/projected/72543833-c00b-453e-af36-5b6f32dd3d71-kube-api-access-h472k\") pod \"certified-operators-dk74n\" (UID: \"72543833-c00b-453e-af36-5b6f32dd3d71\") " pod="openshift-marketplace/certified-operators-dk74n" Oct 08 14:27:31 crc kubenswrapper[4624]: I1008 14:27:31.981922 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dk74n" Oct 08 14:27:32 crc kubenswrapper[4624]: I1008 14:27:32.413388 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dk74n"] Oct 08 14:27:32 crc kubenswrapper[4624]: I1008 14:27:32.781196 4624 generic.go:334] "Generic (PLEG): container finished" podID="72543833-c00b-453e-af36-5b6f32dd3d71" containerID="2d2aa07baeae2c8471911d86cd6502b78448a52143437a1958e9d93b210b8b16" exitCode=0 Oct 08 14:27:32 crc kubenswrapper[4624]: I1008 14:27:32.781347 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk74n" event={"ID":"72543833-c00b-453e-af36-5b6f32dd3d71","Type":"ContainerDied","Data":"2d2aa07baeae2c8471911d86cd6502b78448a52143437a1958e9d93b210b8b16"} Oct 08 14:27:32 crc kubenswrapper[4624]: I1008 14:27:32.781557 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk74n" event={"ID":"72543833-c00b-453e-af36-5b6f32dd3d71","Type":"ContainerStarted","Data":"0d82c634e5e9c8f711f4ab7b337a8aa018dd9e0d006dcbed9af3ff08e6e184a2"} Oct 08 14:27:33 crc kubenswrapper[4624]: I1008 14:27:33.058715 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ld57t"] Oct 08 14:27:33 crc kubenswrapper[4624]: I1008 14:27:33.062190 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ld57t" Oct 08 14:27:33 crc kubenswrapper[4624]: I1008 14:27:33.066492 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 08 14:27:33 crc kubenswrapper[4624]: I1008 14:27:33.069775 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ld57t"] Oct 08 14:27:33 crc kubenswrapper[4624]: I1008 14:27:33.191627 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b07dc5b-db1a-4f6b-be6c-660723b543b3-utilities\") pod \"redhat-marketplace-ld57t\" (UID: \"4b07dc5b-db1a-4f6b-be6c-660723b543b3\") " pod="openshift-marketplace/redhat-marketplace-ld57t" Oct 08 14:27:33 crc kubenswrapper[4624]: I1008 14:27:33.191699 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b58nx\" (UniqueName: \"kubernetes.io/projected/4b07dc5b-db1a-4f6b-be6c-660723b543b3-kube-api-access-b58nx\") pod \"redhat-marketplace-ld57t\" (UID: \"4b07dc5b-db1a-4f6b-be6c-660723b543b3\") " pod="openshift-marketplace/redhat-marketplace-ld57t" Oct 08 14:27:33 crc kubenswrapper[4624]: I1008 14:27:33.191742 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b07dc5b-db1a-4f6b-be6c-660723b543b3-catalog-content\") pod \"redhat-marketplace-ld57t\" (UID: \"4b07dc5b-db1a-4f6b-be6c-660723b543b3\") " pod="openshift-marketplace/redhat-marketplace-ld57t" Oct 08 14:27:33 crc kubenswrapper[4624]: I1008 14:27:33.293448 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b07dc5b-db1a-4f6b-be6c-660723b543b3-catalog-content\") pod \"redhat-marketplace-ld57t\" (UID: \"4b07dc5b-db1a-4f6b-be6c-660723b543b3\") " pod="openshift-marketplace/redhat-marketplace-ld57t" Oct 08 14:27:33 crc kubenswrapper[4624]: I1008 14:27:33.293537 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b07dc5b-db1a-4f6b-be6c-660723b543b3-utilities\") pod \"redhat-marketplace-ld57t\" (UID: \"4b07dc5b-db1a-4f6b-be6c-660723b543b3\") " pod="openshift-marketplace/redhat-marketplace-ld57t" Oct 08 14:27:33 crc kubenswrapper[4624]: I1008 14:27:33.293590 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b58nx\" (UniqueName: \"kubernetes.io/projected/4b07dc5b-db1a-4f6b-be6c-660723b543b3-kube-api-access-b58nx\") pod \"redhat-marketplace-ld57t\" (UID: \"4b07dc5b-db1a-4f6b-be6c-660723b543b3\") " pod="openshift-marketplace/redhat-marketplace-ld57t" Oct 08 14:27:33 crc kubenswrapper[4624]: I1008 14:27:33.294159 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b07dc5b-db1a-4f6b-be6c-660723b543b3-catalog-content\") pod \"redhat-marketplace-ld57t\" (UID: \"4b07dc5b-db1a-4f6b-be6c-660723b543b3\") " pod="openshift-marketplace/redhat-marketplace-ld57t" Oct 08 14:27:33 crc kubenswrapper[4624]: I1008 14:27:33.294228 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b07dc5b-db1a-4f6b-be6c-660723b543b3-utilities\") pod \"redhat-marketplace-ld57t\" (UID: 
\"4b07dc5b-db1a-4f6b-be6c-660723b543b3\") " pod="openshift-marketplace/redhat-marketplace-ld57t" Oct 08 14:27:33 crc kubenswrapper[4624]: I1008 14:27:33.318835 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b58nx\" (UniqueName: \"kubernetes.io/projected/4b07dc5b-db1a-4f6b-be6c-660723b543b3-kube-api-access-b58nx\") pod \"redhat-marketplace-ld57t\" (UID: \"4b07dc5b-db1a-4f6b-be6c-660723b543b3\") " pod="openshift-marketplace/redhat-marketplace-ld57t" Oct 08 14:27:33 crc kubenswrapper[4624]: I1008 14:27:33.392831 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ld57t" Oct 08 14:27:33 crc kubenswrapper[4624]: I1008 14:27:33.486289 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76cec727-339f-4269-b0ee-d4aec3b0d6e3" path="/var/lib/kubelet/pods/76cec727-339f-4269-b0ee-d4aec3b0d6e3/volumes" Oct 08 14:27:33 crc kubenswrapper[4624]: I1008 14:27:33.487082 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e21258b9-106f-4b15-aa2c-7e65598341c2" path="/var/lib/kubelet/pods/e21258b9-106f-4b15-aa2c-7e65598341c2/volumes" Oct 08 14:27:33 crc kubenswrapper[4624]: I1008 14:27:33.793360 4624 generic.go:334] "Generic (PLEG): container finished" podID="72543833-c00b-453e-af36-5b6f32dd3d71" containerID="5d1a4fa65a19032840f8e66c07dfaeaa835a030e38cff2590af6c2f9a97bd79e" exitCode=0 Oct 08 14:27:33 crc kubenswrapper[4624]: I1008 14:27:33.793441 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk74n" event={"ID":"72543833-c00b-453e-af36-5b6f32dd3d71","Type":"ContainerDied","Data":"5d1a4fa65a19032840f8e66c07dfaeaa835a030e38cff2590af6c2f9a97bd79e"} Oct 08 14:27:33 crc kubenswrapper[4624]: I1008 14:27:33.807049 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ld57t"] Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.068891 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n7zcs"] Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.073143 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n7zcs" Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.074030 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n7zcs"] Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.076299 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.205528 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsdhf\" (UniqueName: \"kubernetes.io/projected/8c4b2af3-b2f0-4f10-9b2b-81a83483be29-kube-api-access-nsdhf\") pod \"redhat-operators-n7zcs\" (UID: \"8c4b2af3-b2f0-4f10-9b2b-81a83483be29\") " pod="openshift-marketplace/redhat-operators-n7zcs" Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.205584 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c4b2af3-b2f0-4f10-9b2b-81a83483be29-utilities\") pod \"redhat-operators-n7zcs\" (UID: \"8c4b2af3-b2f0-4f10-9b2b-81a83483be29\") " pod="openshift-marketplace/redhat-operators-n7zcs" Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.205859 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c4b2af3-b2f0-4f10-9b2b-81a83483be29-catalog-content\") pod \"redhat-operators-n7zcs\" (UID: \"8c4b2af3-b2f0-4f10-9b2b-81a83483be29\") " pod="openshift-marketplace/redhat-operators-n7zcs" Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.308227 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsdhf\" (UniqueName: \"kubernetes.io/projected/8c4b2af3-b2f0-4f10-9b2b-81a83483be29-kube-api-access-nsdhf\") pod \"redhat-operators-n7zcs\" (UID: \"8c4b2af3-b2f0-4f10-9b2b-81a83483be29\") " pod="openshift-marketplace/redhat-operators-n7zcs" Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.308280 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c4b2af3-b2f0-4f10-9b2b-81a83483be29-utilities\") pod \"redhat-operators-n7zcs\" (UID: \"8c4b2af3-b2f0-4f10-9b2b-81a83483be29\") " pod="openshift-marketplace/redhat-operators-n7zcs" Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.308325 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c4b2af3-b2f0-4f10-9b2b-81a83483be29-catalog-content\") pod \"redhat-operators-n7zcs\" (UID: \"8c4b2af3-b2f0-4f10-9b2b-81a83483be29\") " pod="openshift-marketplace/redhat-operators-n7zcs" Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.309333 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c4b2af3-b2f0-4f10-9b2b-81a83483be29-utilities\") pod \"redhat-operators-n7zcs\" (UID: \"8c4b2af3-b2f0-4f10-9b2b-81a83483be29\") " pod="openshift-marketplace/redhat-operators-n7zcs" Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.309402 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c4b2af3-b2f0-4f10-9b2b-81a83483be29-catalog-content\") pod \"redhat-operators-n7zcs\" (UID: \"8c4b2af3-b2f0-4f10-9b2b-81a83483be29\") " 
pod="openshift-marketplace/redhat-operators-n7zcs" Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.334830 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsdhf\" (UniqueName: \"kubernetes.io/projected/8c4b2af3-b2f0-4f10-9b2b-81a83483be29-kube-api-access-nsdhf\") pod \"redhat-operators-n7zcs\" (UID: \"8c4b2af3-b2f0-4f10-9b2b-81a83483be29\") " pod="openshift-marketplace/redhat-operators-n7zcs" Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.388941 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n7zcs" Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.625120 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n7zcs"] Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.801017 4624 generic.go:334] "Generic (PLEG): container finished" podID="8c4b2af3-b2f0-4f10-9b2b-81a83483be29" containerID="ff37c149115221e7e61559a78bd9123907c70df34efcfe7047214d49ff62b637" exitCode=0 Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.801139 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7zcs" event={"ID":"8c4b2af3-b2f0-4f10-9b2b-81a83483be29","Type":"ContainerDied","Data":"ff37c149115221e7e61559a78bd9123907c70df34efcfe7047214d49ff62b637"} Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.801178 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7zcs" event={"ID":"8c4b2af3-b2f0-4f10-9b2b-81a83483be29","Type":"ContainerStarted","Data":"5e4a621ca1321e7797ff37678a7fffba23798228d01822eee1f073c15ee32363"} Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.805288 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk74n" event={"ID":"72543833-c00b-453e-af36-5b6f32dd3d71","Type":"ContainerStarted","Data":"d6484a26988c13526057ca408a2c1131b7093175f88a851f4840528cd28e59a8"} Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.808456 4624 generic.go:334] "Generic (PLEG): container finished" podID="4b07dc5b-db1a-4f6b-be6c-660723b543b3" containerID="ca26a71f835ed01ba6f164b787ac1322295e9a23d61c524ada0eb96ddb97b97a" exitCode=0 Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.808514 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ld57t" event={"ID":"4b07dc5b-db1a-4f6b-be6c-660723b543b3","Type":"ContainerDied","Data":"ca26a71f835ed01ba6f164b787ac1322295e9a23d61c524ada0eb96ddb97b97a"} Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.808539 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ld57t" event={"ID":"4b07dc5b-db1a-4f6b-be6c-660723b543b3","Type":"ContainerStarted","Data":"20dbcfabb5dbd4819a3500064900ff4abd3095e8430d983e048fa5a699cd464f"} Oct 08 14:27:34 crc kubenswrapper[4624]: I1008 14:27:34.849345 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dk74n" podStartSLOduration=2.3298026099999998 podStartE2EDuration="3.849328447s" podCreationTimestamp="2025-10-08 14:27:31 +0000 UTC" firstStartedPulling="2025-10-08 14:27:32.782874426 +0000 UTC m=+277.933809503" lastFinishedPulling="2025-10-08 14:27:34.302400263 +0000 UTC m=+279.453335340" observedRunningTime="2025-10-08 14:27:34.845520891 +0000 UTC m=+279.996455968" watchObservedRunningTime="2025-10-08 14:27:34.849328447 +0000 UTC 
m=+280.000263524" Oct 08 14:27:35 crc kubenswrapper[4624]: I1008 14:27:35.463396 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s82n5"] Oct 08 14:27:35 crc kubenswrapper[4624]: I1008 14:27:35.465705 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s82n5" Oct 08 14:27:35 crc kubenswrapper[4624]: I1008 14:27:35.471320 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 08 14:27:35 crc kubenswrapper[4624]: I1008 14:27:35.478048 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s82n5"] Oct 08 14:27:35 crc kubenswrapper[4624]: I1008 14:27:35.529516 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6784a613-240f-4d11-9b3c-bf9f99a646e6-catalog-content\") pod \"community-operators-s82n5\" (UID: \"6784a613-240f-4d11-9b3c-bf9f99a646e6\") " pod="openshift-marketplace/community-operators-s82n5" Oct 08 14:27:35 crc kubenswrapper[4624]: I1008 14:27:35.529578 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpn8l\" (UniqueName: \"kubernetes.io/projected/6784a613-240f-4d11-9b3c-bf9f99a646e6-kube-api-access-hpn8l\") pod \"community-operators-s82n5\" (UID: \"6784a613-240f-4d11-9b3c-bf9f99a646e6\") " pod="openshift-marketplace/community-operators-s82n5" Oct 08 14:27:35 crc kubenswrapper[4624]: I1008 14:27:35.529622 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6784a613-240f-4d11-9b3c-bf9f99a646e6-utilities\") pod \"community-operators-s82n5\" (UID: \"6784a613-240f-4d11-9b3c-bf9f99a646e6\") " pod="openshift-marketplace/community-operators-s82n5" Oct 08 14:27:35 crc kubenswrapper[4624]: I1008 14:27:35.630652 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6784a613-240f-4d11-9b3c-bf9f99a646e6-catalog-content\") pod \"community-operators-s82n5\" (UID: \"6784a613-240f-4d11-9b3c-bf9f99a646e6\") " pod="openshift-marketplace/community-operators-s82n5" Oct 08 14:27:35 crc kubenswrapper[4624]: I1008 14:27:35.630710 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpn8l\" (UniqueName: \"kubernetes.io/projected/6784a613-240f-4d11-9b3c-bf9f99a646e6-kube-api-access-hpn8l\") pod \"community-operators-s82n5\" (UID: \"6784a613-240f-4d11-9b3c-bf9f99a646e6\") " pod="openshift-marketplace/community-operators-s82n5" Oct 08 14:27:35 crc kubenswrapper[4624]: I1008 14:27:35.630754 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6784a613-240f-4d11-9b3c-bf9f99a646e6-utilities\") pod \"community-operators-s82n5\" (UID: \"6784a613-240f-4d11-9b3c-bf9f99a646e6\") " pod="openshift-marketplace/community-operators-s82n5" Oct 08 14:27:35 crc kubenswrapper[4624]: I1008 14:27:35.631207 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6784a613-240f-4d11-9b3c-bf9f99a646e6-catalog-content\") pod \"community-operators-s82n5\" (UID: \"6784a613-240f-4d11-9b3c-bf9f99a646e6\") " 
pod="openshift-marketplace/community-operators-s82n5" Oct 08 14:27:35 crc kubenswrapper[4624]: I1008 14:27:35.631236 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6784a613-240f-4d11-9b3c-bf9f99a646e6-utilities\") pod \"community-operators-s82n5\" (UID: \"6784a613-240f-4d11-9b3c-bf9f99a646e6\") " pod="openshift-marketplace/community-operators-s82n5" Oct 08 14:27:35 crc kubenswrapper[4624]: I1008 14:27:35.659782 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpn8l\" (UniqueName: \"kubernetes.io/projected/6784a613-240f-4d11-9b3c-bf9f99a646e6-kube-api-access-hpn8l\") pod \"community-operators-s82n5\" (UID: \"6784a613-240f-4d11-9b3c-bf9f99a646e6\") " pod="openshift-marketplace/community-operators-s82n5" Oct 08 14:27:35 crc kubenswrapper[4624]: I1008 14:27:35.788729 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s82n5" Oct 08 14:27:36 crc kubenswrapper[4624]: I1008 14:27:36.055271 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s82n5"] Oct 08 14:27:36 crc kubenswrapper[4624]: W1008 14:27:36.064003 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6784a613_240f_4d11_9b3c_bf9f99a646e6.slice/crio-7daa6273bd66a7655d09139bda245858b9a62b47db01ad78903c273f88104ba5 WatchSource:0}: Error finding container 7daa6273bd66a7655d09139bda245858b9a62b47db01ad78903c273f88104ba5: Status 404 returned error can't find the container with id 7daa6273bd66a7655d09139bda245858b9a62b47db01ad78903c273f88104ba5 Oct 08 14:27:36 crc kubenswrapper[4624]: I1008 14:27:36.820880 4624 generic.go:334] "Generic (PLEG): container finished" podID="4b07dc5b-db1a-4f6b-be6c-660723b543b3" containerID="cdcfbcd9746b6c26898d1ce3a1876022b70300d2654a9ad89bb5d4e8212b1f75" exitCode=0 Oct 08 14:27:36 crc kubenswrapper[4624]: I1008 14:27:36.821070 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ld57t" event={"ID":"4b07dc5b-db1a-4f6b-be6c-660723b543b3","Type":"ContainerDied","Data":"cdcfbcd9746b6c26898d1ce3a1876022b70300d2654a9ad89bb5d4e8212b1f75"} Oct 08 14:27:36 crc kubenswrapper[4624]: I1008 14:27:36.825029 4624 generic.go:334] "Generic (PLEG): container finished" podID="8c4b2af3-b2f0-4f10-9b2b-81a83483be29" containerID="a3d9c8ac3cd0297755cf953eab1c74cc923d907c4070a38cd1455aacfae0ac81" exitCode=0 Oct 08 14:27:36 crc kubenswrapper[4624]: I1008 14:27:36.825096 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7zcs" event={"ID":"8c4b2af3-b2f0-4f10-9b2b-81a83483be29","Type":"ContainerDied","Data":"a3d9c8ac3cd0297755cf953eab1c74cc923d907c4070a38cd1455aacfae0ac81"} Oct 08 14:27:36 crc kubenswrapper[4624]: I1008 14:27:36.833472 4624 generic.go:334] "Generic (PLEG): container finished" podID="6784a613-240f-4d11-9b3c-bf9f99a646e6" containerID="d8c7297ac3c3a4066f3b42cf445079321f3fcc158b69bf1c14c214653d9f2fb5" exitCode=0 Oct 08 14:27:36 crc kubenswrapper[4624]: I1008 14:27:36.833511 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s82n5" event={"ID":"6784a613-240f-4d11-9b3c-bf9f99a646e6","Type":"ContainerDied","Data":"d8c7297ac3c3a4066f3b42cf445079321f3fcc158b69bf1c14c214653d9f2fb5"} Oct 08 14:27:36 crc kubenswrapper[4624]: I1008 14:27:36.833535 4624 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s82n5" event={"ID":"6784a613-240f-4d11-9b3c-bf9f99a646e6","Type":"ContainerStarted","Data":"7daa6273bd66a7655d09139bda245858b9a62b47db01ad78903c273f88104ba5"} Oct 08 14:27:37 crc kubenswrapper[4624]: I1008 14:27:37.840819 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s82n5" event={"ID":"6784a613-240f-4d11-9b3c-bf9f99a646e6","Type":"ContainerStarted","Data":"72f6c43a487ba415858cbfb52ed91f9d4d168dd4c56a4419d00e348cab16469b"} Oct 08 14:27:37 crc kubenswrapper[4624]: I1008 14:27:37.843349 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ld57t" event={"ID":"4b07dc5b-db1a-4f6b-be6c-660723b543b3","Type":"ContainerStarted","Data":"bade52a46ad664157506c5974f802ad184e52ab45fb5ab41b7f2cf596c9543e0"} Oct 08 14:27:37 crc kubenswrapper[4624]: I1008 14:27:37.846480 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7zcs" event={"ID":"8c4b2af3-b2f0-4f10-9b2b-81a83483be29","Type":"ContainerStarted","Data":"4c96072e6ea6f37cc5437ee8e26a61b1e6b8b01d06dfd74ac2c490d5ae61f28f"} Oct 08 14:27:37 crc kubenswrapper[4624]: I1008 14:27:37.881185 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ld57t" podStartSLOduration=2.445956488 podStartE2EDuration="4.881166319s" podCreationTimestamp="2025-10-08 14:27:33 +0000 UTC" firstStartedPulling="2025-10-08 14:27:34.810048401 +0000 UTC m=+279.960983478" lastFinishedPulling="2025-10-08 14:27:37.245258232 +0000 UTC m=+282.396193309" observedRunningTime="2025-10-08 14:27:37.878504105 +0000 UTC m=+283.029439192" watchObservedRunningTime="2025-10-08 14:27:37.881166319 +0000 UTC m=+283.032101396" Oct 08 14:27:38 crc kubenswrapper[4624]: I1008 14:27:38.855708 4624 generic.go:334] "Generic (PLEG): container finished" podID="6784a613-240f-4d11-9b3c-bf9f99a646e6" containerID="72f6c43a487ba415858cbfb52ed91f9d4d168dd4c56a4419d00e348cab16469b" exitCode=0 Oct 08 14:27:38 crc kubenswrapper[4624]: I1008 14:27:38.858421 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s82n5" event={"ID":"6784a613-240f-4d11-9b3c-bf9f99a646e6","Type":"ContainerDied","Data":"72f6c43a487ba415858cbfb52ed91f9d4d168dd4c56a4419d00e348cab16469b"} Oct 08 14:27:38 crc kubenswrapper[4624]: I1008 14:27:38.877434 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n7zcs" podStartSLOduration=2.3771671469999998 podStartE2EDuration="4.877415263s" podCreationTimestamp="2025-10-08 14:27:34 +0000 UTC" firstStartedPulling="2025-10-08 14:27:34.802408478 +0000 UTC m=+279.953343565" lastFinishedPulling="2025-10-08 14:27:37.302656604 +0000 UTC m=+282.453591681" observedRunningTime="2025-10-08 14:27:37.905094957 +0000 UTC m=+283.056030034" watchObservedRunningTime="2025-10-08 14:27:38.877415263 +0000 UTC m=+284.028350340" Oct 08 14:27:39 crc kubenswrapper[4624]: I1008 14:27:39.864969 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s82n5" event={"ID":"6784a613-240f-4d11-9b3c-bf9f99a646e6","Type":"ContainerStarted","Data":"983d57ab971666d2743830b02cf624efe3c0c1663e23be03bcb8e3168b02a868"} Oct 08 14:27:39 crc kubenswrapper[4624]: I1008 14:27:39.893600 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-s82n5" podStartSLOduration=2.35988847 podStartE2EDuration="4.893567991s" podCreationTimestamp="2025-10-08 14:27:35 +0000 UTC" firstStartedPulling="2025-10-08 14:27:36.83526593 +0000 UTC m=+281.986201007" lastFinishedPulling="2025-10-08 14:27:39.368945451 +0000 UTC m=+284.519880528" observedRunningTime="2025-10-08 14:27:39.893346675 +0000 UTC m=+285.044281752" watchObservedRunningTime="2025-10-08 14:27:39.893567991 +0000 UTC m=+285.044503068" Oct 08 14:27:41 crc kubenswrapper[4624]: I1008 14:27:41.982804 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dk74n" Oct 08 14:27:41 crc kubenswrapper[4624]: I1008 14:27:41.983166 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dk74n" Oct 08 14:27:42 crc kubenswrapper[4624]: I1008 14:27:42.026836 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dk74n" Oct 08 14:27:42 crc kubenswrapper[4624]: I1008 14:27:42.912020 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dk74n" Oct 08 14:27:43 crc kubenswrapper[4624]: I1008 14:27:43.393243 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ld57t" Oct 08 14:27:43 crc kubenswrapper[4624]: I1008 14:27:43.393529 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ld57t" Oct 08 14:27:43 crc kubenswrapper[4624]: I1008 14:27:43.431700 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ld57t" Oct 08 14:27:43 crc kubenswrapper[4624]: I1008 14:27:43.916584 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ld57t" Oct 08 14:27:44 crc kubenswrapper[4624]: I1008 14:27:44.389541 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n7zcs" Oct 08 14:27:44 crc kubenswrapper[4624]: I1008 14:27:44.389976 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n7zcs" Oct 08 14:27:44 crc kubenswrapper[4624]: I1008 14:27:44.428020 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n7zcs" Oct 08 14:27:44 crc kubenswrapper[4624]: I1008 14:27:44.921181 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n7zcs" Oct 08 14:27:45 crc kubenswrapper[4624]: I1008 14:27:45.789725 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s82n5" Oct 08 14:27:45 crc kubenswrapper[4624]: I1008 14:27:45.789848 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s82n5" Oct 08 14:27:45 crc kubenswrapper[4624]: I1008 14:27:45.832969 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s82n5" Oct 08 14:27:45 crc kubenswrapper[4624]: I1008 14:27:45.937165 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s82n5" Oct 08 14:29:00 crc 
kubenswrapper[4624]: I1008 14:29:00.076388 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:29:00 crc kubenswrapper[4624]: I1008 14:29:00.077189 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:29:30 crc kubenswrapper[4624]: I1008 14:29:30.076265 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:29:30 crc kubenswrapper[4624]: I1008 14:29:30.076966 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.076901 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.077498 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.077551 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.078204 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d91f9aebe012c6850541ed64b094784a1cd737f36d0fb5aa87455cec8f6bdb54"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.078252 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://d91f9aebe012c6850541ed64b094784a1cd737f36d0fb5aa87455cec8f6bdb54" gracePeriod=600 Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.142742 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332230-thl4d"] Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.143687 4624 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-thl4d" Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.146817 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.147928 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.160665 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332230-thl4d"] Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.302661 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/281517cb-af20-4881-9124-2a72b4a2a8e6-secret-volume\") pod \"collect-profiles-29332230-thl4d\" (UID: \"281517cb-af20-4881-9124-2a72b4a2a8e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-thl4d" Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.302963 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/281517cb-af20-4881-9124-2a72b4a2a8e6-config-volume\") pod \"collect-profiles-29332230-thl4d\" (UID: \"281517cb-af20-4881-9124-2a72b4a2a8e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-thl4d" Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.303011 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7t4t\" (UniqueName: \"kubernetes.io/projected/281517cb-af20-4881-9124-2a72b4a2a8e6-kube-api-access-l7t4t\") pod \"collect-profiles-29332230-thl4d\" (UID: \"281517cb-af20-4881-9124-2a72b4a2a8e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-thl4d" Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.403423 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7t4t\" (UniqueName: \"kubernetes.io/projected/281517cb-af20-4881-9124-2a72b4a2a8e6-kube-api-access-l7t4t\") pod \"collect-profiles-29332230-thl4d\" (UID: \"281517cb-af20-4881-9124-2a72b4a2a8e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-thl4d" Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.403519 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/281517cb-af20-4881-9124-2a72b4a2a8e6-secret-volume\") pod \"collect-profiles-29332230-thl4d\" (UID: \"281517cb-af20-4881-9124-2a72b4a2a8e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-thl4d" Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.403540 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/281517cb-af20-4881-9124-2a72b4a2a8e6-config-volume\") pod \"collect-profiles-29332230-thl4d\" (UID: \"281517cb-af20-4881-9124-2a72b4a2a8e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-thl4d" Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.404615 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/281517cb-af20-4881-9124-2a72b4a2a8e6-config-volume\") 
pod \"collect-profiles-29332230-thl4d\" (UID: \"281517cb-af20-4881-9124-2a72b4a2a8e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-thl4d" Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.409285 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/281517cb-af20-4881-9124-2a72b4a2a8e6-secret-volume\") pod \"collect-profiles-29332230-thl4d\" (UID: \"281517cb-af20-4881-9124-2a72b4a2a8e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-thl4d" Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.421180 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7t4t\" (UniqueName: \"kubernetes.io/projected/281517cb-af20-4881-9124-2a72b4a2a8e6-kube-api-access-l7t4t\") pod \"collect-profiles-29332230-thl4d\" (UID: \"281517cb-af20-4881-9124-2a72b4a2a8e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-thl4d" Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.474225 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-thl4d" Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.516262 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="d91f9aebe012c6850541ed64b094784a1cd737f36d0fb5aa87455cec8f6bdb54" exitCode=0 Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.516308 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"d91f9aebe012c6850541ed64b094784a1cd737f36d0fb5aa87455cec8f6bdb54"} Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.516357 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"1598da519a0e8e48b943e41d794035b7f04282326fb7b59d821c571033b04a9d"} Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.516373 4624 scope.go:117] "RemoveContainer" containerID="6bbbabbd0c38eaa3523bc09d84853fc663df2686af160980af2efa7f6938887e" Oct 08 14:30:00 crc kubenswrapper[4624]: I1008 14:30:00.689876 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332230-thl4d"] Oct 08 14:30:00 crc kubenswrapper[4624]: W1008 14:30:00.697222 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod281517cb_af20_4881_9124_2a72b4a2a8e6.slice/crio-4ceb848a0d9a14c6f44c02dcc3edd4803dbf6eaac7e59bbb38686ff2c3f65e2f WatchSource:0}: Error finding container 4ceb848a0d9a14c6f44c02dcc3edd4803dbf6eaac7e59bbb38686ff2c3f65e2f: Status 404 returned error can't find the container with id 4ceb848a0d9a14c6f44c02dcc3edd4803dbf6eaac7e59bbb38686ff2c3f65e2f Oct 08 14:30:01 crc kubenswrapper[4624]: I1008 14:30:01.525527 4624 generic.go:334] "Generic (PLEG): container finished" podID="281517cb-af20-4881-9124-2a72b4a2a8e6" containerID="ae5d9a05e4fd350d22c9c20e003f8dbf27080801e5e12e6321396608c3296ffe" exitCode=0 Oct 08 14:30:01 crc kubenswrapper[4624]: I1008 14:30:01.525658 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-thl4d" 
event={"ID":"281517cb-af20-4881-9124-2a72b4a2a8e6","Type":"ContainerDied","Data":"ae5d9a05e4fd350d22c9c20e003f8dbf27080801e5e12e6321396608c3296ffe"} Oct 08 14:30:01 crc kubenswrapper[4624]: I1008 14:30:01.526482 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-thl4d" event={"ID":"281517cb-af20-4881-9124-2a72b4a2a8e6","Type":"ContainerStarted","Data":"4ceb848a0d9a14c6f44c02dcc3edd4803dbf6eaac7e59bbb38686ff2c3f65e2f"} Oct 08 14:30:02 crc kubenswrapper[4624]: I1008 14:30:02.707944 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-thl4d" Oct 08 14:30:02 crc kubenswrapper[4624]: I1008 14:30:02.832537 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7t4t\" (UniqueName: \"kubernetes.io/projected/281517cb-af20-4881-9124-2a72b4a2a8e6-kube-api-access-l7t4t\") pod \"281517cb-af20-4881-9124-2a72b4a2a8e6\" (UID: \"281517cb-af20-4881-9124-2a72b4a2a8e6\") " Oct 08 14:30:02 crc kubenswrapper[4624]: I1008 14:30:02.832574 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/281517cb-af20-4881-9124-2a72b4a2a8e6-secret-volume\") pod \"281517cb-af20-4881-9124-2a72b4a2a8e6\" (UID: \"281517cb-af20-4881-9124-2a72b4a2a8e6\") " Oct 08 14:30:02 crc kubenswrapper[4624]: I1008 14:30:02.832708 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/281517cb-af20-4881-9124-2a72b4a2a8e6-config-volume\") pod \"281517cb-af20-4881-9124-2a72b4a2a8e6\" (UID: \"281517cb-af20-4881-9124-2a72b4a2a8e6\") " Oct 08 14:30:02 crc kubenswrapper[4624]: I1008 14:30:02.833471 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/281517cb-af20-4881-9124-2a72b4a2a8e6-config-volume" (OuterVolumeSpecName: "config-volume") pod "281517cb-af20-4881-9124-2a72b4a2a8e6" (UID: "281517cb-af20-4881-9124-2a72b4a2a8e6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:30:02 crc kubenswrapper[4624]: I1008 14:30:02.837450 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/281517cb-af20-4881-9124-2a72b4a2a8e6-kube-api-access-l7t4t" (OuterVolumeSpecName: "kube-api-access-l7t4t") pod "281517cb-af20-4881-9124-2a72b4a2a8e6" (UID: "281517cb-af20-4881-9124-2a72b4a2a8e6"). InnerVolumeSpecName "kube-api-access-l7t4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:30:02 crc kubenswrapper[4624]: I1008 14:30:02.840709 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/281517cb-af20-4881-9124-2a72b4a2a8e6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "281517cb-af20-4881-9124-2a72b4a2a8e6" (UID: "281517cb-af20-4881-9124-2a72b4a2a8e6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:30:02 crc kubenswrapper[4624]: I1008 14:30:02.934168 4624 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/281517cb-af20-4881-9124-2a72b4a2a8e6-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 14:30:02 crc kubenswrapper[4624]: I1008 14:30:02.934207 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7t4t\" (UniqueName: \"kubernetes.io/projected/281517cb-af20-4881-9124-2a72b4a2a8e6-kube-api-access-l7t4t\") on node \"crc\" DevicePath \"\"" Oct 08 14:30:02 crc kubenswrapper[4624]: I1008 14:30:02.934228 4624 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/281517cb-af20-4881-9124-2a72b4a2a8e6-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 14:30:03 crc kubenswrapper[4624]: I1008 14:30:03.539782 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-thl4d" event={"ID":"281517cb-af20-4881-9124-2a72b4a2a8e6","Type":"ContainerDied","Data":"4ceb848a0d9a14c6f44c02dcc3edd4803dbf6eaac7e59bbb38686ff2c3f65e2f"} Oct 08 14:30:03 crc kubenswrapper[4624]: I1008 14:30:03.539817 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ceb848a0d9a14c6f44c02dcc3edd4803dbf6eaac7e59bbb38686ff2c3f65e2f" Oct 08 14:30:03 crc kubenswrapper[4624]: I1008 14:30:03.539804 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332230-thl4d" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.380331 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pdh9g"] Oct 08 14:31:07 crc kubenswrapper[4624]: E1008 14:31:07.381105 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="281517cb-af20-4881-9124-2a72b4a2a8e6" containerName="collect-profiles" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.381120 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="281517cb-af20-4881-9124-2a72b4a2a8e6" containerName="collect-profiles" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.381218 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="281517cb-af20-4881-9124-2a72b4a2a8e6" containerName="collect-profiles" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.381626 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.399795 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pdh9g"] Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.496014 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb-trusted-ca\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.496126 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zftpk\" (UniqueName: \"kubernetes.io/projected/5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb-kube-api-access-zftpk\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.496172 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.496203 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb-registry-tls\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.496236 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb-bound-sa-token\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.496314 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb-registry-certificates\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.496342 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.496377 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.519104 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.597065 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb-trusted-ca\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.597139 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zftpk\" (UniqueName: \"kubernetes.io/projected/5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb-kube-api-access-zftpk\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.597165 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.597192 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb-registry-tls\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.597217 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb-bound-sa-token\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.597244 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb-registry-certificates\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.597266 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.597787 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.599046 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb-trusted-ca\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.601087 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb-registry-certificates\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.612978 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb-registry-tls\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.613571 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.615522 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb-bound-sa-token\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.616238 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zftpk\" (UniqueName: \"kubernetes.io/projected/5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb-kube-api-access-zftpk\") pod \"image-registry-66df7c8f76-pdh9g\" (UID: \"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb\") " pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:07 crc kubenswrapper[4624]: I1008 14:31:07.703601 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:08 crc kubenswrapper[4624]: I1008 14:31:08.098029 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pdh9g"] Oct 08 14:31:08 crc kubenswrapper[4624]: I1008 14:31:08.825621 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" event={"ID":"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb","Type":"ContainerStarted","Data":"b26248ce10f028eae6701cdc3b718a584a0b92a6be35dcf4a2ab5c849a28a581"} Oct 08 14:31:08 crc kubenswrapper[4624]: I1008 14:31:08.825967 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" event={"ID":"5354cb20-9bd0-4b90-ae64-3a7e7e21b8eb","Type":"ContainerStarted","Data":"926355996da61f98c8fa9953e98de366817241ad1b761af4f46fdd93c1858707"} Oct 08 14:31:08 crc kubenswrapper[4624]: I1008 14:31:08.826627 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:08 crc kubenswrapper[4624]: I1008 14:31:08.844421 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" podStartSLOduration=1.844402337 podStartE2EDuration="1.844402337s" podCreationTimestamp="2025-10-08 14:31:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:31:08.842756894 +0000 UTC m=+493.993691961" watchObservedRunningTime="2025-10-08 14:31:08.844402337 +0000 UTC m=+493.995337414" Oct 08 14:31:27 crc kubenswrapper[4624]: I1008 14:31:27.712025 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-pdh9g" Oct 08 14:31:27 crc kubenswrapper[4624]: I1008 14:31:27.768303 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qr5w8"] Oct 08 14:31:52 crc kubenswrapper[4624]: I1008 14:31:52.805668 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" podUID="c439ba4c-3583-4d35-b586-c97f525345a6" containerName="registry" containerID="cri-o://0ece3a8f03bad1ddf9c09bae2d8371c3df08b9002ff3c8dbb5a250769bc4ff18" gracePeriod=30 Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.042088 4624 generic.go:334] "Generic (PLEG): container finished" podID="c439ba4c-3583-4d35-b586-c97f525345a6" containerID="0ece3a8f03bad1ddf9c09bae2d8371c3df08b9002ff3c8dbb5a250769bc4ff18" exitCode=0 Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.042411 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" event={"ID":"c439ba4c-3583-4d35-b586-c97f525345a6","Type":"ContainerDied","Data":"0ece3a8f03bad1ddf9c09bae2d8371c3df08b9002ff3c8dbb5a250769bc4ff18"} Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.110786 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.291897 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c439ba4c-3583-4d35-b586-c97f525345a6-bound-sa-token\") pod \"c439ba4c-3583-4d35-b586-c97f525345a6\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.291954 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c439ba4c-3583-4d35-b586-c97f525345a6-ca-trust-extracted\") pod \"c439ba4c-3583-4d35-b586-c97f525345a6\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.292155 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c439ba4c-3583-4d35-b586-c97f525345a6\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.292188 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vgqs\" (UniqueName: \"kubernetes.io/projected/c439ba4c-3583-4d35-b586-c97f525345a6-kube-api-access-4vgqs\") pod \"c439ba4c-3583-4d35-b586-c97f525345a6\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.292240 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c439ba4c-3583-4d35-b586-c97f525345a6-registry-certificates\") pod \"c439ba4c-3583-4d35-b586-c97f525345a6\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.292295 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c439ba4c-3583-4d35-b586-c97f525345a6-registry-tls\") pod \"c439ba4c-3583-4d35-b586-c97f525345a6\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.292324 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c439ba4c-3583-4d35-b586-c97f525345a6-trusted-ca\") pod \"c439ba4c-3583-4d35-b586-c97f525345a6\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.292369 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c439ba4c-3583-4d35-b586-c97f525345a6-installation-pull-secrets\") pod \"c439ba4c-3583-4d35-b586-c97f525345a6\" (UID: \"c439ba4c-3583-4d35-b586-c97f525345a6\") " Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.293134 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c439ba4c-3583-4d35-b586-c97f525345a6-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c439ba4c-3583-4d35-b586-c97f525345a6" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.293199 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c439ba4c-3583-4d35-b586-c97f525345a6-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c439ba4c-3583-4d35-b586-c97f525345a6" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.298054 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c439ba4c-3583-4d35-b586-c97f525345a6-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c439ba4c-3583-4d35-b586-c97f525345a6" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.298907 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c439ba4c-3583-4d35-b586-c97f525345a6-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c439ba4c-3583-4d35-b586-c97f525345a6" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.299046 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c439ba4c-3583-4d35-b586-c97f525345a6-kube-api-access-4vgqs" (OuterVolumeSpecName: "kube-api-access-4vgqs") pod "c439ba4c-3583-4d35-b586-c97f525345a6" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6"). InnerVolumeSpecName "kube-api-access-4vgqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.299328 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c439ba4c-3583-4d35-b586-c97f525345a6-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c439ba4c-3583-4d35-b586-c97f525345a6" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.301210 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c439ba4c-3583-4d35-b586-c97f525345a6" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.309072 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c439ba4c-3583-4d35-b586-c97f525345a6-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c439ba4c-3583-4d35-b586-c97f525345a6" (UID: "c439ba4c-3583-4d35-b586-c97f525345a6"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.394258 4624 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c439ba4c-3583-4d35-b586-c97f525345a6-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.394296 4624 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c439ba4c-3583-4d35-b586-c97f525345a6-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.394309 4624 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c439ba4c-3583-4d35-b586-c97f525345a6-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.394320 4624 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c439ba4c-3583-4d35-b586-c97f525345a6-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.394331 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vgqs\" (UniqueName: \"kubernetes.io/projected/c439ba4c-3583-4d35-b586-c97f525345a6-kube-api-access-4vgqs\") on node \"crc\" DevicePath \"\"" Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.394341 4624 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c439ba4c-3583-4d35-b586-c97f525345a6-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 08 14:31:53 crc kubenswrapper[4624]: I1008 14:31:53.394354 4624 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c439ba4c-3583-4d35-b586-c97f525345a6-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 08 14:31:54 crc kubenswrapper[4624]: I1008 14:31:54.048783 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" event={"ID":"c439ba4c-3583-4d35-b586-c97f525345a6","Type":"ContainerDied","Data":"bb2a4bc4a71a0b56fa89976397b0f831d1c14726d67b7bb7d2e37b983c46f4bb"} Oct 08 14:31:54 crc kubenswrapper[4624]: I1008 14:31:54.048837 4624 scope.go:117] "RemoveContainer" containerID="0ece3a8f03bad1ddf9c09bae2d8371c3df08b9002ff3c8dbb5a250769bc4ff18" Oct 08 14:31:54 crc kubenswrapper[4624]: I1008 14:31:54.050238 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" Oct 08 14:31:54 crc kubenswrapper[4624]: I1008 14:31:54.064864 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qr5w8"] Oct 08 14:31:54 crc kubenswrapper[4624]: I1008 14:31:54.082837 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qr5w8"] Oct 08 14:31:55 crc kubenswrapper[4624]: I1008 14:31:55.472467 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c439ba4c-3583-4d35-b586-c97f525345a6" path="/var/lib/kubelet/pods/c439ba4c-3583-4d35-b586-c97f525345a6/volumes" Oct 08 14:31:58 crc kubenswrapper[4624]: I1008 14:31:58.058876 4624 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-qr5w8 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.28:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 08 14:31:58 crc kubenswrapper[4624]: I1008 14:31:58.058968 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-qr5w8" podUID="c439ba4c-3583-4d35-b586-c97f525345a6" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.28:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 08 14:32:00 crc kubenswrapper[4624]: I1008 14:32:00.076225 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:32:00 crc kubenswrapper[4624]: I1008 14:32:00.076539 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:32:30 crc kubenswrapper[4624]: I1008 14:32:30.076130 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:32:30 crc kubenswrapper[4624]: I1008 14:32:30.076718 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:33:00 crc kubenswrapper[4624]: I1008 14:33:00.077195 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:33:00 crc kubenswrapper[4624]: I1008 14:33:00.077677 4624 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:33:00 crc kubenswrapper[4624]: I1008 14:33:00.077804 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:33:00 crc kubenswrapper[4624]: I1008 14:33:00.078386 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1598da519a0e8e48b943e41d794035b7f04282326fb7b59d821c571033b04a9d"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 14:33:00 crc kubenswrapper[4624]: I1008 14:33:00.078531 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://1598da519a0e8e48b943e41d794035b7f04282326fb7b59d821c571033b04a9d" gracePeriod=600 Oct 08 14:33:00 crc kubenswrapper[4624]: I1008 14:33:00.346575 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="1598da519a0e8e48b943e41d794035b7f04282326fb7b59d821c571033b04a9d" exitCode=0 Oct 08 14:33:00 crc kubenswrapper[4624]: I1008 14:33:00.346649 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"1598da519a0e8e48b943e41d794035b7f04282326fb7b59d821c571033b04a9d"} Oct 08 14:33:00 crc kubenswrapper[4624]: I1008 14:33:00.346978 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"690587e93a403cfefc9b225df6e738e9c67a31458ed49cde23474722023af49a"} Oct 08 14:33:00 crc kubenswrapper[4624]: I1008 14:33:00.347003 4624 scope.go:117] "RemoveContainer" containerID="d91f9aebe012c6850541ed64b094784a1cd737f36d0fb5aa87455cec8f6bdb54" Oct 08 14:33:19 crc kubenswrapper[4624]: I1008 14:33:19.870770 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-6b7n8"] Oct 08 14:33:19 crc kubenswrapper[4624]: E1008 14:33:19.871499 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c439ba4c-3583-4d35-b586-c97f525345a6" containerName="registry" Oct 08 14:33:19 crc kubenswrapper[4624]: I1008 14:33:19.871515 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c439ba4c-3583-4d35-b586-c97f525345a6" containerName="registry" Oct 08 14:33:19 crc kubenswrapper[4624]: I1008 14:33:19.871622 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="c439ba4c-3583-4d35-b586-c97f525345a6" containerName="registry" Oct 08 14:33:19 crc kubenswrapper[4624]: I1008 14:33:19.872065 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-6b7n8" Oct 08 14:33:19 crc kubenswrapper[4624]: I1008 14:33:19.879022 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Oct 08 14:33:19 crc kubenswrapper[4624]: I1008 14:33:19.879293 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Oct 08 14:33:19 crc kubenswrapper[4624]: I1008 14:33:19.883848 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-nh2ht"] Oct 08 14:33:19 crc kubenswrapper[4624]: I1008 14:33:19.884655 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-nh2ht" Oct 08 14:33:19 crc kubenswrapper[4624]: I1008 14:33:19.888159 4624 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-5jpvk" Oct 08 14:33:19 crc kubenswrapper[4624]: I1008 14:33:19.891859 4624 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-b76gl" Oct 08 14:33:19 crc kubenswrapper[4624]: I1008 14:33:19.893077 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-6b7n8"] Oct 08 14:33:19 crc kubenswrapper[4624]: I1008 14:33:19.910316 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-nh2ht"] Oct 08 14:33:19 crc kubenswrapper[4624]: I1008 14:33:19.927333 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-pxvg9"] Oct 08 14:33:19 crc kubenswrapper[4624]: I1008 14:33:19.928094 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-pxvg9" Oct 08 14:33:19 crc kubenswrapper[4624]: I1008 14:33:19.930477 4624 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-fkd7q" Oct 08 14:33:19 crc kubenswrapper[4624]: I1008 14:33:19.951031 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-pxvg9"] Oct 08 14:33:20 crc kubenswrapper[4624]: I1008 14:33:20.015581 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6rq6\" (UniqueName: \"kubernetes.io/projected/973fdfa2-38af-4052-9fe8-d9657c1be807-kube-api-access-g6rq6\") pod \"cert-manager-webhook-5655c58dd6-pxvg9\" (UID: \"973fdfa2-38af-4052-9fe8-d9657c1be807\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-pxvg9" Oct 08 14:33:20 crc kubenswrapper[4624]: I1008 14:33:20.015673 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xztnn\" (UniqueName: \"kubernetes.io/projected/1f6ac2d9-a161-44b3-9e1d-8ba6eca7fa85-kube-api-access-xztnn\") pod \"cert-manager-5b446d88c5-nh2ht\" (UID: \"1f6ac2d9-a161-44b3-9e1d-8ba6eca7fa85\") " pod="cert-manager/cert-manager-5b446d88c5-nh2ht" Oct 08 14:33:20 crc kubenswrapper[4624]: I1008 14:33:20.015718 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnbcd\" (UniqueName: \"kubernetes.io/projected/733c04ac-a5c1-44e6-8314-800c327491f9-kube-api-access-gnbcd\") pod \"cert-manager-cainjector-7f985d654d-6b7n8\" (UID: \"733c04ac-a5c1-44e6-8314-800c327491f9\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-6b7n8" Oct 08 14:33:20 
crc kubenswrapper[4624]: I1008 14:33:20.117016 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xztnn\" (UniqueName: \"kubernetes.io/projected/1f6ac2d9-a161-44b3-9e1d-8ba6eca7fa85-kube-api-access-xztnn\") pod \"cert-manager-5b446d88c5-nh2ht\" (UID: \"1f6ac2d9-a161-44b3-9e1d-8ba6eca7fa85\") " pod="cert-manager/cert-manager-5b446d88c5-nh2ht" Oct 08 14:33:20 crc kubenswrapper[4624]: I1008 14:33:20.117071 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnbcd\" (UniqueName: \"kubernetes.io/projected/733c04ac-a5c1-44e6-8314-800c327491f9-kube-api-access-gnbcd\") pod \"cert-manager-cainjector-7f985d654d-6b7n8\" (UID: \"733c04ac-a5c1-44e6-8314-800c327491f9\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-6b7n8" Oct 08 14:33:20 crc kubenswrapper[4624]: I1008 14:33:20.117124 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6rq6\" (UniqueName: \"kubernetes.io/projected/973fdfa2-38af-4052-9fe8-d9657c1be807-kube-api-access-g6rq6\") pod \"cert-manager-webhook-5655c58dd6-pxvg9\" (UID: \"973fdfa2-38af-4052-9fe8-d9657c1be807\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-pxvg9" Oct 08 14:33:20 crc kubenswrapper[4624]: I1008 14:33:20.134620 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xztnn\" (UniqueName: \"kubernetes.io/projected/1f6ac2d9-a161-44b3-9e1d-8ba6eca7fa85-kube-api-access-xztnn\") pod \"cert-manager-5b446d88c5-nh2ht\" (UID: \"1f6ac2d9-a161-44b3-9e1d-8ba6eca7fa85\") " pod="cert-manager/cert-manager-5b446d88c5-nh2ht" Oct 08 14:33:20 crc kubenswrapper[4624]: I1008 14:33:20.136000 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnbcd\" (UniqueName: \"kubernetes.io/projected/733c04ac-a5c1-44e6-8314-800c327491f9-kube-api-access-gnbcd\") pod \"cert-manager-cainjector-7f985d654d-6b7n8\" (UID: \"733c04ac-a5c1-44e6-8314-800c327491f9\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-6b7n8" Oct 08 14:33:20 crc kubenswrapper[4624]: I1008 14:33:20.137853 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6rq6\" (UniqueName: \"kubernetes.io/projected/973fdfa2-38af-4052-9fe8-d9657c1be807-kube-api-access-g6rq6\") pod \"cert-manager-webhook-5655c58dd6-pxvg9\" (UID: \"973fdfa2-38af-4052-9fe8-d9657c1be807\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-pxvg9" Oct 08 14:33:20 crc kubenswrapper[4624]: I1008 14:33:20.190737 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-6b7n8" Oct 08 14:33:20 crc kubenswrapper[4624]: I1008 14:33:20.201456 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-nh2ht" Oct 08 14:33:20 crc kubenswrapper[4624]: I1008 14:33:20.242120 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-pxvg9" Oct 08 14:33:20 crc kubenswrapper[4624]: I1008 14:33:20.416839 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-nh2ht"] Oct 08 14:33:20 crc kubenswrapper[4624]: I1008 14:33:20.426659 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 14:33:20 crc kubenswrapper[4624]: I1008 14:33:20.453908 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-nh2ht" event={"ID":"1f6ac2d9-a161-44b3-9e1d-8ba6eca7fa85","Type":"ContainerStarted","Data":"a4584d13e42c0f98773e54b24cd2e5295f5f0a999a92c90bafd2ae86f92f14bb"} Oct 08 14:33:20 crc kubenswrapper[4624]: I1008 14:33:20.529461 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-pxvg9"] Oct 08 14:33:20 crc kubenswrapper[4624]: W1008 14:33:20.533536 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod973fdfa2_38af_4052_9fe8_d9657c1be807.slice/crio-80cbb75c5b9a3ecc6280780190a46595c1f89155b8683d15a167dd4b9ba275b6 WatchSource:0}: Error finding container 80cbb75c5b9a3ecc6280780190a46595c1f89155b8683d15a167dd4b9ba275b6: Status 404 returned error can't find the container with id 80cbb75c5b9a3ecc6280780190a46595c1f89155b8683d15a167dd4b9ba275b6 Oct 08 14:33:20 crc kubenswrapper[4624]: I1008 14:33:20.654831 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-6b7n8"] Oct 08 14:33:20 crc kubenswrapper[4624]: W1008 14:33:20.661617 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod733c04ac_a5c1_44e6_8314_800c327491f9.slice/crio-322e63983349d75fb52f4dc0142e94fad29955d233814f1d297a7bd1456c6c2e WatchSource:0}: Error finding container 322e63983349d75fb52f4dc0142e94fad29955d233814f1d297a7bd1456c6c2e: Status 404 returned error can't find the container with id 322e63983349d75fb52f4dc0142e94fad29955d233814f1d297a7bd1456c6c2e Oct 08 14:33:21 crc kubenswrapper[4624]: I1008 14:33:21.459722 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-pxvg9" event={"ID":"973fdfa2-38af-4052-9fe8-d9657c1be807","Type":"ContainerStarted","Data":"80cbb75c5b9a3ecc6280780190a46595c1f89155b8683d15a167dd4b9ba275b6"} Oct 08 14:33:21 crc kubenswrapper[4624]: I1008 14:33:21.461550 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-6b7n8" event={"ID":"733c04ac-a5c1-44e6-8314-800c327491f9","Type":"ContainerStarted","Data":"322e63983349d75fb52f4dc0142e94fad29955d233814f1d297a7bd1456c6c2e"} Oct 08 14:33:24 crc kubenswrapper[4624]: I1008 14:33:24.476363 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-nh2ht" event={"ID":"1f6ac2d9-a161-44b3-9e1d-8ba6eca7fa85","Type":"ContainerStarted","Data":"3f1137cbf092a5f95491a5bc4e430fb2b9071a980510a28230542007039f0920"} Oct 08 14:33:24 crc kubenswrapper[4624]: I1008 14:33:24.483184 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-6b7n8" event={"ID":"733c04ac-a5c1-44e6-8314-800c327491f9","Type":"ContainerStarted","Data":"01bcb61b841544cd896fd06cd8b4b4f406996e46d3b97bd2cd16463c4f09baed"} Oct 08 14:33:24 crc kubenswrapper[4624]: I1008 14:33:24.484451 4624 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-pxvg9" event={"ID":"973fdfa2-38af-4052-9fe8-d9657c1be807","Type":"ContainerStarted","Data":"cecf0040fc6546c0fccc654d79eedfed15ee4289ac9bf4905c529fbf14d2cacb"} Oct 08 14:33:24 crc kubenswrapper[4624]: I1008 14:33:24.484609 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-pxvg9" Oct 08 14:33:24 crc kubenswrapper[4624]: I1008 14:33:24.495195 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-nh2ht" podStartSLOduration=2.031189674 podStartE2EDuration="5.495174744s" podCreationTimestamp="2025-10-08 14:33:19 +0000 UTC" firstStartedPulling="2025-10-08 14:33:20.426327498 +0000 UTC m=+625.577262575" lastFinishedPulling="2025-10-08 14:33:23.890312568 +0000 UTC m=+629.041247645" observedRunningTime="2025-10-08 14:33:24.49188674 +0000 UTC m=+629.642821827" watchObservedRunningTime="2025-10-08 14:33:24.495174744 +0000 UTC m=+629.646109821" Oct 08 14:33:24 crc kubenswrapper[4624]: I1008 14:33:24.528488 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-6b7n8" podStartSLOduration=2.382306907 podStartE2EDuration="5.528465095s" podCreationTimestamp="2025-10-08 14:33:19 +0000 UTC" firstStartedPulling="2025-10-08 14:33:20.665374089 +0000 UTC m=+625.816309166" lastFinishedPulling="2025-10-08 14:33:23.811532277 +0000 UTC m=+628.962467354" observedRunningTime="2025-10-08 14:33:24.511716582 +0000 UTC m=+629.662651679" watchObservedRunningTime="2025-10-08 14:33:24.528465095 +0000 UTC m=+629.679400172" Oct 08 14:33:24 crc kubenswrapper[4624]: I1008 14:33:24.530120 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-pxvg9" podStartSLOduration=2.247734386 podStartE2EDuration="5.530112917s" podCreationTimestamp="2025-10-08 14:33:19 +0000 UTC" firstStartedPulling="2025-10-08 14:33:20.535594279 +0000 UTC m=+625.686529356" lastFinishedPulling="2025-10-08 14:33:23.81797281 +0000 UTC m=+628.968907887" observedRunningTime="2025-10-08 14:33:24.527058519 +0000 UTC m=+629.677993596" watchObservedRunningTime="2025-10-08 14:33:24.530112917 +0000 UTC m=+629.681047994" Oct 08 14:33:29 crc kubenswrapper[4624]: I1008 14:33:29.979704 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jbsj6"] Oct 08 14:33:29 crc kubenswrapper[4624]: I1008 14:33:29.980650 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovn-controller" containerID="cri-o://825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b" gracePeriod=30 Oct 08 14:33:29 crc kubenswrapper[4624]: I1008 14:33:29.980971 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="sbdb" containerID="cri-o://1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c" gracePeriod=30 Oct 08 14:33:29 crc kubenswrapper[4624]: I1008 14:33:29.981005 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="nbdb" 
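
[Note] The pod_startup_latency_tracker lines above decompose as: podStartE2EDuration = watchObservedRunningTime − podCreationTimestamp, and podStartSLOduration = E2E minus the image-pull window (lastFinishedPulling − firstStartedPulling). For cert-manager-5b446d88c5-nh2ht: 5.495174744s − 3.463985070s = 2.031189674s, matching the logged value. A worked check using the timestamps from the log:

```go
// Worked check of the startup-latency numbers for cert-manager-5b446d88c5-nh2ht.
package main

import (
	"fmt"
	"time"
)

func main() {
	mustParse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := mustParse("2025-10-08 14:33:19 +0000 UTC")                 // podCreationTimestamp
	firstPull := mustParse("2025-10-08 14:33:20.426327498 +0000 UTC")    // firstStartedPulling
	lastPull := mustParse("2025-10-08 14:33:23.890312568 +0000 UTC")     // lastFinishedPulling
	observed := mustParse("2025-10-08 14:33:24.495174744 +0000 UTC")     // watchObservedRunningTime

	e2e := observed.Sub(created)         // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration excludes image pulls
	fmt.Println(e2e, slo)                // 5.495174744s 2.031189674s
}
```
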
containerID="cri-o://5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e" gracePeriod=30 Oct 08 14:33:29 crc kubenswrapper[4624]: I1008 14:33:29.981037 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="northd" containerID="cri-o://697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33" gracePeriod=30 Oct 08 14:33:29 crc kubenswrapper[4624]: I1008 14:33:29.981063 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b" gracePeriod=30 Oct 08 14:33:29 crc kubenswrapper[4624]: I1008 14:33:29.981091 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="kube-rbac-proxy-node" containerID="cri-o://fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f" gracePeriod=30 Oct 08 14:33:29 crc kubenswrapper[4624]: I1008 14:33:29.981133 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovn-acl-logging" containerID="cri-o://1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec" gracePeriod=30 Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.021727 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovnkube-controller" containerID="cri-o://acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415" gracePeriod=30 Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.245225 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-pxvg9" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.323090 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbsj6_aad1abb7-073f-4157-b39f-ddc71fbab31d/ovnkube-controller/3.log" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.325520 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbsj6_aad1abb7-073f-4157-b39f-ddc71fbab31d/ovn-acl-logging/0.log" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.326042 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbsj6_aad1abb7-073f-4157-b39f-ddc71fbab31d/ovn-controller/0.log" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.326444 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.381178 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-frhrc"] Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.381540 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovnkube-controller" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.381555 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovnkube-controller" Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.381563 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="kubecfg-setup" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.381570 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="kubecfg-setup" Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.381582 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovnkube-controller" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.381588 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovnkube-controller" Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.381594 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovnkube-controller" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.381599 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovnkube-controller" Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.381606 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="kube-rbac-proxy-ovn-metrics" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.381611 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="kube-rbac-proxy-ovn-metrics" Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.381621 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovn-controller" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.381627 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovn-controller" Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.381652 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovn-acl-logging" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.381658 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovn-acl-logging" Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.381668 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="sbdb" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.381674 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="sbdb" Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.381681 4624 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="nbdb" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.381686 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="nbdb" Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.381696 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="northd" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.381701 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="northd" Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.381708 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="kube-rbac-proxy-node" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.381714 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="kube-rbac-proxy-node" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.381905 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovn-controller" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.381945 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="nbdb" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.381952 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovnkube-controller" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.381960 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="sbdb" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.381966 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="kube-rbac-proxy-ovn-metrics" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.381973 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovnkube-controller" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.381980 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovn-acl-logging" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.381987 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovnkube-controller" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.381993 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="kube-rbac-proxy-node" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.382022 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovnkube-controller" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.382031 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="northd" Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.382155 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovnkube-controller" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.382378 4624 
state_mem.go:107] "Deleted CPUSet assignment" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovnkube-controller" Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.382387 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovnkube-controller" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.382393 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovnkube-controller" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.382654 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerName="ovnkube-controller" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.384522 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.447701 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-run-netns\") pod \"aad1abb7-073f-4157-b39f-ddc71fbab31d\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.447766 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aad1abb7-073f-4157-b39f-ddc71fbab31d-ovn-node-metrics-cert\") pod \"aad1abb7-073f-4157-b39f-ddc71fbab31d\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.447790 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-run-systemd\") pod \"aad1abb7-073f-4157-b39f-ddc71fbab31d\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.447811 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-etc-openvswitch\") pod \"aad1abb7-073f-4157-b39f-ddc71fbab31d\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.447847 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-run-ovn-kubernetes\") pod \"aad1abb7-073f-4157-b39f-ddc71fbab31d\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.447860 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "aad1abb7-073f-4157-b39f-ddc71fbab31d" (UID: "aad1abb7-073f-4157-b39f-ddc71fbab31d"). InnerVolumeSpecName "host-run-netns". 
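
[Note] The paired cpu_manager "RemoveStaleState: removing container" (logged at E level but routine here) and state_mem "Deleted CPUSet assignment" entries above are the kubelet purging per-container CPU/memory assignments left behind by the replaced ovnkube-node-jbsj6 pod before admitting ovnkube-node-frhrc. Those assignments live in the kubelet's on-disk checkpoint; a minimal reader, assuming the default path and the usual JSON shape (policyName / defaultCpuSet / entries):

```go
// Sketch: inspect the CPU-manager checkpoint the stale state is cleared from.
// Path and field names are the common defaults, assumed here.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type cpuManagerState struct {
	PolicyName    string                       `json:"policyName"`
	DefaultCPUSet string                       `json:"defaultCpuSet"`
	Entries       map[string]map[string]string `json:"entries,omitempty"` // podUID -> container -> cpuset
}

func main() {
	raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
	if err != nil {
		panic(err)
	}
	var st cpuManagerState
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	fmt.Printf("policy=%s default=%q entries=%d\n", st.PolicyName, st.DefaultCPUSet, len(st.Entries))
}
```
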
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.447875 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-node-log\") pod \"aad1abb7-073f-4157-b39f-ddc71fbab31d\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.447934 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-node-log" (OuterVolumeSpecName: "node-log") pod "aad1abb7-073f-4157-b39f-ddc71fbab31d" (UID: "aad1abb7-073f-4157-b39f-ddc71fbab31d"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.447939 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aad1abb7-073f-4157-b39f-ddc71fbab31d-ovnkube-script-lib\") pod \"aad1abb7-073f-4157-b39f-ddc71fbab31d\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448010 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpbcq\" (UniqueName: \"kubernetes.io/projected/aad1abb7-073f-4157-b39f-ddc71fbab31d-kube-api-access-kpbcq\") pod \"aad1abb7-073f-4157-b39f-ddc71fbab31d\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448082 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-log-socket\") pod \"aad1abb7-073f-4157-b39f-ddc71fbab31d\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448105 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-cni-bin\") pod \"aad1abb7-073f-4157-b39f-ddc71fbab31d\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448128 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-run-openvswitch\") pod \"aad1abb7-073f-4157-b39f-ddc71fbab31d\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448149 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "aad1abb7-073f-4157-b39f-ddc71fbab31d" (UID: "aad1abb7-073f-4157-b39f-ddc71fbab31d"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448160 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aad1abb7-073f-4157-b39f-ddc71fbab31d-env-overrides\") pod \"aad1abb7-073f-4157-b39f-ddc71fbab31d\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448200 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-systemd-units\") pod \"aad1abb7-073f-4157-b39f-ddc71fbab31d\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448289 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-run-ovn\") pod \"aad1abb7-073f-4157-b39f-ddc71fbab31d\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448314 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-var-lib-openvswitch\") pod \"aad1abb7-073f-4157-b39f-ddc71fbab31d\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448342 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"aad1abb7-073f-4157-b39f-ddc71fbab31d\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448377 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-cni-netd\") pod \"aad1abb7-073f-4157-b39f-ddc71fbab31d\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448404 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aad1abb7-073f-4157-b39f-ddc71fbab31d-ovnkube-config\") pod \"aad1abb7-073f-4157-b39f-ddc71fbab31d\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448424 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-kubelet\") pod \"aad1abb7-073f-4157-b39f-ddc71fbab31d\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448449 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-slash\") pod \"aad1abb7-073f-4157-b39f-ddc71fbab31d\" (UID: \"aad1abb7-073f-4157-b39f-ddc71fbab31d\") " Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448582 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448623 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-host-slash\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448683 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-node-log\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448703 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-host-cni-bin\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448735 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fxmw\" (UniqueName: \"kubernetes.io/projected/95470b08-191c-4a39-bfd8-1f77cee19d4d-kube-api-access-7fxmw\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448759 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-host-kubelet\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448783 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-systemd-units\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448811 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-etc-openvswitch\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448850 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/95470b08-191c-4a39-bfd8-1f77cee19d4d-ovnkube-config\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448869 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-host-run-netns\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448697 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "aad1abb7-073f-4157-b39f-ddc71fbab31d" (UID: "aad1abb7-073f-4157-b39f-ddc71fbab31d"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448886 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-log-socket\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448722 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-log-socket" (OuterVolumeSpecName: "log-socket") pod "aad1abb7-073f-4157-b39f-ddc71fbab31d" (UID: "aad1abb7-073f-4157-b39f-ddc71fbab31d"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448739 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "aad1abb7-073f-4157-b39f-ddc71fbab31d" (UID: "aad1abb7-073f-4157-b39f-ddc71fbab31d"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448714 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "aad1abb7-073f-4157-b39f-ddc71fbab31d" (UID: "aad1abb7-073f-4157-b39f-ddc71fbab31d"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448758 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "aad1abb7-073f-4157-b39f-ddc71fbab31d" (UID: "aad1abb7-073f-4157-b39f-ddc71fbab31d"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448778 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-slash" (OuterVolumeSpecName: "host-slash") pod "aad1abb7-073f-4157-b39f-ddc71fbab31d" (UID: "aad1abb7-073f-4157-b39f-ddc71fbab31d"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448792 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "aad1abb7-073f-4157-b39f-ddc71fbab31d" (UID: "aad1abb7-073f-4157-b39f-ddc71fbab31d"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448797 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aad1abb7-073f-4157-b39f-ddc71fbab31d-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "aad1abb7-073f-4157-b39f-ddc71fbab31d" (UID: "aad1abb7-073f-4157-b39f-ddc71fbab31d"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448808 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "aad1abb7-073f-4157-b39f-ddc71fbab31d" (UID: "aad1abb7-073f-4157-b39f-ddc71fbab31d"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448820 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "aad1abb7-073f-4157-b39f-ddc71fbab31d" (UID: "aad1abb7-073f-4157-b39f-ddc71fbab31d"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448838 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "aad1abb7-073f-4157-b39f-ddc71fbab31d" (UID: "aad1abb7-073f-4157-b39f-ddc71fbab31d"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.448854 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "aad1abb7-073f-4157-b39f-ddc71fbab31d" (UID: "aad1abb7-073f-4157-b39f-ddc71fbab31d"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449008 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/95470b08-191c-4a39-bfd8-1f77cee19d4d-ovn-node-metrics-cert\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449039 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-host-cni-netd\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449101 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/95470b08-191c-4a39-bfd8-1f77cee19d4d-ovnkube-script-lib\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449132 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aad1abb7-073f-4157-b39f-ddc71fbab31d-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "aad1abb7-073f-4157-b39f-ddc71fbab31d" (UID: "aad1abb7-073f-4157-b39f-ddc71fbab31d"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449150 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-run-openvswitch\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449228 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aad1abb7-073f-4157-b39f-ddc71fbab31d-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "aad1abb7-073f-4157-b39f-ddc71fbab31d" (UID: "aad1abb7-073f-4157-b39f-ddc71fbab31d"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449261 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-run-systemd\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449303 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/95470b08-191c-4a39-bfd8-1f77cee19d4d-env-overrides\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449378 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-run-ovn\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449405 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-var-lib-openvswitch\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449422 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-host-run-ovn-kubernetes\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449497 4624 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449509 4624 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-cni-netd\") on node \"crc\" DevicePath \"\"" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449518 4624 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aad1abb7-073f-4157-b39f-ddc71fbab31d-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449526 4624 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-kubelet\") on node \"crc\" DevicePath \"\"" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449534 4624 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-slash\") on node \"crc\" DevicePath \"\"" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449544 4624 reconciler_common.go:293] 
"Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-run-netns\") on node \"crc\" DevicePath \"\"" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449552 4624 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449561 4624 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449570 4624 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-node-log\") on node \"crc\" DevicePath \"\"" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449578 4624 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aad1abb7-073f-4157-b39f-ddc71fbab31d-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449587 4624 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-log-socket\") on node \"crc\" DevicePath \"\"" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449595 4624 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-host-cni-bin\") on node \"crc\" DevicePath \"\"" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449603 4624 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-run-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449611 4624 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aad1abb7-073f-4157-b39f-ddc71fbab31d-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449620 4624 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-systemd-units\") on node \"crc\" DevicePath \"\"" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449652 4624 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.449663 4624 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.452763 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aad1abb7-073f-4157-b39f-ddc71fbab31d-kube-api-access-kpbcq" (OuterVolumeSpecName: "kube-api-access-kpbcq") pod "aad1abb7-073f-4157-b39f-ddc71fbab31d" (UID: "aad1abb7-073f-4157-b39f-ddc71fbab31d"). InnerVolumeSpecName "kube-api-access-kpbcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.452970 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad1abb7-073f-4157-b39f-ddc71fbab31d-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "aad1abb7-073f-4157-b39f-ddc71fbab31d" (UID: "aad1abb7-073f-4157-b39f-ddc71fbab31d"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.460421 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "aad1abb7-073f-4157-b39f-ddc71fbab31d" (UID: "aad1abb7-073f-4157-b39f-ddc71fbab31d"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.511902 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-47hzf_48aee8dd-6063-4d3c-b65a-f37ce1ccdb82/kube-multus/2.log" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.512595 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-47hzf_48aee8dd-6063-4d3c-b65a-f37ce1ccdb82/kube-multus/1.log" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.512676 4624 generic.go:334] "Generic (PLEG): container finished" podID="48aee8dd-6063-4d3c-b65a-f37ce1ccdb82" containerID="f14fc924f93f3772d58a3e79005fffb23253439f492c41baf67b02ee9d67f50b" exitCode=2 Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.512747 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-47hzf" event={"ID":"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82","Type":"ContainerDied","Data":"f14fc924f93f3772d58a3e79005fffb23253439f492c41baf67b02ee9d67f50b"} Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.512822 4624 scope.go:117] "RemoveContainer" containerID="1f3de1c5bf27d90b3b9f4560a59ec4a6153df4d625d85bda278779415ae96343" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.513259 4624 scope.go:117] "RemoveContainer" containerID="f14fc924f93f3772d58a3e79005fffb23253439f492c41baf67b02ee9d67f50b" Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.513441 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-47hzf_openshift-multus(48aee8dd-6063-4d3c-b65a-f37ce1ccdb82)\"" pod="openshift-multus/multus-47hzf" podUID="48aee8dd-6063-4d3c-b65a-f37ce1ccdb82" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.516132 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbsj6_aad1abb7-073f-4157-b39f-ddc71fbab31d/ovnkube-controller/3.log" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.519157 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbsj6_aad1abb7-073f-4157-b39f-ddc71fbab31d/ovn-acl-logging/0.log" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.519712 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbsj6_aad1abb7-073f-4157-b39f-ddc71fbab31d/ovn-controller/0.log" Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520079 4624 generic.go:334] "Generic (PLEG): container finished" 
podID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerID="acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415" exitCode=0 Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520103 4624 generic.go:334] "Generic (PLEG): container finished" podID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerID="1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c" exitCode=0 Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520112 4624 generic.go:334] "Generic (PLEG): container finished" podID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerID="5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e" exitCode=0 Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520128 4624 generic.go:334] "Generic (PLEG): container finished" podID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerID="697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33" exitCode=0 Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520135 4624 generic.go:334] "Generic (PLEG): container finished" podID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerID="86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b" exitCode=0 Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520141 4624 generic.go:334] "Generic (PLEG): container finished" podID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerID="fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f" exitCode=0 Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520148 4624 generic.go:334] "Generic (PLEG): container finished" podID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerID="1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec" exitCode=143 Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520155 4624 generic.go:334] "Generic (PLEG): container finished" podID="aad1abb7-073f-4157-b39f-ddc71fbab31d" containerID="825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b" exitCode=143 Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520117 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerDied","Data":"acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415"} Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520239 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerDied","Data":"1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c"} Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520248 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520255 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerDied","Data":"5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520351 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerDied","Data":"697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520362 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerDied","Data":"86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520371 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerDied","Data":"fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520382 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520391 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520397 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520402 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520407 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520412 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520418 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520472 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520478 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520502 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520511 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerDied","Data":"1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520520 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520526 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520532 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520537 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520542 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520547 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520552 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520558 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520563 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520569 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520577 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerDied","Data":"825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520585 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520592 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520598 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520604 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520609 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520614 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520619 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520624 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520630 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520644 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520653 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbsj6" event={"ID":"aad1abb7-073f-4157-b39f-ddc71fbab31d","Type":"ContainerDied","Data":"95472a9143a71d686450277d854da6da7cf6aaae7d0fb9f91dfc4c61728bf5d2"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520661 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520667 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520673 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520678 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520684 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520689 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520694 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520699 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520704 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.520709 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d"}
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.550833 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-systemd-units\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.550871 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-etc-openvswitch\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.551153 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-systemd-units\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.551184 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-etc-openvswitch\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.551385 4624 scope.go:117] "RemoveContainer" containerID="acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.551750 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/95470b08-191c-4a39-bfd8-1f77cee19d4d-ovnkube-config\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.551997 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-host-run-netns\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.552425 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-host-run-netns\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.552598 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-log-socket\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.552681 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-log-socket\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.553271 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/95470b08-191c-4a39-bfd8-1f77cee19d4d-ovnkube-config\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.553291 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/95470b08-191c-4a39-bfd8-1f77cee19d4d-ovn-node-metrics-cert\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.559991 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-host-cni-netd\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.561109 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-host-cni-netd\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.561260 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/95470b08-191c-4a39-bfd8-1f77cee19d4d-ovnkube-script-lib\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.562162 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-run-openvswitch\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.562218 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-run-openvswitch\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.562326 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-run-systemd\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.562360 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/95470b08-191c-4a39-bfd8-1f77cee19d4d-env-overrides\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.562523 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-run-ovn\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.562688 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-var-lib-openvswitch\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.562864 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-host-run-ovn-kubernetes\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.562931 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-var-lib-openvswitch\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.563058 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-host-run-ovn-kubernetes\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.563051 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.563221 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-run-systemd\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.563222 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-host-slash\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.563475 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-run-ovn\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.563524 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-node-log\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.563564 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-host-cni-bin\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.563713 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fxmw\" (UniqueName: \"kubernetes.io/projected/95470b08-191c-4a39-bfd8-1f77cee19d4d-kube-api-access-7fxmw\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.563969 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-host-kubelet\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.563852 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/95470b08-191c-4a39-bfd8-1f77cee19d4d-ovnkube-script-lib\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.564166 4624 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aad1abb7-073f-4157-b39f-ddc71fbab31d-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.564259 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.564280 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-host-slash\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.564300 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-node-log\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.564313 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-host-kubelet\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.564332 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95470b08-191c-4a39-bfd8-1f77cee19d4d-host-cni-bin\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.564346 4624 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aad1abb7-073f-4157-b39f-ddc71fbab31d-run-systemd\") on node \"crc\" DevicePath \"\""
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.564367 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpbcq\" (UniqueName: \"kubernetes.io/projected/aad1abb7-073f-4157-b39f-ddc71fbab31d-kube-api-access-kpbcq\") on node \"crc\" DevicePath \"\""
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.565009 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/95470b08-191c-4a39-bfd8-1f77cee19d4d-env-overrides\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.566664 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/95470b08-191c-4a39-bfd8-1f77cee19d4d-ovn-node-metrics-cert\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.568920 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jbsj6"]
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.572786 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jbsj6"]
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.577298 4624 scope.go:117] "RemoveContainer" containerID="1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.582715 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fxmw\" (UniqueName: \"kubernetes.io/projected/95470b08-191c-4a39-bfd8-1f77cee19d4d-kube-api-access-7fxmw\") pod \"ovnkube-node-frhrc\" (UID: \"95470b08-191c-4a39-bfd8-1f77cee19d4d\") " pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.594052 4624 scope.go:117] "RemoveContainer" containerID="1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.607425 4624 scope.go:117] "RemoveContainer" containerID="5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.634624 4624 scope.go:117] "RemoveContainer" containerID="697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.646966 4624 scope.go:117] "RemoveContainer" containerID="86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.659006 4624 scope.go:117] "RemoveContainer" containerID="fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.672390 4624 scope.go:117] "RemoveContainer" containerID="1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.685656 4624 scope.go:117] "RemoveContainer" containerID="825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.701142 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.701316 4624 scope.go:117] "RemoveContainer" containerID="d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.717199 4624 scope.go:117] "RemoveContainer" containerID="acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415"
Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.717665 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415\": container with ID starting with acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415 not found: ID does not exist" containerID="acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.717725 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415"} err="failed to get container status \"acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415\": rpc error: code = NotFound desc = could not find container \"acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415\": container with ID starting with acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415 not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.717772 4624 scope.go:117] "RemoveContainer" containerID="1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c"
Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.718124 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c\": container with ID starting with 1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c not found: ID does not exist" containerID="1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.718168 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c"} err="failed to get container status \"1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c\": rpc error: code = NotFound desc = could not find container \"1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c\": container with ID starting with 1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.718197 4624 scope.go:117] "RemoveContainer" containerID="1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c"
Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.718614 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\": container with ID starting with 1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c not found: ID does not exist" containerID="1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.718665 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c"} err="failed to get container status \"1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\": rpc error: code = NotFound desc = could not find container \"1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\": container with ID starting with 1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.718681 4624 scope.go:117] "RemoveContainer" containerID="5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e"
Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.718990 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\": container with ID starting with 5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e not found: ID does not exist" containerID="5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.719016 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e"} err="failed to get container status \"5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\": rpc error: code = NotFound desc = could not find container \"5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\": container with ID starting with 5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.719033 4624 scope.go:117] "RemoveContainer" containerID="697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33"
Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.719311 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\": container with ID starting with 697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33 not found: ID does not exist" containerID="697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.719339 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33"} err="failed to get container status \"697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\": rpc error: code = NotFound desc = could not find container \"697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\": container with ID starting with 697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33 not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.719355 4624 scope.go:117] "RemoveContainer" containerID="86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b"
Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.719700 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\": container with ID starting with 86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b not found: ID does not exist" containerID="86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.719729 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b"} err="failed to get container status \"86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\": rpc error: code = NotFound desc = could not find container \"86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\": container with ID starting with 86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.719743 4624 scope.go:117] "RemoveContainer" containerID="fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f"
Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.720040 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\": container with ID starting with fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f not found: ID does not exist" containerID="fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.720067 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f"} err="failed to get container status \"fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\": rpc error: code = NotFound desc = could not find container \"fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\": container with ID starting with fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.720082 4624 scope.go:117] "RemoveContainer" containerID="1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec"
Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.720415 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\": container with ID starting with 1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec not found: ID does not exist" containerID="1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.720441 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec"} err="failed to get container status \"1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\": rpc error: code = NotFound desc = could not find container \"1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\": container with ID starting with 1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.720454 4624 scope.go:117] "RemoveContainer" containerID="825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b"
Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.720705 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\": container with ID starting with 825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b not found: ID does not exist" containerID="825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.720733 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b"} err="failed to get container status \"825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\": rpc error: code = NotFound desc = could not find container \"825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\": container with ID starting with 825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.720751 4624 scope.go:117] "RemoveContainer" containerID="d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d"
Oct 08 14:33:30 crc kubenswrapper[4624]: E1008 14:33:30.721059 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\": container with ID starting with d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d not found: ID does not exist" containerID="d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.721078 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d"} err="failed to get container status \"d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\": rpc error: code = NotFound desc = could not find container \"d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\": container with ID starting with d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.721091 4624 scope.go:117] "RemoveContainer" containerID="acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.721476 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415"} err="failed to get container status \"acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415\": rpc error: code = NotFound desc = could not find container \"acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415\": container with ID starting with acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415 not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.721523 4624 scope.go:117] "RemoveContainer" containerID="1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.721813 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c"} err="failed to get container status \"1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c\": rpc error: code = NotFound desc = could not find container \"1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c\": container with ID starting with 1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.721835 4624 scope.go:117] "RemoveContainer" containerID="1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.722068 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c"} err="failed to get container status \"1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\": rpc error: code = NotFound desc = could not find container \"1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\": container with ID starting with 1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.722086 4624 scope.go:117] "RemoveContainer" containerID="5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.722409 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e"} err="failed to get container status \"5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\": rpc error: code = NotFound desc = could not find container \"5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\": container with ID starting with 5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.722467 4624 scope.go:117] "RemoveContainer" containerID="697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.722752 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33"} err="failed to get container status \"697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\": rpc error: code = NotFound desc = could not find container \"697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\": container with ID starting with 697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33 not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.722769 4624 scope.go:117] "RemoveContainer" containerID="86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.723048 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b"} err="failed to get container status \"86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\": rpc error: code = NotFound desc = could not find container \"86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\": container with ID starting with 86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.723076 4624 scope.go:117] "RemoveContainer" containerID="fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.723326 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f"} err="failed to get container status \"fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\": rpc error: code = NotFound desc = could not find container \"fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\": container with ID starting with fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.723352 4624 scope.go:117] "RemoveContainer" containerID="1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.723560 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec"} err="failed to get container status \"1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\": rpc error: code = NotFound desc = could not find container \"1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\": container with ID starting with 1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.723581 4624 scope.go:117] "RemoveContainer" containerID="825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.723909 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b"} err="failed to get container status \"825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\": rpc error: code = NotFound desc = could not find container \"825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\": container with ID starting with 825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.723929 4624 scope.go:117] "RemoveContainer" containerID="d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.724213 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d"} err="failed to get container status \"d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\": rpc error: code = NotFound desc = could not find container \"d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\": container with ID starting with d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.724238 4624 scope.go:117] "RemoveContainer" containerID="acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.724524 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415"} err="failed to get container status \"acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415\": rpc error: code = NotFound desc = could not find container \"acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415\": container with ID starting with acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415 not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.724543 4624 scope.go:117] "RemoveContainer" containerID="1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.724858 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c"} err="failed to get container status \"1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c\": rpc error: code = NotFound desc = could not find container \"1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c\": container with ID starting with 1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.724884 4624 scope.go:117] "RemoveContainer" containerID="1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.725111 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c"} err="failed to get container status \"1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\": rpc error: code = NotFound desc = could not find container \"1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\": container with ID starting with 1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.725126 4624 scope.go:117] "RemoveContainer" containerID="5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.725347 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e"} err="failed to get container status \"5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\": rpc error: code = NotFound desc = could not find container \"5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\": container with ID starting with 5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.725372 4624 scope.go:117] "RemoveContainer" containerID="697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.725629 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33"} err="failed to get container status \"697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\": rpc error: code = NotFound desc = could not find container \"697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\": container with ID starting with 697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33 not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.725657 4624 scope.go:117] "RemoveContainer" containerID="86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.725885 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b"} err="failed to get container status \"86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\": rpc error: code = NotFound desc = could not find container \"86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\": container with ID starting with 86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.725902 4624 scope.go:117] "RemoveContainer" containerID="fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.726152 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f"} err="failed to get container status \"fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\": rpc error: code = NotFound desc = could not find container \"fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\": container with ID starting with fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.726178 4624 scope.go:117] "RemoveContainer" containerID="1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.726597 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec"} err="failed to get container status \"1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\": rpc error: code = NotFound desc = could not find container \"1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\": container with ID starting with 1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.726616 4624 scope.go:117] "RemoveContainer" containerID="825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.726934 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b"} err="failed to get container status \"825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\": rpc error: code = NotFound desc = could not find container \"825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\": container with ID starting with 825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.726954 4624 scope.go:117] "RemoveContainer" containerID="d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.727211 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d"} err="failed to get container status \"d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\": rpc error: code = NotFound desc = could not find container \"d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\": container with ID starting with d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.727233 4624 scope.go:117] "RemoveContainer" containerID="acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.727464 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415"} err="failed to get container status \"acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415\": rpc error: code = NotFound desc = could not find container \"acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415\": container with ID starting with acf21fd76887adcb97de23d11c9e4b393f5d96ed7f341edaae399929ebb80415 not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.727484 4624 scope.go:117] "RemoveContainer" containerID="1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.727770 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c"} err="failed to get container status \"1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c\": rpc error: code = NotFound desc = could not find container \"1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c\": container with ID starting with 1c10f90c67075d8e210e5688a187f401ef54033d8ae51c54b498cee225d1981c not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.727790 4624 scope.go:117] "RemoveContainer" containerID="1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.728099 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c"} err="failed to get container status \"1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\": rpc error: code = NotFound desc = could not find container \"1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c\": container with ID starting with 1c9f000fa9f4029dc32b94941f7f36495d4caf8471444ba77c6849c2e89fec4c not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.728123 4624 scope.go:117] "RemoveContainer" containerID="5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.728415 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e"} err="failed to get container status \"5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\": rpc error: code = NotFound desc = could not find container \"5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e\": container with ID starting with 5c8d9611f63887d3f95babd71040ec56f097d789c10493eef5b0b58837fb3b2e not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.728435 4624 scope.go:117] "RemoveContainer" containerID="697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.728733 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33"} err="failed to get container status \"697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\": rpc error: code = NotFound desc = could not find container \"697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33\": container with ID starting with 697e0ff3de16fa66734db5127f28af0d545df10028710f2166643f58e8be7b33 not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.728761 4624 scope.go:117] "RemoveContainer" containerID="86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b"
Oct 08 14:33:30 crc kubenswrapper[4624]: W1008 14:33:30.729035 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95470b08_191c_4a39_bfd8_1f77cee19d4d.slice/crio-5a6411ac1e7d6b78f2fc570d10521891fc6b6936f8dc2713c33338c7d3aa5bbf WatchSource:0}: Error finding container 5a6411ac1e7d6b78f2fc570d10521891fc6b6936f8dc2713c33338c7d3aa5bbf: Status 404 returned error can't find the container with id 5a6411ac1e7d6b78f2fc570d10521891fc6b6936f8dc2713c33338c7d3aa5bbf
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.729267 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b"} err="failed to get container status \"86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\": rpc error: code = NotFound desc = could not find container \"86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b\": container with ID starting with 86dcb89a7913ce110386bd38917ac05d28865cf048f96df35cd33f5c100a330b not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.729289 4624 scope.go:117] "RemoveContainer" containerID="fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.729612 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f"} err="failed to get container status \"fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\": rpc error: code = NotFound desc = could not find container \"fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f\": container with ID starting with fb03cbb22b545ec34504e9e3590dc294621e018d46c2c8378d89d780bf58ce6f not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.729650 4624 scope.go:117] "RemoveContainer" containerID="1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.730009 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec"} err="failed to get container status \"1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\": rpc error: code = NotFound desc = could not find container \"1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec\": container with ID starting with 1df03e323ceaa0323675d6399375d235cf11571445a1dba6445a33e287199dec not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.730030 4624 scope.go:117] "RemoveContainer" containerID="825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.730306 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b"} err="failed to get container status \"825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\": rpc error: code = NotFound desc = could not find container \"825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b\": container with ID starting with 825f139266eb4df1096671668e962b9038627d2bbb512b644498d303874f670b not found: ID does not exist"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.730324 4624 scope.go:117] "RemoveContainer" containerID="d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d"
Oct 08 14:33:30 crc kubenswrapper[4624]: I1008 14:33:30.730595 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d"} err="failed to get container status \"d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\": rpc error: code = NotFound desc = could not find container \"d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d\": container with ID starting with d5cd40302cae928ceed5348ded41754f1de5aa16a17e9d452d88746a1ae8a11d not found: ID does not exist"
Oct 08 14:33:31 crc kubenswrapper[4624]: I1008 14:33:31.471893 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aad1abb7-073f-4157-b39f-ddc71fbab31d" path="/var/lib/kubelet/pods/aad1abb7-073f-4157-b39f-ddc71fbab31d/volumes"
Oct 08 14:33:31 crc kubenswrapper[4624]: I1008 14:33:31.533524 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-47hzf_48aee8dd-6063-4d3c-b65a-f37ce1ccdb82/kube-multus/2.log"
Oct 08 14:33:31 crc kubenswrapper[4624]: I1008 14:33:31.536771 4624 generic.go:334] "Generic (PLEG): container finished" podID="95470b08-191c-4a39-bfd8-1f77cee19d4d" containerID="d23c929d4d3675ca2f9cf054bd043f914b41d216c0680661da61c70cf11a699a" exitCode=0
Oct 08 14:33:31 crc kubenswrapper[4624]: I1008 14:33:31.536814 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" event={"ID":"95470b08-191c-4a39-bfd8-1f77cee19d4d","Type":"ContainerDied","Data":"d23c929d4d3675ca2f9cf054bd043f914b41d216c0680661da61c70cf11a699a"}
Oct 08 14:33:31 crc kubenswrapper[4624]: I1008 14:33:31.536842 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" event={"ID":"95470b08-191c-4a39-bfd8-1f77cee19d4d","Type":"ContainerStarted","Data":"5a6411ac1e7d6b78f2fc570d10521891fc6b6936f8dc2713c33338c7d3aa5bbf"}
Oct 08 14:33:32 crc kubenswrapper[4624]: I1008 14:33:32.544412 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" event={"ID":"95470b08-191c-4a39-bfd8-1f77cee19d4d","Type":"ContainerStarted","Data":"5df3aa3718691035d99a60adc2e9f11569d1605e76c83c7add9f7c413118d333"}
Oct 08 14:33:32 crc kubenswrapper[4624]: I1008 14:33:32.544748 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" event={"ID":"95470b08-191c-4a39-bfd8-1f77cee19d4d","Type":"ContainerStarted","Data":"437a0984e1f075d8b715c790c8dd70406cfa3596f33767154f2e60495c59c4ad"}
Oct 08 14:33:32 crc kubenswrapper[4624]: I1008 14:33:32.544764 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" event={"ID":"95470b08-191c-4a39-bfd8-1f77cee19d4d","Type":"ContainerStarted","Data":"ea92a3e0b9dc592bf259c796cdf13098a4f32114e78a14bb916b56b402e849dd"}
Oct 08 14:33:32 crc kubenswrapper[4624]: I1008 14:33:32.544773 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" event={"ID":"95470b08-191c-4a39-bfd8-1f77cee19d4d","Type":"ContainerStarted","Data":"ad5f39a72a98f5ca95ef3c82e6dcc014fd21258e302728f475f2412501c7c4a9"}
Oct 08 14:33:32 crc kubenswrapper[4624]: I1008 14:33:32.544783 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" event={"ID":"95470b08-191c-4a39-bfd8-1f77cee19d4d","Type":"ContainerStarted","Data":"b968220f873317e0dcd9294a6eb73d58b91c26706dbb0d9d4451ad239b1bd75c"}
Oct 08 14:33:32 crc kubenswrapper[4624]: I1008 14:33:32.544793 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" event={"ID":"95470b08-191c-4a39-bfd8-1f77cee19d4d","Type":"ContainerStarted","Data":"c16a78e5ef3c29c7fa4a3b46a856a80db1fe5bd4d1ae78432c7d7b1094a3784a"}
Oct 08 14:33:34 crc kubenswrapper[4624]: I1008 14:33:34.559354 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" event={"ID":"95470b08-191c-4a39-bfd8-1f77cee19d4d","Type":"ContainerStarted","Data":"70216a935dc9ff13c85a5c2345df6d2146c20e0706b98c1f96e6767369672ea9"}
Oct 08 14:33:37 crc kubenswrapper[4624]: I1008 14:33:37.578850 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" event={"ID":"95470b08-191c-4a39-bfd8-1f77cee19d4d","Type":"ContainerStarted","Data":"6591a0aec27a90513df54950c2f8a8f3fe2d0b8b50bedd411912d8a52d1ea9c8"}
Oct 08 14:33:37 crc kubenswrapper[4624]: I1008 14:33:37.579200 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:37 crc kubenswrapper[4624]: I1008 14:33:37.579232 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:37 crc kubenswrapper[4624]: I1008 14:33:37.611761 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:37 crc kubenswrapper[4624]: I1008 14:33:37.676018 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" podStartSLOduration=7.676000952 podStartE2EDuration="7.676000952s" podCreationTimestamp="2025-10-08 14:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:33:37.629673891 +0000 UTC m=+642.780608978" watchObservedRunningTime="2025-10-08 14:33:37.676000952 +0000 UTC m=+642.826936019"
Oct 08 14:33:38 crc kubenswrapper[4624]: I1008 14:33:38.583174 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:38 crc kubenswrapper[4624]: I1008 14:33:38.606053 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-frhrc"
Oct 08 14:33:43 crc kubenswrapper[4624]: I1008 14:33:43.465096 4624 scope.go:117] "RemoveContainer" containerID="f14fc924f93f3772d58a3e79005fffb23253439f492c41baf67b02ee9d67f50b"
Oct 08 14:33:43 crc kubenswrapper[4624]: E1008 14:33:43.465582 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-47hzf_openshift-multus(48aee8dd-6063-4d3c-b65a-f37ce1ccdb82)\"" pod="openshift-multus/multus-47hzf"
podUID="48aee8dd-6063-4d3c-b65a-f37ce1ccdb82" Oct 08 14:33:57 crc kubenswrapper[4624]: I1008 14:33:57.465959 4624 scope.go:117] "RemoveContainer" containerID="f14fc924f93f3772d58a3e79005fffb23253439f492c41baf67b02ee9d67f50b" Oct 08 14:33:57 crc kubenswrapper[4624]: I1008 14:33:57.677478 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-47hzf_48aee8dd-6063-4d3c-b65a-f37ce1ccdb82/kube-multus/2.log" Oct 08 14:33:57 crc kubenswrapper[4624]: I1008 14:33:57.677530 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-47hzf" event={"ID":"48aee8dd-6063-4d3c-b65a-f37ce1ccdb82","Type":"ContainerStarted","Data":"4bb41bc7e4e5f33d74bed5f6247b78f08dca366996bb864208dc4329002b4ca0"} Oct 08 14:34:00 crc kubenswrapper[4624]: I1008 14:34:00.722976 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-frhrc" Oct 08 14:34:12 crc kubenswrapper[4624]: I1008 14:34:12.629162 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb"] Oct 08 14:34:12 crc kubenswrapper[4624]: I1008 14:34:12.631072 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb" Oct 08 14:34:12 crc kubenswrapper[4624]: I1008 14:34:12.631465 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7412e80b-7f98-4f46-8223-2389077e9175-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb\" (UID: \"7412e80b-7f98-4f46-8223-2389077e9175\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb" Oct 08 14:34:12 crc kubenswrapper[4624]: I1008 14:34:12.631557 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7412e80b-7f98-4f46-8223-2389077e9175-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb\" (UID: \"7412e80b-7f98-4f46-8223-2389077e9175\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb" Oct 08 14:34:12 crc kubenswrapper[4624]: I1008 14:34:12.631593 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfqls\" (UniqueName: \"kubernetes.io/projected/7412e80b-7f98-4f46-8223-2389077e9175-kube-api-access-rfqls\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb\" (UID: \"7412e80b-7f98-4f46-8223-2389077e9175\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb" Oct 08 14:34:12 crc kubenswrapper[4624]: I1008 14:34:12.632732 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 08 14:34:12 crc kubenswrapper[4624]: I1008 14:34:12.640983 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb"] Oct 08 14:34:12 crc kubenswrapper[4624]: I1008 14:34:12.732846 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7412e80b-7f98-4f46-8223-2389077e9175-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb\" (UID: 
\"7412e80b-7f98-4f46-8223-2389077e9175\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb" Oct 08 14:34:12 crc kubenswrapper[4624]: I1008 14:34:12.732913 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfqls\" (UniqueName: \"kubernetes.io/projected/7412e80b-7f98-4f46-8223-2389077e9175-kube-api-access-rfqls\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb\" (UID: \"7412e80b-7f98-4f46-8223-2389077e9175\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb" Oct 08 14:34:12 crc kubenswrapper[4624]: I1008 14:34:12.732970 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7412e80b-7f98-4f46-8223-2389077e9175-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb\" (UID: \"7412e80b-7f98-4f46-8223-2389077e9175\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb" Oct 08 14:34:12 crc kubenswrapper[4624]: I1008 14:34:12.733505 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7412e80b-7f98-4f46-8223-2389077e9175-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb\" (UID: \"7412e80b-7f98-4f46-8223-2389077e9175\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb" Oct 08 14:34:12 crc kubenswrapper[4624]: I1008 14:34:12.734247 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7412e80b-7f98-4f46-8223-2389077e9175-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb\" (UID: \"7412e80b-7f98-4f46-8223-2389077e9175\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb" Oct 08 14:34:12 crc kubenswrapper[4624]: I1008 14:34:12.756538 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfqls\" (UniqueName: \"kubernetes.io/projected/7412e80b-7f98-4f46-8223-2389077e9175-kube-api-access-rfqls\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb\" (UID: \"7412e80b-7f98-4f46-8223-2389077e9175\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb" Oct 08 14:34:12 crc kubenswrapper[4624]: I1008 14:34:12.947854 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb" Oct 08 14:34:13 crc kubenswrapper[4624]: I1008 14:34:13.126327 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb"] Oct 08 14:34:13 crc kubenswrapper[4624]: W1008 14:34:13.132359 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7412e80b_7f98_4f46_8223_2389077e9175.slice/crio-df6ba10c58872b4ea31266e29091cffaa7184efb4a04f36879f336168ed56711 WatchSource:0}: Error finding container df6ba10c58872b4ea31266e29091cffaa7184efb4a04f36879f336168ed56711: Status 404 returned error can't find the container with id df6ba10c58872b4ea31266e29091cffaa7184efb4a04f36879f336168ed56711 Oct 08 14:34:13 crc kubenswrapper[4624]: I1008 14:34:13.758044 4624 generic.go:334] "Generic (PLEG): container finished" podID="7412e80b-7f98-4f46-8223-2389077e9175" containerID="09309d9f64bbbc8d811a8c080679ae65b56ed4b94ef4caa362a30bd9c60f3013" exitCode=0 Oct 08 14:34:13 crc kubenswrapper[4624]: I1008 14:34:13.758096 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb" event={"ID":"7412e80b-7f98-4f46-8223-2389077e9175","Type":"ContainerDied","Data":"09309d9f64bbbc8d811a8c080679ae65b56ed4b94ef4caa362a30bd9c60f3013"} Oct 08 14:34:13 crc kubenswrapper[4624]: I1008 14:34:13.758122 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb" event={"ID":"7412e80b-7f98-4f46-8223-2389077e9175","Type":"ContainerStarted","Data":"df6ba10c58872b4ea31266e29091cffaa7184efb4a04f36879f336168ed56711"} Oct 08 14:34:17 crc kubenswrapper[4624]: I1008 14:34:17.784539 4624 generic.go:334] "Generic (PLEG): container finished" podID="7412e80b-7f98-4f46-8223-2389077e9175" containerID="7a84094672958a1a4a03ebc28417380bf504b3bae05c828051a9bd45fedcfedb" exitCode=0 Oct 08 14:34:17 crc kubenswrapper[4624]: I1008 14:34:17.784747 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb" event={"ID":"7412e80b-7f98-4f46-8223-2389077e9175","Type":"ContainerDied","Data":"7a84094672958a1a4a03ebc28417380bf504b3bae05c828051a9bd45fedcfedb"} Oct 08 14:34:18 crc kubenswrapper[4624]: I1008 14:34:18.791680 4624 generic.go:334] "Generic (PLEG): container finished" podID="7412e80b-7f98-4f46-8223-2389077e9175" containerID="7ac53261dad51bb32225a336e8cbdc417510d769b20aa996a2bf123599ddf07d" exitCode=0 Oct 08 14:34:18 crc kubenswrapper[4624]: I1008 14:34:18.791724 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb" event={"ID":"7412e80b-7f98-4f46-8223-2389077e9175","Type":"ContainerDied","Data":"7ac53261dad51bb32225a336e8cbdc417510d769b20aa996a2bf123599ddf07d"} Oct 08 14:34:19 crc kubenswrapper[4624]: I1008 14:34:19.989902 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb" Oct 08 14:34:20 crc kubenswrapper[4624]: I1008 14:34:20.123314 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfqls\" (UniqueName: \"kubernetes.io/projected/7412e80b-7f98-4f46-8223-2389077e9175-kube-api-access-rfqls\") pod \"7412e80b-7f98-4f46-8223-2389077e9175\" (UID: \"7412e80b-7f98-4f46-8223-2389077e9175\") " Oct 08 14:34:20 crc kubenswrapper[4624]: I1008 14:34:20.123392 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7412e80b-7f98-4f46-8223-2389077e9175-util\") pod \"7412e80b-7f98-4f46-8223-2389077e9175\" (UID: \"7412e80b-7f98-4f46-8223-2389077e9175\") " Oct 08 14:34:20 crc kubenswrapper[4624]: I1008 14:34:20.123458 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7412e80b-7f98-4f46-8223-2389077e9175-bundle\") pod \"7412e80b-7f98-4f46-8223-2389077e9175\" (UID: \"7412e80b-7f98-4f46-8223-2389077e9175\") " Oct 08 14:34:20 crc kubenswrapper[4624]: I1008 14:34:20.124497 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7412e80b-7f98-4f46-8223-2389077e9175-bundle" (OuterVolumeSpecName: "bundle") pod "7412e80b-7f98-4f46-8223-2389077e9175" (UID: "7412e80b-7f98-4f46-8223-2389077e9175"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:34:20 crc kubenswrapper[4624]: I1008 14:34:20.128840 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7412e80b-7f98-4f46-8223-2389077e9175-kube-api-access-rfqls" (OuterVolumeSpecName: "kube-api-access-rfqls") pod "7412e80b-7f98-4f46-8223-2389077e9175" (UID: "7412e80b-7f98-4f46-8223-2389077e9175"). InnerVolumeSpecName "kube-api-access-rfqls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:34:20 crc kubenswrapper[4624]: I1008 14:34:20.136301 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7412e80b-7f98-4f46-8223-2389077e9175-util" (OuterVolumeSpecName: "util") pod "7412e80b-7f98-4f46-8223-2389077e9175" (UID: "7412e80b-7f98-4f46-8223-2389077e9175"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:34:20 crc kubenswrapper[4624]: I1008 14:34:20.224689 4624 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7412e80b-7f98-4f46-8223-2389077e9175-util\") on node \"crc\" DevicePath \"\"" Oct 08 14:34:20 crc kubenswrapper[4624]: I1008 14:34:20.224725 4624 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7412e80b-7f98-4f46-8223-2389077e9175-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:34:20 crc kubenswrapper[4624]: I1008 14:34:20.224735 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfqls\" (UniqueName: \"kubernetes.io/projected/7412e80b-7f98-4f46-8223-2389077e9175-kube-api-access-rfqls\") on node \"crc\" DevicePath \"\"" Oct 08 14:34:20 crc kubenswrapper[4624]: I1008 14:34:20.802809 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb" event={"ID":"7412e80b-7f98-4f46-8223-2389077e9175","Type":"ContainerDied","Data":"df6ba10c58872b4ea31266e29091cffaa7184efb4a04f36879f336168ed56711"} Oct 08 14:34:20 crc kubenswrapper[4624]: I1008 14:34:20.802846 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df6ba10c58872b4ea31266e29091cffaa7184efb4a04f36879f336168ed56711" Oct 08 14:34:20 crc kubenswrapper[4624]: I1008 14:34:20.802902 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb" Oct 08 14:34:24 crc kubenswrapper[4624]: I1008 14:34:24.395013 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-626pz"] Oct 08 14:34:24 crc kubenswrapper[4624]: E1008 14:34:24.395730 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7412e80b-7f98-4f46-8223-2389077e9175" containerName="extract" Oct 08 14:34:24 crc kubenswrapper[4624]: I1008 14:34:24.395742 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7412e80b-7f98-4f46-8223-2389077e9175" containerName="extract" Oct 08 14:34:24 crc kubenswrapper[4624]: E1008 14:34:24.395751 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7412e80b-7f98-4f46-8223-2389077e9175" containerName="pull" Oct 08 14:34:24 crc kubenswrapper[4624]: I1008 14:34:24.395757 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7412e80b-7f98-4f46-8223-2389077e9175" containerName="pull" Oct 08 14:34:24 crc kubenswrapper[4624]: E1008 14:34:24.395777 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7412e80b-7f98-4f46-8223-2389077e9175" containerName="util" Oct 08 14:34:24 crc kubenswrapper[4624]: I1008 14:34:24.395782 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7412e80b-7f98-4f46-8223-2389077e9175" containerName="util" Oct 08 14:34:24 crc kubenswrapper[4624]: I1008 14:34:24.395866 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="7412e80b-7f98-4f46-8223-2389077e9175" containerName="extract" Oct 08 14:34:24 crc kubenswrapper[4624]: I1008 14:34:24.396206 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-626pz" Oct 08 14:34:24 crc kubenswrapper[4624]: I1008 14:34:24.398779 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Oct 08 14:34:24 crc kubenswrapper[4624]: I1008 14:34:24.400071 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-vvv4j" Oct 08 14:34:24 crc kubenswrapper[4624]: I1008 14:34:24.403321 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Oct 08 14:34:24 crc kubenswrapper[4624]: I1008 14:34:24.416279 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-626pz"] Oct 08 14:34:24 crc kubenswrapper[4624]: I1008 14:34:24.476137 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcjtf\" (UniqueName: \"kubernetes.io/projected/caa42ed9-a0c0-4e5b-836a-9bfcd09c439e-kube-api-access-kcjtf\") pod \"nmstate-operator-858ddd8f98-626pz\" (UID: \"caa42ed9-a0c0-4e5b-836a-9bfcd09c439e\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-626pz" Oct 08 14:34:24 crc kubenswrapper[4624]: I1008 14:34:24.577579 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcjtf\" (UniqueName: \"kubernetes.io/projected/caa42ed9-a0c0-4e5b-836a-9bfcd09c439e-kube-api-access-kcjtf\") pod \"nmstate-operator-858ddd8f98-626pz\" (UID: \"caa42ed9-a0c0-4e5b-836a-9bfcd09c439e\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-626pz" Oct 08 14:34:24 crc kubenswrapper[4624]: I1008 14:34:24.601852 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcjtf\" (UniqueName: \"kubernetes.io/projected/caa42ed9-a0c0-4e5b-836a-9bfcd09c439e-kube-api-access-kcjtf\") pod \"nmstate-operator-858ddd8f98-626pz\" (UID: \"caa42ed9-a0c0-4e5b-836a-9bfcd09c439e\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-626pz" Oct 08 14:34:24 crc kubenswrapper[4624]: I1008 14:34:24.741665 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-626pz" Oct 08 14:34:25 crc kubenswrapper[4624]: I1008 14:34:25.171434 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-626pz"] Oct 08 14:34:25 crc kubenswrapper[4624]: I1008 14:34:25.835377 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-626pz" event={"ID":"caa42ed9-a0c0-4e5b-836a-9bfcd09c439e","Type":"ContainerStarted","Data":"916b5abdfd8fdf6771dfabff680f2fba7cb3f1077f0701a854552423c3668b18"} Oct 08 14:34:29 crc kubenswrapper[4624]: I1008 14:34:29.855924 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-626pz" event={"ID":"caa42ed9-a0c0-4e5b-836a-9bfcd09c439e","Type":"ContainerStarted","Data":"53eb42c671b60bb92e051fab93aa0418670f4fb200bd3a145e042ea74348259e"} Oct 08 14:34:29 crc kubenswrapper[4624]: I1008 14:34:29.878952 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-858ddd8f98-626pz" podStartSLOduration=1.779760028 podStartE2EDuration="5.878927121s" podCreationTimestamp="2025-10-08 14:34:24 +0000 UTC" firstStartedPulling="2025-10-08 14:34:25.186783772 +0000 UTC m=+690.337718849" lastFinishedPulling="2025-10-08 14:34:29.285950865 +0000 UTC m=+694.436885942" observedRunningTime="2025-10-08 14:34:29.876339965 +0000 UTC m=+695.027275042" watchObservedRunningTime="2025-10-08 14:34:29.878927121 +0000 UTC m=+695.029862188" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.332286 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-8bkj8"] Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.334823 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-8bkj8" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.337161 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-vpmbx" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.337361 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-jbwmj"] Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.337994 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-jbwmj" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.339134 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.362143 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-qfgz8"] Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.363026 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-qfgz8" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.388774 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e42689fb-9ba3-4ab8-8e3d-8ca52a091c49-ovs-socket\") pod \"nmstate-handler-qfgz8\" (UID: \"e42689fb-9ba3-4ab8-8e3d-8ca52a091c49\") " pod="openshift-nmstate/nmstate-handler-qfgz8" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.389030 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e42689fb-9ba3-4ab8-8e3d-8ca52a091c49-nmstate-lock\") pod \"nmstate-handler-qfgz8\" (UID: \"e42689fb-9ba3-4ab8-8e3d-8ca52a091c49\") " pod="openshift-nmstate/nmstate-handler-qfgz8" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.389141 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e42689fb-9ba3-4ab8-8e3d-8ca52a091c49-dbus-socket\") pod \"nmstate-handler-qfgz8\" (UID: \"e42689fb-9ba3-4ab8-8e3d-8ca52a091c49\") " pod="openshift-nmstate/nmstate-handler-qfgz8" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.389256 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlvvr\" (UniqueName: \"kubernetes.io/projected/e42689fb-9ba3-4ab8-8e3d-8ca52a091c49-kube-api-access-jlvvr\") pod \"nmstate-handler-qfgz8\" (UID: \"e42689fb-9ba3-4ab8-8e3d-8ca52a091c49\") " pod="openshift-nmstate/nmstate-handler-qfgz8" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.413745 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-8bkj8"] Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.431339 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-jbwmj"] Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.479626 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-tb675"] Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.480435 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-tb675" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.492242 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.492269 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-fs2lc" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.492452 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc82v\" (UniqueName: \"kubernetes.io/projected/96e0b6b6-38d2-491f-ae9a-5be4860d1da0-kube-api-access-dc82v\") pod \"nmstate-metrics-fdff9cb8d-8bkj8\" (UID: \"96e0b6b6-38d2-491f-ae9a-5be4860d1da0\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-8bkj8" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.492493 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e42689fb-9ba3-4ab8-8e3d-8ca52a091c49-ovs-socket\") pod \"nmstate-handler-qfgz8\" (UID: \"e42689fb-9ba3-4ab8-8e3d-8ca52a091c49\") " pod="openshift-nmstate/nmstate-handler-qfgz8" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.492543 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e42689fb-9ba3-4ab8-8e3d-8ca52a091c49-ovs-socket\") pod \"nmstate-handler-qfgz8\" (UID: \"e42689fb-9ba3-4ab8-8e3d-8ca52a091c49\") " pod="openshift-nmstate/nmstate-handler-qfgz8" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.492734 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.492843 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e42689fb-9ba3-4ab8-8e3d-8ca52a091c49-nmstate-lock\") pod \"nmstate-handler-qfgz8\" (UID: \"e42689fb-9ba3-4ab8-8e3d-8ca52a091c49\") " pod="openshift-nmstate/nmstate-handler-qfgz8" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.492879 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfvss\" (UniqueName: \"kubernetes.io/projected/d82b9420-16a1-4ddf-9239-7b649b9429d2-kube-api-access-hfvss\") pod \"nmstate-webhook-6cdbc54649-jbwmj\" (UID: \"d82b9420-16a1-4ddf-9239-7b649b9429d2\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-jbwmj" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.492919 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e42689fb-9ba3-4ab8-8e3d-8ca52a091c49-dbus-socket\") pod \"nmstate-handler-qfgz8\" (UID: \"e42689fb-9ba3-4ab8-8e3d-8ca52a091c49\") " pod="openshift-nmstate/nmstate-handler-qfgz8" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.492970 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlvvr\" (UniqueName: \"kubernetes.io/projected/e42689fb-9ba3-4ab8-8e3d-8ca52a091c49-kube-api-access-jlvvr\") pod \"nmstate-handler-qfgz8\" (UID: \"e42689fb-9ba3-4ab8-8e3d-8ca52a091c49\") " pod="openshift-nmstate/nmstate-handler-qfgz8" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.493010 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" 
(UniqueName: \"kubernetes.io/secret/d82b9420-16a1-4ddf-9239-7b649b9429d2-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-jbwmj\" (UID: \"d82b9420-16a1-4ddf-9239-7b649b9429d2\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-jbwmj" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.493647 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e42689fb-9ba3-4ab8-8e3d-8ca52a091c49-nmstate-lock\") pod \"nmstate-handler-qfgz8\" (UID: \"e42689fb-9ba3-4ab8-8e3d-8ca52a091c49\") " pod="openshift-nmstate/nmstate-handler-qfgz8" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.493780 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e42689fb-9ba3-4ab8-8e3d-8ca52a091c49-dbus-socket\") pod \"nmstate-handler-qfgz8\" (UID: \"e42689fb-9ba3-4ab8-8e3d-8ca52a091c49\") " pod="openshift-nmstate/nmstate-handler-qfgz8" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.496239 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-tb675"] Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.541869 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlvvr\" (UniqueName: \"kubernetes.io/projected/e42689fb-9ba3-4ab8-8e3d-8ca52a091c49-kube-api-access-jlvvr\") pod \"nmstate-handler-qfgz8\" (UID: \"e42689fb-9ba3-4ab8-8e3d-8ca52a091c49\") " pod="openshift-nmstate/nmstate-handler-qfgz8" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.599027 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cc684d95-c43f-4d51-abca-fe8d1719d548-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-tb675\" (UID: \"cc684d95-c43f-4d51-abca-fe8d1719d548\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-tb675" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.599073 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d82b9420-16a1-4ddf-9239-7b649b9429d2-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-jbwmj\" (UID: \"d82b9420-16a1-4ddf-9239-7b649b9429d2\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-jbwmj" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.599099 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc684d95-c43f-4d51-abca-fe8d1719d548-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-tb675\" (UID: \"cc684d95-c43f-4d51-abca-fe8d1719d548\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-tb675" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.599129 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc82v\" (UniqueName: \"kubernetes.io/projected/96e0b6b6-38d2-491f-ae9a-5be4860d1da0-kube-api-access-dc82v\") pod \"nmstate-metrics-fdff9cb8d-8bkj8\" (UID: \"96e0b6b6-38d2-491f-ae9a-5be4860d1da0\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-8bkj8" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.599174 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfvss\" (UniqueName: \"kubernetes.io/projected/d82b9420-16a1-4ddf-9239-7b649b9429d2-kube-api-access-hfvss\") pod \"nmstate-webhook-6cdbc54649-jbwmj\" 
(UID: \"d82b9420-16a1-4ddf-9239-7b649b9429d2\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-jbwmj" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.599196 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l247q\" (UniqueName: \"kubernetes.io/projected/cc684d95-c43f-4d51-abca-fe8d1719d548-kube-api-access-l247q\") pod \"nmstate-console-plugin-6b874cbd85-tb675\" (UID: \"cc684d95-c43f-4d51-abca-fe8d1719d548\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-tb675" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.602296 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d82b9420-16a1-4ddf-9239-7b649b9429d2-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-jbwmj\" (UID: \"d82b9420-16a1-4ddf-9239-7b649b9429d2\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-jbwmj" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.624271 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc82v\" (UniqueName: \"kubernetes.io/projected/96e0b6b6-38d2-491f-ae9a-5be4860d1da0-kube-api-access-dc82v\") pod \"nmstate-metrics-fdff9cb8d-8bkj8\" (UID: \"96e0b6b6-38d2-491f-ae9a-5be4860d1da0\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-8bkj8" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.633405 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfvss\" (UniqueName: \"kubernetes.io/projected/d82b9420-16a1-4ddf-9239-7b649b9429d2-kube-api-access-hfvss\") pod \"nmstate-webhook-6cdbc54649-jbwmj\" (UID: \"d82b9420-16a1-4ddf-9239-7b649b9429d2\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-jbwmj" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.664174 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-8bkj8" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.683435 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6ff968f97f-snrwv"] Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.684457 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.685526 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-jbwmj" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.700363 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-qfgz8" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.714572 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5396ecd-3671-4946-940f-ba3f7ffafde3-console-serving-cert\") pod \"console-6ff968f97f-snrwv\" (UID: \"e5396ecd-3671-4946-940f-ba3f7ffafde3\") " pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.714625 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e5396ecd-3671-4946-940f-ba3f7ffafde3-console-config\") pod \"console-6ff968f97f-snrwv\" (UID: \"e5396ecd-3671-4946-940f-ba3f7ffafde3\") " pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.714706 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e5396ecd-3671-4946-940f-ba3f7ffafde3-oauth-serving-cert\") pod \"console-6ff968f97f-snrwv\" (UID: \"e5396ecd-3671-4946-940f-ba3f7ffafde3\") " pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.714740 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l247q\" (UniqueName: \"kubernetes.io/projected/cc684d95-c43f-4d51-abca-fe8d1719d548-kube-api-access-l247q\") pod \"nmstate-console-plugin-6b874cbd85-tb675\" (UID: \"cc684d95-c43f-4d51-abca-fe8d1719d548\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-tb675" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.714826 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e5396ecd-3671-4946-940f-ba3f7ffafde3-service-ca\") pod \"console-6ff968f97f-snrwv\" (UID: \"e5396ecd-3671-4946-940f-ba3f7ffafde3\") " pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.714857 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cc684d95-c43f-4d51-abca-fe8d1719d548-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-tb675\" (UID: \"cc684d95-c43f-4d51-abca-fe8d1719d548\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-tb675" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.714906 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc684d95-c43f-4d51-abca-fe8d1719d548-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-tb675\" (UID: \"cc684d95-c43f-4d51-abca-fe8d1719d548\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-tb675" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.714946 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5396ecd-3671-4946-940f-ba3f7ffafde3-trusted-ca-bundle\") pod \"console-6ff968f97f-snrwv\" (UID: \"e5396ecd-3671-4946-940f-ba3f7ffafde3\") " pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.714982 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-7vjmf\" (UniqueName: \"kubernetes.io/projected/e5396ecd-3671-4946-940f-ba3f7ffafde3-kube-api-access-7vjmf\") pod \"console-6ff968f97f-snrwv\" (UID: \"e5396ecd-3671-4946-940f-ba3f7ffafde3\") " pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.715013 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e5396ecd-3671-4946-940f-ba3f7ffafde3-console-oauth-config\") pod \"console-6ff968f97f-snrwv\" (UID: \"e5396ecd-3671-4946-940f-ba3f7ffafde3\") " pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.722090 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cc684d95-c43f-4d51-abca-fe8d1719d548-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-tb675\" (UID: \"cc684d95-c43f-4d51-abca-fe8d1719d548\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-tb675" Oct 08 14:34:33 crc kubenswrapper[4624]: E1008 14:34:33.722252 4624 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Oct 08 14:34:33 crc kubenswrapper[4624]: E1008 14:34:33.722339 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc684d95-c43f-4d51-abca-fe8d1719d548-plugin-serving-cert podName:cc684d95-c43f-4d51-abca-fe8d1719d548 nodeName:}" failed. No retries permitted until 2025-10-08 14:34:34.222314918 +0000 UTC m=+699.373249995 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/cc684d95-c43f-4d51-abca-fe8d1719d548-plugin-serving-cert") pod "nmstate-console-plugin-6b874cbd85-tb675" (UID: "cc684d95-c43f-4d51-abca-fe8d1719d548") : secret "plugin-serving-cert" not found Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.766416 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6ff968f97f-snrwv"] Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.769540 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l247q\" (UniqueName: \"kubernetes.io/projected/cc684d95-c43f-4d51-abca-fe8d1719d548-kube-api-access-l247q\") pod \"nmstate-console-plugin-6b874cbd85-tb675\" (UID: \"cc684d95-c43f-4d51-abca-fe8d1719d548\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-tb675" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.815795 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e5396ecd-3671-4946-940f-ba3f7ffafde3-service-ca\") pod \"console-6ff968f97f-snrwv\" (UID: \"e5396ecd-3671-4946-940f-ba3f7ffafde3\") " pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.815865 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5396ecd-3671-4946-940f-ba3f7ffafde3-trusted-ca-bundle\") pod \"console-6ff968f97f-snrwv\" (UID: \"e5396ecd-3671-4946-940f-ba3f7ffafde3\") " pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.815887 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vjmf\" (UniqueName: 
\"kubernetes.io/projected/e5396ecd-3671-4946-940f-ba3f7ffafde3-kube-api-access-7vjmf\") pod \"console-6ff968f97f-snrwv\" (UID: \"e5396ecd-3671-4946-940f-ba3f7ffafde3\") " pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.815908 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e5396ecd-3671-4946-940f-ba3f7ffafde3-console-oauth-config\") pod \"console-6ff968f97f-snrwv\" (UID: \"e5396ecd-3671-4946-940f-ba3f7ffafde3\") " pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.815932 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5396ecd-3671-4946-940f-ba3f7ffafde3-console-serving-cert\") pod \"console-6ff968f97f-snrwv\" (UID: \"e5396ecd-3671-4946-940f-ba3f7ffafde3\") " pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.815949 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e5396ecd-3671-4946-940f-ba3f7ffafde3-console-config\") pod \"console-6ff968f97f-snrwv\" (UID: \"e5396ecd-3671-4946-940f-ba3f7ffafde3\") " pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.815966 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e5396ecd-3671-4946-940f-ba3f7ffafde3-oauth-serving-cert\") pod \"console-6ff968f97f-snrwv\" (UID: \"e5396ecd-3671-4946-940f-ba3f7ffafde3\") " pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.816715 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e5396ecd-3671-4946-940f-ba3f7ffafde3-oauth-serving-cert\") pod \"console-6ff968f97f-snrwv\" (UID: \"e5396ecd-3671-4946-940f-ba3f7ffafde3\") " pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.820465 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e5396ecd-3671-4946-940f-ba3f7ffafde3-console-config\") pod \"console-6ff968f97f-snrwv\" (UID: \"e5396ecd-3671-4946-940f-ba3f7ffafde3\") " pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.821288 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e5396ecd-3671-4946-940f-ba3f7ffafde3-service-ca\") pod \"console-6ff968f97f-snrwv\" (UID: \"e5396ecd-3671-4946-940f-ba3f7ffafde3\") " pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.821996 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5396ecd-3671-4946-940f-ba3f7ffafde3-trusted-ca-bundle\") pod \"console-6ff968f97f-snrwv\" (UID: \"e5396ecd-3671-4946-940f-ba3f7ffafde3\") " pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.826202 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/e5396ecd-3671-4946-940f-ba3f7ffafde3-console-oauth-config\") pod \"console-6ff968f97f-snrwv\" (UID: \"e5396ecd-3671-4946-940f-ba3f7ffafde3\") " pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.837783 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5396ecd-3671-4946-940f-ba3f7ffafde3-console-serving-cert\") pod \"console-6ff968f97f-snrwv\" (UID: \"e5396ecd-3671-4946-940f-ba3f7ffafde3\") " pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.846008 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vjmf\" (UniqueName: \"kubernetes.io/projected/e5396ecd-3671-4946-940f-ba3f7ffafde3-kube-api-access-7vjmf\") pod \"console-6ff968f97f-snrwv\" (UID: \"e5396ecd-3671-4946-940f-ba3f7ffafde3\") " pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:33 crc kubenswrapper[4624]: I1008 14:34:33.908833 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-qfgz8" event={"ID":"e42689fb-9ba3-4ab8-8e3d-8ca52a091c49","Type":"ContainerStarted","Data":"d32baa3769d5bf4a9c418a185e371d309e64308409853159a96a2ea8ec822415"} Oct 08 14:34:34 crc kubenswrapper[4624]: I1008 14:34:34.039283 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:34 crc kubenswrapper[4624]: I1008 14:34:34.084399 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-jbwmj"] Oct 08 14:34:34 crc kubenswrapper[4624]: I1008 14:34:34.224754 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc684d95-c43f-4d51-abca-fe8d1719d548-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-tb675\" (UID: \"cc684d95-c43f-4d51-abca-fe8d1719d548\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-tb675" Oct 08 14:34:34 crc kubenswrapper[4624]: I1008 14:34:34.229374 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc684d95-c43f-4d51-abca-fe8d1719d548-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-tb675\" (UID: \"cc684d95-c43f-4d51-abca-fe8d1719d548\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-tb675" Oct 08 14:34:34 crc kubenswrapper[4624]: I1008 14:34:34.253036 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-8bkj8"] Oct 08 14:34:34 crc kubenswrapper[4624]: I1008 14:34:34.289539 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6ff968f97f-snrwv"] Oct 08 14:34:34 crc kubenswrapper[4624]: W1008 14:34:34.293606 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5396ecd_3671_4946_940f_ba3f7ffafde3.slice/crio-39cfd6bc0c37d578b855805102f56d9663458074b768d310e816bc409adaaa74 WatchSource:0}: Error finding container 39cfd6bc0c37d578b855805102f56d9663458074b768d310e816bc409adaaa74: Status 404 returned error can't find the container with id 39cfd6bc0c37d578b855805102f56d9663458074b768d310e816bc409adaaa74 Oct 08 14:34:34 crc kubenswrapper[4624]: I1008 14:34:34.404593 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-tb675" Oct 08 14:34:34 crc kubenswrapper[4624]: I1008 14:34:34.614422 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-tb675"] Oct 08 14:34:34 crc kubenswrapper[4624]: W1008 14:34:34.620284 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc684d95_c43f_4d51_abca_fe8d1719d548.slice/crio-c6b0a00073ac65c41b2be7c00b1cff8912ff4ba3cbe894a1b0cbaea77cb8b91b WatchSource:0}: Error finding container c6b0a00073ac65c41b2be7c00b1cff8912ff4ba3cbe894a1b0cbaea77cb8b91b: Status 404 returned error can't find the container with id c6b0a00073ac65c41b2be7c00b1cff8912ff4ba3cbe894a1b0cbaea77cb8b91b Oct 08 14:34:34 crc kubenswrapper[4624]: I1008 14:34:34.914112 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-tb675" event={"ID":"cc684d95-c43f-4d51-abca-fe8d1719d548","Type":"ContainerStarted","Data":"c6b0a00073ac65c41b2be7c00b1cff8912ff4ba3cbe894a1b0cbaea77cb8b91b"} Oct 08 14:34:34 crc kubenswrapper[4624]: I1008 14:34:34.915471 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-8bkj8" event={"ID":"96e0b6b6-38d2-491f-ae9a-5be4860d1da0","Type":"ContainerStarted","Data":"5007758d73c98014838d02a21abe94a1bbf893e0b1ec5705c34387da5250453e"} Oct 08 14:34:34 crc kubenswrapper[4624]: I1008 14:34:34.917846 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6ff968f97f-snrwv" event={"ID":"e5396ecd-3671-4946-940f-ba3f7ffafde3","Type":"ContainerStarted","Data":"c63b0cc0183e85119bfe9fe390292f52c07b9531704e40769c90fc6df1605eeb"} Oct 08 14:34:34 crc kubenswrapper[4624]: I1008 14:34:34.917903 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6ff968f97f-snrwv" event={"ID":"e5396ecd-3671-4946-940f-ba3f7ffafde3","Type":"ContainerStarted","Data":"39cfd6bc0c37d578b855805102f56d9663458074b768d310e816bc409adaaa74"} Oct 08 14:34:34 crc kubenswrapper[4624]: I1008 14:34:34.920149 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-jbwmj" event={"ID":"d82b9420-16a1-4ddf-9239-7b649b9429d2","Type":"ContainerStarted","Data":"cfb1728fb987c7a8216af16cc91600fe670daed6b8330b6075a86e3b9304c713"} Oct 08 14:34:34 crc kubenswrapper[4624]: I1008 14:34:34.938188 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6ff968f97f-snrwv" podStartSLOduration=1.9381702490000001 podStartE2EDuration="1.938170249s" podCreationTimestamp="2025-10-08 14:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:34:34.93082837 +0000 UTC m=+700.081763447" watchObservedRunningTime="2025-10-08 14:34:34.938170249 +0000 UTC m=+700.089105316" Oct 08 14:34:37 crc kubenswrapper[4624]: I1008 14:34:37.951333 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-jbwmj" event={"ID":"d82b9420-16a1-4ddf-9239-7b649b9429d2","Type":"ContainerStarted","Data":"6801c5da1be6ea3f14557c4ee11fbe02995a41f7b5eeff8e20decccf0e424de2"} Oct 08 14:34:37 crc kubenswrapper[4624]: I1008 14:34:37.951940 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-jbwmj" Oct 08 14:34:37 crc 
kubenswrapper[4624]: I1008 14:34:37.953190 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-qfgz8" event={"ID":"e42689fb-9ba3-4ab8-8e3d-8ca52a091c49","Type":"ContainerStarted","Data":"7d86faee683f3f383a7a319772b377091170e796c6f284a47fdf384fadcba9d0"} Oct 08 14:34:37 crc kubenswrapper[4624]: I1008 14:34:37.953359 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-qfgz8" Oct 08 14:34:37 crc kubenswrapper[4624]: I1008 14:34:37.955342 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-tb675" event={"ID":"cc684d95-c43f-4d51-abca-fe8d1719d548","Type":"ContainerStarted","Data":"47b9a6b24bc11c35b306cc0abee7ee9063b908de6dd06534321611e266683229"} Oct 08 14:34:37 crc kubenswrapper[4624]: I1008 14:34:37.957500 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-8bkj8" event={"ID":"96e0b6b6-38d2-491f-ae9a-5be4860d1da0","Type":"ContainerStarted","Data":"e9d02910d895e31ceb89ae8c07dd97d76db583cd07b95b116e538e9060f45aa7"} Oct 08 14:34:37 crc kubenswrapper[4624]: I1008 14:34:37.975117 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-jbwmj" podStartSLOduration=1.726505897 podStartE2EDuration="4.975102701s" podCreationTimestamp="2025-10-08 14:34:33 +0000 UTC" firstStartedPulling="2025-10-08 14:34:34.107897542 +0000 UTC m=+699.258832619" lastFinishedPulling="2025-10-08 14:34:37.356494346 +0000 UTC m=+702.507429423" observedRunningTime="2025-10-08 14:34:37.971288453 +0000 UTC m=+703.122223530" watchObservedRunningTime="2025-10-08 14:34:37.975102701 +0000 UTC m=+703.126037778" Oct 08 14:34:38 crc kubenswrapper[4624]: I1008 14:34:38.016963 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-tb675" podStartSLOduration=2.283738574 podStartE2EDuration="5.016938356s" podCreationTimestamp="2025-10-08 14:34:33 +0000 UTC" firstStartedPulling="2025-10-08 14:34:34.622737119 +0000 UTC m=+699.773672196" lastFinishedPulling="2025-10-08 14:34:37.355936901 +0000 UTC m=+702.506871978" observedRunningTime="2025-10-08 14:34:38.006719364 +0000 UTC m=+703.157654441" watchObservedRunningTime="2025-10-08 14:34:38.016938356 +0000 UTC m=+703.167873433" Oct 08 14:34:38 crc kubenswrapper[4624]: I1008 14:34:38.028586 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-qfgz8" podStartSLOduration=1.447465092 podStartE2EDuration="5.028557615s" podCreationTimestamp="2025-10-08 14:34:33 +0000 UTC" firstStartedPulling="2025-10-08 14:34:33.774859719 +0000 UTC m=+698.925794796" lastFinishedPulling="2025-10-08 14:34:37.355952242 +0000 UTC m=+702.506887319" observedRunningTime="2025-10-08 14:34:38.02835855 +0000 UTC m=+703.179293637" watchObservedRunningTime="2025-10-08 14:34:38.028557615 +0000 UTC m=+703.179492692" Oct 08 14:34:40 crc kubenswrapper[4624]: I1008 14:34:40.974401 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-8bkj8" event={"ID":"96e0b6b6-38d2-491f-ae9a-5be4860d1da0","Type":"ContainerStarted","Data":"49846ad48842897be411a4a98819678cb9646226e6de293a9fe7df92c3150863"} Oct 08 14:34:43 crc kubenswrapper[4624]: I1008 14:34:43.723797 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-qfgz8" Oct 08 
Oct 08 14:34:43 crc kubenswrapper[4624]: I1008 14:34:43.739795 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-8bkj8" podStartSLOduration=4.556647251 podStartE2EDuration="10.739761794s" podCreationTimestamp="2025-10-08 14:34:33 +0000 UTC" firstStartedPulling="2025-10-08 14:34:34.259015317 +0000 UTC m=+699.409950394" lastFinishedPulling="2025-10-08 14:34:40.44212986 +0000 UTC m=+705.593064937" observedRunningTime="2025-10-08 14:34:40.993796084 +0000 UTC m=+706.144731161" watchObservedRunningTime="2025-10-08 14:34:43.739761794 +0000 UTC m=+708.890696871" Oct 08 14:34:44 crc kubenswrapper[4624]: I1008 14:34:44.040163 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:44 crc kubenswrapper[4624]: I1008 14:34:44.040260 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:44 crc kubenswrapper[4624]: I1008 14:34:44.045792 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:44 crc kubenswrapper[4624]: I1008 14:34:44.998477 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6ff968f97f-snrwv" Oct 08 14:34:45 crc kubenswrapper[4624]: I1008 14:34:45.046928 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-rgn2q"] Oct 08 14:34:53 crc kubenswrapper[4624]: I1008 14:34:53.693791 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-jbwmj" Oct 08 14:35:00 crc kubenswrapper[4624]: I1008 14:35:00.077716 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:35:00 crc kubenswrapper[4624]: I1008 14:35:00.078254 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
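
The patch_prober/prober pair above records one failed Liveness probe against the machine-config-daemon health endpoint (connection refused on 127.0.0.1:8798): the kubelet logs the raw HTTP error first and then the structured "Probe failed" record. Entries like these can be pulled out of a dump such as this one by matching the klog header (level letter plus date, time, PID, file:line); a rough sketch, assuming one record per input line:

    import re
    import sys

    # klog header shape: I1008 14:35:00.078254 4624 prober.go:107] message...
    KLOG = re.compile(r'([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+) +(\d+) ([\w./-]+:\d+)\] (.*)')

    for line in sys.stdin:
        m = KLOG.search(line)
        if m and "Probe failed" in m.group(6):
            level, date, time_, pid, src, msg = m.groups()
            print(time_, src, msg[:120])
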
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs" Oct 08 14:35:07 crc kubenswrapper[4624]: I1008 14:35:07.842225 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 08 14:35:07 crc kubenswrapper[4624]: I1008 14:35:07.851713 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs"] Oct 08 14:35:07 crc kubenswrapper[4624]: I1008 14:35:07.876939 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fggq5\" (UniqueName: \"kubernetes.io/projected/d35debc8-9835-42bc-833e-7412681e9a4d-kube-api-access-fggq5\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs\" (UID: \"d35debc8-9835-42bc-833e-7412681e9a4d\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs" Oct 08 14:35:07 crc kubenswrapper[4624]: I1008 14:35:07.876996 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d35debc8-9835-42bc-833e-7412681e9a4d-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs\" (UID: \"d35debc8-9835-42bc-833e-7412681e9a4d\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs" Oct 08 14:35:07 crc kubenswrapper[4624]: I1008 14:35:07.877020 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d35debc8-9835-42bc-833e-7412681e9a4d-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs\" (UID: \"d35debc8-9835-42bc-833e-7412681e9a4d\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs" Oct 08 14:35:07 crc kubenswrapper[4624]: I1008 14:35:07.978339 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fggq5\" (UniqueName: \"kubernetes.io/projected/d35debc8-9835-42bc-833e-7412681e9a4d-kube-api-access-fggq5\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs\" (UID: \"d35debc8-9835-42bc-833e-7412681e9a4d\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs" Oct 08 14:35:07 crc kubenswrapper[4624]: I1008 14:35:07.978421 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d35debc8-9835-42bc-833e-7412681e9a4d-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs\" (UID: \"d35debc8-9835-42bc-833e-7412681e9a4d\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs" Oct 08 14:35:07 crc kubenswrapper[4624]: I1008 14:35:07.978442 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d35debc8-9835-42bc-833e-7412681e9a4d-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs\" (UID: \"d35debc8-9835-42bc-833e-7412681e9a4d\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs" Oct 08 14:35:07 crc kubenswrapper[4624]: I1008 14:35:07.979051 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/d35debc8-9835-42bc-833e-7412681e9a4d-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs\" (UID: \"d35debc8-9835-42bc-833e-7412681e9a4d\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs" Oct 08 14:35:07 crc kubenswrapper[4624]: I1008 14:35:07.979115 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d35debc8-9835-42bc-833e-7412681e9a4d-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs\" (UID: \"d35debc8-9835-42bc-833e-7412681e9a4d\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs" Oct 08 14:35:07 crc kubenswrapper[4624]: I1008 14:35:07.997254 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fggq5\" (UniqueName: \"kubernetes.io/projected/d35debc8-9835-42bc-833e-7412681e9a4d-kube-api-access-fggq5\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs\" (UID: \"d35debc8-9835-42bc-833e-7412681e9a4d\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs" Oct 08 14:35:08 crc kubenswrapper[4624]: I1008 14:35:08.154263 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs" Oct 08 14:35:08 crc kubenswrapper[4624]: I1008 14:35:08.339339 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs"] Oct 08 14:35:08 crc kubenswrapper[4624]: W1008 14:35:08.344981 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd35debc8_9835_42bc_833e_7412681e9a4d.slice/crio-1c0784db2719363dab11619085b5e7a6a7805c5472ba2ba1eabf9832a9535585 WatchSource:0}: Error finding container 1c0784db2719363dab11619085b5e7a6a7805c5472ba2ba1eabf9832a9535585: Status 404 returned error can't find the container with id 1c0784db2719363dab11619085b5e7a6a7805c5472ba2ba1eabf9832a9535585 Oct 08 14:35:09 crc kubenswrapper[4624]: I1008 14:35:09.118412 4624 generic.go:334] "Generic (PLEG): container finished" podID="d35debc8-9835-42bc-833e-7412681e9a4d" containerID="82618a3115ff1cd76aba2abe872c71554ca4b1cba2eb51d44551184d134db49f" exitCode=0 Oct 08 14:35:09 crc kubenswrapper[4624]: I1008 14:35:09.118454 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs" event={"ID":"d35debc8-9835-42bc-833e-7412681e9a4d","Type":"ContainerDied","Data":"82618a3115ff1cd76aba2abe872c71554ca4b1cba2eb51d44551184d134db49f"} Oct 08 14:35:09 crc kubenswrapper[4624]: I1008 14:35:09.118479 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs" event={"ID":"d35debc8-9835-42bc-833e-7412681e9a4d","Type":"ContainerStarted","Data":"1c0784db2719363dab11619085b5e7a6a7805c5472ba2ba1eabf9832a9535585"} Oct 08 14:35:10 crc kubenswrapper[4624]: I1008 14:35:10.089494 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-rgn2q" podUID="82676f42-aabb-4cee-b836-790a48dd9a2e" containerName="console" containerID="cri-o://7058fb5813ed3fa07b017c20a830d69435fedbab1079e27e62d8e7b61f6f38bc" gracePeriod=15 Oct 08 14:35:11 crc 
kubenswrapper[4624]: I1008 14:35:11.136816 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-rgn2q_82676f42-aabb-4cee-b836-790a48dd9a2e/console/0.log" Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.136858 4624 generic.go:334] "Generic (PLEG): container finished" podID="82676f42-aabb-4cee-b836-790a48dd9a2e" containerID="7058fb5813ed3fa07b017c20a830d69435fedbab1079e27e62d8e7b61f6f38bc" exitCode=2 Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.136884 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rgn2q" event={"ID":"82676f42-aabb-4cee-b836-790a48dd9a2e","Type":"ContainerDied","Data":"7058fb5813ed3fa07b017c20a830d69435fedbab1079e27e62d8e7b61f6f38bc"} Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.248359 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-rgn2q_82676f42-aabb-4cee-b836-790a48dd9a2e/console/0.log" Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.248447 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.332994 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-console-config\") pod \"82676f42-aabb-4cee-b836-790a48dd9a2e\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.333343 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-oauth-serving-cert\") pod \"82676f42-aabb-4cee-b836-790a48dd9a2e\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.333367 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtf92\" (UniqueName: \"kubernetes.io/projected/82676f42-aabb-4cee-b836-790a48dd9a2e-kube-api-access-qtf92\") pod \"82676f42-aabb-4cee-b836-790a48dd9a2e\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.333394 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-service-ca\") pod \"82676f42-aabb-4cee-b836-790a48dd9a2e\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.333417 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/82676f42-aabb-4cee-b836-790a48dd9a2e-console-serving-cert\") pod \"82676f42-aabb-4cee-b836-790a48dd9a2e\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.333432 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/82676f42-aabb-4cee-b836-790a48dd9a2e-console-oauth-config\") pod \"82676f42-aabb-4cee-b836-790a48dd9a2e\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.333450 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-trusted-ca-bundle\") pod \"82676f42-aabb-4cee-b836-790a48dd9a2e\" (UID: \"82676f42-aabb-4cee-b836-790a48dd9a2e\") " Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.333984 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-service-ca" (OuterVolumeSpecName: "service-ca") pod "82676f42-aabb-4cee-b836-790a48dd9a2e" (UID: "82676f42-aabb-4cee-b836-790a48dd9a2e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.334328 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "82676f42-aabb-4cee-b836-790a48dd9a2e" (UID: "82676f42-aabb-4cee-b836-790a48dd9a2e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.334505 4624 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-service-ca\") on node \"crc\" DevicePath \"\"" Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.334525 4624 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.334683 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "82676f42-aabb-4cee-b836-790a48dd9a2e" (UID: "82676f42-aabb-4cee-b836-790a48dd9a2e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.334727 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-console-config" (OuterVolumeSpecName: "console-config") pod "82676f42-aabb-4cee-b836-790a48dd9a2e" (UID: "82676f42-aabb-4cee-b836-790a48dd9a2e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.338426 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82676f42-aabb-4cee-b836-790a48dd9a2e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "82676f42-aabb-4cee-b836-790a48dd9a2e" (UID: "82676f42-aabb-4cee-b836-790a48dd9a2e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.339056 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82676f42-aabb-4cee-b836-790a48dd9a2e-kube-api-access-qtf92" (OuterVolumeSpecName: "kube-api-access-qtf92") pod "82676f42-aabb-4cee-b836-790a48dd9a2e" (UID: "82676f42-aabb-4cee-b836-790a48dd9a2e"). InnerVolumeSpecName "kube-api-access-qtf92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.341891 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82676f42-aabb-4cee-b836-790a48dd9a2e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "82676f42-aabb-4cee-b836-790a48dd9a2e" (UID: "82676f42-aabb-4cee-b836-790a48dd9a2e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.435660 4624 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-console-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.435700 4624 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/82676f42-aabb-4cee-b836-790a48dd9a2e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.435714 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtf92\" (UniqueName: \"kubernetes.io/projected/82676f42-aabb-4cee-b836-790a48dd9a2e-kube-api-access-qtf92\") on node \"crc\" DevicePath \"\"" Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.435728 4624 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/82676f42-aabb-4cee-b836-790a48dd9a2e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.435738 4624 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/82676f42-aabb-4cee-b836-790a48dd9a2e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:35:11 crc kubenswrapper[4624]: I1008 14:35:11.571052 4624 scope.go:117] "RemoveContainer" containerID="7058fb5813ed3fa07b017c20a830d69435fedbab1079e27e62d8e7b61f6f38bc" Oct 08 14:35:12 crc kubenswrapper[4624]: I1008 14:35:12.143173 4624 generic.go:334] "Generic (PLEG): container finished" podID="d35debc8-9835-42bc-833e-7412681e9a4d" containerID="4a7edd7b13da2f0617ebb15efce5bd93d9a915d0e4a81050357a0e44a420fbfd" exitCode=0 Oct 08 14:35:12 crc kubenswrapper[4624]: I1008 14:35:12.143245 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-rgn2q" Oct 08 14:35:12 crc kubenswrapper[4624]: I1008 14:35:12.143259 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs" event={"ID":"d35debc8-9835-42bc-833e-7412681e9a4d","Type":"ContainerDied","Data":"4a7edd7b13da2f0617ebb15efce5bd93d9a915d0e4a81050357a0e44a420fbfd"} Oct 08 14:35:12 crc kubenswrapper[4624]: I1008 14:35:12.143306 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rgn2q" event={"ID":"82676f42-aabb-4cee-b836-790a48dd9a2e","Type":"ContainerDied","Data":"866dc6618eb46a6a1619518e41b7fc3c7779e8fde3fbdfb9e62e4f52fa35d82e"} Oct 08 14:35:12 crc kubenswrapper[4624]: I1008 14:35:12.194604 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-rgn2q"] Oct 08 14:35:12 crc kubenswrapper[4624]: I1008 14:35:12.198698 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-rgn2q"] Oct 08 14:35:13 crc kubenswrapper[4624]: I1008 14:35:13.149332 4624 generic.go:334] "Generic (PLEG): container finished" podID="d35debc8-9835-42bc-833e-7412681e9a4d" containerID="8cfa2e858f72c3956864be7c1ee92f24488dfa3dcc1f9c63519bdbfcd1e80228" exitCode=0 Oct 08 14:35:13 crc kubenswrapper[4624]: I1008 14:35:13.149412 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs" event={"ID":"d35debc8-9835-42bc-833e-7412681e9a4d","Type":"ContainerDied","Data":"8cfa2e858f72c3956864be7c1ee92f24488dfa3dcc1f9c63519bdbfcd1e80228"} Oct 08 14:35:13 crc kubenswrapper[4624]: I1008 14:35:13.472581 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82676f42-aabb-4cee-b836-790a48dd9a2e" path="/var/lib/kubelet/pods/82676f42-aabb-4cee-b836-790a48dd9a2e/volumes" Oct 08 14:35:14 crc kubenswrapper[4624]: I1008 14:35:14.359595 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs" Oct 08 14:35:14 crc kubenswrapper[4624]: I1008 14:35:14.472234 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fggq5\" (UniqueName: \"kubernetes.io/projected/d35debc8-9835-42bc-833e-7412681e9a4d-kube-api-access-fggq5\") pod \"d35debc8-9835-42bc-833e-7412681e9a4d\" (UID: \"d35debc8-9835-42bc-833e-7412681e9a4d\") " Oct 08 14:35:14 crc kubenswrapper[4624]: I1008 14:35:14.472277 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d35debc8-9835-42bc-833e-7412681e9a4d-bundle\") pod \"d35debc8-9835-42bc-833e-7412681e9a4d\" (UID: \"d35debc8-9835-42bc-833e-7412681e9a4d\") " Oct 08 14:35:14 crc kubenswrapper[4624]: I1008 14:35:14.472334 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d35debc8-9835-42bc-833e-7412681e9a4d-util\") pod \"d35debc8-9835-42bc-833e-7412681e9a4d\" (UID: \"d35debc8-9835-42bc-833e-7412681e9a4d\") " Oct 08 14:35:14 crc kubenswrapper[4624]: I1008 14:35:14.473480 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d35debc8-9835-42bc-833e-7412681e9a4d-bundle" (OuterVolumeSpecName: "bundle") pod "d35debc8-9835-42bc-833e-7412681e9a4d" (UID: "d35debc8-9835-42bc-833e-7412681e9a4d"). 
InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:35:14 crc kubenswrapper[4624]: I1008 14:35:14.480855 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d35debc8-9835-42bc-833e-7412681e9a4d-kube-api-access-fggq5" (OuterVolumeSpecName: "kube-api-access-fggq5") pod "d35debc8-9835-42bc-833e-7412681e9a4d" (UID: "d35debc8-9835-42bc-833e-7412681e9a4d"). InnerVolumeSpecName "kube-api-access-fggq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:35:14 crc kubenswrapper[4624]: I1008 14:35:14.482648 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d35debc8-9835-42bc-833e-7412681e9a4d-util" (OuterVolumeSpecName: "util") pod "d35debc8-9835-42bc-833e-7412681e9a4d" (UID: "d35debc8-9835-42bc-833e-7412681e9a4d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:35:14 crc kubenswrapper[4624]: I1008 14:35:14.574521 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fggq5\" (UniqueName: \"kubernetes.io/projected/d35debc8-9835-42bc-833e-7412681e9a4d-kube-api-access-fggq5\") on node \"crc\" DevicePath \"\"" Oct 08 14:35:14 crc kubenswrapper[4624]: I1008 14:35:14.574590 4624 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d35debc8-9835-42bc-833e-7412681e9a4d-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:35:14 crc kubenswrapper[4624]: I1008 14:35:14.574605 4624 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d35debc8-9835-42bc-833e-7412681e9a4d-util\") on node \"crc\" DevicePath \"\"" Oct 08 14:35:15 crc kubenswrapper[4624]: I1008 14:35:15.161950 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs" event={"ID":"d35debc8-9835-42bc-833e-7412681e9a4d","Type":"ContainerDied","Data":"1c0784db2719363dab11619085b5e7a6a7805c5472ba2ba1eabf9832a9535585"} Oct 08 14:35:15 crc kubenswrapper[4624]: I1008 14:35:15.162003 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c0784db2719363dab11619085b5e7a6a7805c5472ba2ba1eabf9832a9535585" Oct 08 14:35:15 crc kubenswrapper[4624]: I1008 14:35:15.162070 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs" Oct 08 14:35:21 crc kubenswrapper[4624]: I1008 14:35:21.883601 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tw5tg"] Oct 08 14:35:21 crc kubenswrapper[4624]: I1008 14:35:21.884404 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg" podUID="98001614-6da0-4175-854a-d9af45077799" containerName="controller-manager" containerID="cri-o://f448d24c4f28b42b4cc879c03d0945f5dca8c765a7ee06c2188c61153bb4a447" gracePeriod=30 Oct 08 14:35:21 crc kubenswrapper[4624]: I1008 14:35:21.902349 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc"] Oct 08 14:35:21 crc kubenswrapper[4624]: I1008 14:35:21.902618 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" podUID="48945eb6-75ee-4ed0-bc04-f83b5888d85d" containerName="route-controller-manager" containerID="cri-o://331a5768310336a3e86ae1c6e88b0d1d6a3f5379f6fcb6c7b2c6b869a5bcde76" gracePeriod=30 Oct 08 14:35:22 crc kubenswrapper[4624]: I1008 14:35:22.200198 4624 generic.go:334] "Generic (PLEG): container finished" podID="98001614-6da0-4175-854a-d9af45077799" containerID="f448d24c4f28b42b4cc879c03d0945f5dca8c765a7ee06c2188c61153bb4a447" exitCode=0 Oct 08 14:35:22 crc kubenswrapper[4624]: I1008 14:35:22.200289 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg" event={"ID":"98001614-6da0-4175-854a-d9af45077799","Type":"ContainerDied","Data":"f448d24c4f28b42b4cc879c03d0945f5dca8c765a7ee06c2188c61153bb4a447"} Oct 08 14:35:22 crc kubenswrapper[4624]: I1008 14:35:22.202489 4624 generic.go:334] "Generic (PLEG): container finished" podID="48945eb6-75ee-4ed0-bc04-f83b5888d85d" containerID="331a5768310336a3e86ae1c6e88b0d1d6a3f5379f6fcb6c7b2c6b869a5bcde76" exitCode=0 Oct 08 14:35:22 crc kubenswrapper[4624]: I1008 14:35:22.202560 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" event={"ID":"48945eb6-75ee-4ed0-bc04-f83b5888d85d","Type":"ContainerDied","Data":"331a5768310336a3e86ae1c6e88b0d1d6a3f5379f6fcb6c7b2c6b869a5bcde76"} Oct 08 14:35:22 crc kubenswrapper[4624]: I1008 14:35:22.934575 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg" Oct 08 14:35:22 crc kubenswrapper[4624]: I1008 14:35:22.939487 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.050310 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48945eb6-75ee-4ed0-bc04-f83b5888d85d-serving-cert\") pod \"48945eb6-75ee-4ed0-bc04-f83b5888d85d\" (UID: \"48945eb6-75ee-4ed0-bc04-f83b5888d85d\") " Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.050357 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48945eb6-75ee-4ed0-bc04-f83b5888d85d-config\") pod \"48945eb6-75ee-4ed0-bc04-f83b5888d85d\" (UID: \"48945eb6-75ee-4ed0-bc04-f83b5888d85d\") " Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.050376 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98001614-6da0-4175-854a-d9af45077799-client-ca\") pod \"98001614-6da0-4175-854a-d9af45077799\" (UID: \"98001614-6da0-4175-854a-d9af45077799\") " Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.050409 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48945eb6-75ee-4ed0-bc04-f83b5888d85d-client-ca\") pod \"48945eb6-75ee-4ed0-bc04-f83b5888d85d\" (UID: \"48945eb6-75ee-4ed0-bc04-f83b5888d85d\") " Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.050501 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98001614-6da0-4175-854a-d9af45077799-config\") pod \"98001614-6da0-4175-854a-d9af45077799\" (UID: \"98001614-6da0-4175-854a-d9af45077799\") " Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.050672 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7mjm\" (UniqueName: \"kubernetes.io/projected/98001614-6da0-4175-854a-d9af45077799-kube-api-access-k7mjm\") pod \"98001614-6da0-4175-854a-d9af45077799\" (UID: \"98001614-6da0-4175-854a-d9af45077799\") " Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.050720 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98001614-6da0-4175-854a-d9af45077799-serving-cert\") pod \"98001614-6da0-4175-854a-d9af45077799\" (UID: \"98001614-6da0-4175-854a-d9af45077799\") " Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.050738 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98001614-6da0-4175-854a-d9af45077799-proxy-ca-bundles\") pod \"98001614-6da0-4175-854a-d9af45077799\" (UID: \"98001614-6da0-4175-854a-d9af45077799\") " Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.050756 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hr6m\" (UniqueName: \"kubernetes.io/projected/48945eb6-75ee-4ed0-bc04-f83b5888d85d-kube-api-access-6hr6m\") pod \"48945eb6-75ee-4ed0-bc04-f83b5888d85d\" (UID: \"48945eb6-75ee-4ed0-bc04-f83b5888d85d\") " Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.051180 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48945eb6-75ee-4ed0-bc04-f83b5888d85d-client-ca" (OuterVolumeSpecName: "client-ca") pod "48945eb6-75ee-4ed0-bc04-f83b5888d85d" 
(UID: "48945eb6-75ee-4ed0-bc04-f83b5888d85d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.051480 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48945eb6-75ee-4ed0-bc04-f83b5888d85d-config" (OuterVolumeSpecName: "config") pod "48945eb6-75ee-4ed0-bc04-f83b5888d85d" (UID: "48945eb6-75ee-4ed0-bc04-f83b5888d85d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.051570 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98001614-6da0-4175-854a-d9af45077799-client-ca" (OuterVolumeSpecName: "client-ca") pod "98001614-6da0-4175-854a-d9af45077799" (UID: "98001614-6da0-4175-854a-d9af45077799"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.051958 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98001614-6da0-4175-854a-d9af45077799-config" (OuterVolumeSpecName: "config") pod "98001614-6da0-4175-854a-d9af45077799" (UID: "98001614-6da0-4175-854a-d9af45077799"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.052183 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98001614-6da0-4175-854a-d9af45077799-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "98001614-6da0-4175-854a-d9af45077799" (UID: "98001614-6da0-4175-854a-d9af45077799"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.056996 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48945eb6-75ee-4ed0-bc04-f83b5888d85d-kube-api-access-6hr6m" (OuterVolumeSpecName: "kube-api-access-6hr6m") pod "48945eb6-75ee-4ed0-bc04-f83b5888d85d" (UID: "48945eb6-75ee-4ed0-bc04-f83b5888d85d"). InnerVolumeSpecName "kube-api-access-6hr6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.057380 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48945eb6-75ee-4ed0-bc04-f83b5888d85d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "48945eb6-75ee-4ed0-bc04-f83b5888d85d" (UID: "48945eb6-75ee-4ed0-bc04-f83b5888d85d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.064784 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98001614-6da0-4175-854a-d9af45077799-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "98001614-6da0-4175-854a-d9af45077799" (UID: "98001614-6da0-4175-854a-d9af45077799"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.064811 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98001614-6da0-4175-854a-d9af45077799-kube-api-access-k7mjm" (OuterVolumeSpecName: "kube-api-access-k7mjm") pod "98001614-6da0-4175-854a-d9af45077799" (UID: "98001614-6da0-4175-854a-d9af45077799"). 
InnerVolumeSpecName "kube-api-access-k7mjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.151802 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48945eb6-75ee-4ed0-bc04-f83b5888d85d-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.152053 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48945eb6-75ee-4ed0-bc04-f83b5888d85d-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.152138 4624 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98001614-6da0-4175-854a-d9af45077799-client-ca\") on node \"crc\" DevicePath \"\"" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.152192 4624 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48945eb6-75ee-4ed0-bc04-f83b5888d85d-client-ca\") on node \"crc\" DevicePath \"\"" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.152250 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98001614-6da0-4175-854a-d9af45077799-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.152447 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7mjm\" (UniqueName: \"kubernetes.io/projected/98001614-6da0-4175-854a-d9af45077799-kube-api-access-k7mjm\") on node \"crc\" DevicePath \"\"" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.152532 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98001614-6da0-4175-854a-d9af45077799-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.152615 4624 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98001614-6da0-4175-854a-d9af45077799-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.152731 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hr6m\" (UniqueName: \"kubernetes.io/projected/48945eb6-75ee-4ed0-bc04-f83b5888d85d-kube-api-access-6hr6m\") on node \"crc\" DevicePath \"\"" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.208931 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.208933 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc" event={"ID":"48945eb6-75ee-4ed0-bc04-f83b5888d85d","Type":"ContainerDied","Data":"9403945eeb7b04834b446ecda7445d7327f2bcd88db65e49a6cc361100532351"} Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.209060 4624 scope.go:117] "RemoveContainer" containerID="331a5768310336a3e86ae1c6e88b0d1d6a3f5379f6fcb6c7b2c6b869a5bcde76" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.211385 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg" event={"ID":"98001614-6da0-4175-854a-d9af45077799","Type":"ContainerDied","Data":"81880ff0a467bbfdf6eb7462f58eade146892b9689d6784de79c10c49a910410"} Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.211518 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tw5tg" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.224992 4624 scope.go:117] "RemoveContainer" containerID="f448d24c4f28b42b4cc879c03d0945f5dca8c765a7ee06c2188c61153bb4a447" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.256632 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc"] Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.260603 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gnrmc"] Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.279720 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tw5tg"] Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.283310 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tw5tg"] Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.473073 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48945eb6-75ee-4ed0-bc04-f83b5888d85d" path="/var/lib/kubelet/pods/48945eb6-75ee-4ed0-bc04-f83b5888d85d/volumes" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.473571 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98001614-6da0-4175-854a-d9af45077799" path="/var/lib/kubelet/pods/98001614-6da0-4175-854a-d9af45077799/volumes" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.779877 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85f9997f66-5hdhq"] Oct 08 14:35:23 crc kubenswrapper[4624]: E1008 14:35:23.780429 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d35debc8-9835-42bc-833e-7412681e9a4d" containerName="util" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.780445 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35debc8-9835-42bc-833e-7412681e9a4d" containerName="util" Oct 08 14:35:23 crc kubenswrapper[4624]: E1008 14:35:23.780455 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48945eb6-75ee-4ed0-bc04-f83b5888d85d" containerName="route-controller-manager" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.780461 4624 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="48945eb6-75ee-4ed0-bc04-f83b5888d85d" containerName="route-controller-manager" Oct 08 14:35:23 crc kubenswrapper[4624]: E1008 14:35:23.780470 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d35debc8-9835-42bc-833e-7412681e9a4d" containerName="pull" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.780476 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35debc8-9835-42bc-833e-7412681e9a4d" containerName="pull" Oct 08 14:35:23 crc kubenswrapper[4624]: E1008 14:35:23.780486 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82676f42-aabb-4cee-b836-790a48dd9a2e" containerName="console" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.780491 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="82676f42-aabb-4cee-b836-790a48dd9a2e" containerName="console" Oct 08 14:35:23 crc kubenswrapper[4624]: E1008 14:35:23.780501 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98001614-6da0-4175-854a-d9af45077799" containerName="controller-manager" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.780508 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="98001614-6da0-4175-854a-d9af45077799" containerName="controller-manager" Oct 08 14:35:23 crc kubenswrapper[4624]: E1008 14:35:23.780522 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d35debc8-9835-42bc-833e-7412681e9a4d" containerName="extract" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.780527 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35debc8-9835-42bc-833e-7412681e9a4d" containerName="extract" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.780619 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="d35debc8-9835-42bc-833e-7412681e9a4d" containerName="extract" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.780628 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="48945eb6-75ee-4ed0-bc04-f83b5888d85d" containerName="route-controller-manager" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.780650 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="82676f42-aabb-4cee-b836-790a48dd9a2e" containerName="console" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.780664 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="98001614-6da0-4175-854a-d9af45077799" containerName="controller-manager" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.781017 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85f9997f66-5hdhq" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.783991 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f5457f7c5-rwk7v"] Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.784817 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f5457f7c5-rwk7v" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.787003 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.789352 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.789367 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.789514 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.789549 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.789851 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.790378 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.791747 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.794165 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f5457f7c5-rwk7v"] Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.794404 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.794522 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.794595 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.794864 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.800357 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.800486 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85f9997f66-5hdhq"] Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.960555 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8gc6\" (UniqueName: \"kubernetes.io/projected/c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e-kube-api-access-l8gc6\") pod \"route-controller-manager-85f9997f66-5hdhq\" (UID: \"c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e\") " pod="openshift-route-controller-manager/route-controller-manager-85f9997f66-5hdhq" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.960615 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78d7970c-4c88-4404-be6d-fd94b80b06fd-client-ca\") pod \"controller-manager-f5457f7c5-rwk7v\" (UID: \"78d7970c-4c88-4404-be6d-fd94b80b06fd\") " pod="openshift-controller-manager/controller-manager-f5457f7c5-rwk7v" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.960653 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrbvd\" (UniqueName: \"kubernetes.io/projected/78d7970c-4c88-4404-be6d-fd94b80b06fd-kube-api-access-lrbvd\") pod \"controller-manager-f5457f7c5-rwk7v\" (UID: \"78d7970c-4c88-4404-be6d-fd94b80b06fd\") " pod="openshift-controller-manager/controller-manager-f5457f7c5-rwk7v" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.960693 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78d7970c-4c88-4404-be6d-fd94b80b06fd-proxy-ca-bundles\") pod \"controller-manager-f5457f7c5-rwk7v\" (UID: \"78d7970c-4c88-4404-be6d-fd94b80b06fd\") " pod="openshift-controller-manager/controller-manager-f5457f7c5-rwk7v" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.960750 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78d7970c-4c88-4404-be6d-fd94b80b06fd-config\") pod \"controller-manager-f5457f7c5-rwk7v\" (UID: \"78d7970c-4c88-4404-be6d-fd94b80b06fd\") " pod="openshift-controller-manager/controller-manager-f5457f7c5-rwk7v" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.960777 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e-config\") pod \"route-controller-manager-85f9997f66-5hdhq\" (UID: \"c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e\") " pod="openshift-route-controller-manager/route-controller-manager-85f9997f66-5hdhq" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.960813 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e-serving-cert\") pod \"route-controller-manager-85f9997f66-5hdhq\" (UID: \"c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e\") " pod="openshift-route-controller-manager/route-controller-manager-85f9997f66-5hdhq" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.960837 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78d7970c-4c88-4404-be6d-fd94b80b06fd-serving-cert\") pod \"controller-manager-f5457f7c5-rwk7v\" (UID: \"78d7970c-4c88-4404-be6d-fd94b80b06fd\") " pod="openshift-controller-manager/controller-manager-f5457f7c5-rwk7v" Oct 08 14:35:23 crc kubenswrapper[4624]: I1008 14:35:23.960862 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e-client-ca\") pod \"route-controller-manager-85f9997f66-5hdhq\" (UID: \"c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e\") " pod="openshift-route-controller-manager/route-controller-manager-85f9997f66-5hdhq" Oct 08 14:35:24 crc kubenswrapper[4624]: I1008 14:35:24.062551 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8gc6\" (UniqueName: 
\"kubernetes.io/projected/c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e-kube-api-access-l8gc6\") pod \"route-controller-manager-85f9997f66-5hdhq\" (UID: \"c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e\") " pod="openshift-route-controller-manager/route-controller-manager-85f9997f66-5hdhq" Oct 08 14:35:24 crc kubenswrapper[4624]: I1008 14:35:24.062601 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78d7970c-4c88-4404-be6d-fd94b80b06fd-client-ca\") pod \"controller-manager-f5457f7c5-rwk7v\" (UID: \"78d7970c-4c88-4404-be6d-fd94b80b06fd\") " pod="openshift-controller-manager/controller-manager-f5457f7c5-rwk7v" Oct 08 14:35:24 crc kubenswrapper[4624]: I1008 14:35:24.062621 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrbvd\" (UniqueName: \"kubernetes.io/projected/78d7970c-4c88-4404-be6d-fd94b80b06fd-kube-api-access-lrbvd\") pod \"controller-manager-f5457f7c5-rwk7v\" (UID: \"78d7970c-4c88-4404-be6d-fd94b80b06fd\") " pod="openshift-controller-manager/controller-manager-f5457f7c5-rwk7v" Oct 08 14:35:24 crc kubenswrapper[4624]: I1008 14:35:24.062667 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78d7970c-4c88-4404-be6d-fd94b80b06fd-proxy-ca-bundles\") pod \"controller-manager-f5457f7c5-rwk7v\" (UID: \"78d7970c-4c88-4404-be6d-fd94b80b06fd\") " pod="openshift-controller-manager/controller-manager-f5457f7c5-rwk7v" Oct 08 14:35:24 crc kubenswrapper[4624]: I1008 14:35:24.062712 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78d7970c-4c88-4404-be6d-fd94b80b06fd-config\") pod \"controller-manager-f5457f7c5-rwk7v\" (UID: \"78d7970c-4c88-4404-be6d-fd94b80b06fd\") " pod="openshift-controller-manager/controller-manager-f5457f7c5-rwk7v" Oct 08 14:35:24 crc kubenswrapper[4624]: I1008 14:35:24.062742 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e-config\") pod \"route-controller-manager-85f9997f66-5hdhq\" (UID: \"c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e\") " pod="openshift-route-controller-manager/route-controller-manager-85f9997f66-5hdhq" Oct 08 14:35:24 crc kubenswrapper[4624]: I1008 14:35:24.062768 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e-serving-cert\") pod \"route-controller-manager-85f9997f66-5hdhq\" (UID: \"c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e\") " pod="openshift-route-controller-manager/route-controller-manager-85f9997f66-5hdhq" Oct 08 14:35:24 crc kubenswrapper[4624]: I1008 14:35:24.062788 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78d7970c-4c88-4404-be6d-fd94b80b06fd-serving-cert\") pod \"controller-manager-f5457f7c5-rwk7v\" (UID: \"78d7970c-4c88-4404-be6d-fd94b80b06fd\") " pod="openshift-controller-manager/controller-manager-f5457f7c5-rwk7v" Oct 08 14:35:24 crc kubenswrapper[4624]: I1008 14:35:24.062820 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e-client-ca\") pod \"route-controller-manager-85f9997f66-5hdhq\" (UID: 
\"c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e\") " pod="openshift-route-controller-manager/route-controller-manager-85f9997f66-5hdhq" Oct 08 14:35:24 crc kubenswrapper[4624]: I1008 14:35:24.063873 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e-client-ca\") pod \"route-controller-manager-85f9997f66-5hdhq\" (UID: \"c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e\") " pod="openshift-route-controller-manager/route-controller-manager-85f9997f66-5hdhq" Oct 08 14:35:24 crc kubenswrapper[4624]: I1008 14:35:24.063872 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78d7970c-4c88-4404-be6d-fd94b80b06fd-client-ca\") pod \"controller-manager-f5457f7c5-rwk7v\" (UID: \"78d7970c-4c88-4404-be6d-fd94b80b06fd\") " pod="openshift-controller-manager/controller-manager-f5457f7c5-rwk7v" Oct 08 14:35:24 crc kubenswrapper[4624]: I1008 14:35:24.064283 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78d7970c-4c88-4404-be6d-fd94b80b06fd-config\") pod \"controller-manager-f5457f7c5-rwk7v\" (UID: \"78d7970c-4c88-4404-be6d-fd94b80b06fd\") " pod="openshift-controller-manager/controller-manager-f5457f7c5-rwk7v" Oct 08 14:35:24 crc kubenswrapper[4624]: I1008 14:35:24.064322 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78d7970c-4c88-4404-be6d-fd94b80b06fd-proxy-ca-bundles\") pod \"controller-manager-f5457f7c5-rwk7v\" (UID: \"78d7970c-4c88-4404-be6d-fd94b80b06fd\") " pod="openshift-controller-manager/controller-manager-f5457f7c5-rwk7v" Oct 08 14:35:24 crc kubenswrapper[4624]: I1008 14:35:24.067207 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e-config\") pod \"route-controller-manager-85f9997f66-5hdhq\" (UID: \"c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e\") " pod="openshift-route-controller-manager/route-controller-manager-85f9997f66-5hdhq" Oct 08 14:35:24 crc kubenswrapper[4624]: I1008 14:35:24.069453 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e-serving-cert\") pod \"route-controller-manager-85f9997f66-5hdhq\" (UID: \"c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e\") " pod="openshift-route-controller-manager/route-controller-manager-85f9997f66-5hdhq" Oct 08 14:35:24 crc kubenswrapper[4624]: I1008 14:35:24.069455 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78d7970c-4c88-4404-be6d-fd94b80b06fd-serving-cert\") pod \"controller-manager-f5457f7c5-rwk7v\" (UID: \"78d7970c-4c88-4404-be6d-fd94b80b06fd\") " pod="openshift-controller-manager/controller-manager-f5457f7c5-rwk7v" Oct 08 14:35:24 crc kubenswrapper[4624]: I1008 14:35:24.080389 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8gc6\" (UniqueName: \"kubernetes.io/projected/c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e-kube-api-access-l8gc6\") pod \"route-controller-manager-85f9997f66-5hdhq\" (UID: \"c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e\") " pod="openshift-route-controller-manager/route-controller-manager-85f9997f66-5hdhq" Oct 08 14:35:24 crc kubenswrapper[4624]: I1008 14:35:24.087617 4624 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lrbvd\" (UniqueName: \"kubernetes.io/projected/78d7970c-4c88-4404-be6d-fd94b80b06fd-kube-api-access-lrbvd\") pod \"controller-manager-f5457f7c5-rwk7v\" (UID: \"78d7970c-4c88-4404-be6d-fd94b80b06fd\") " pod="openshift-controller-manager/controller-manager-f5457f7c5-rwk7v" Oct 08 14:35:24 crc kubenswrapper[4624]: I1008 14:35:24.098264 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85f9997f66-5hdhq" Oct 08 14:35:24 crc kubenswrapper[4624]: I1008 14:35:24.110787 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5457f7c5-rwk7v" Oct 08 14:35:24 crc kubenswrapper[4624]: I1008 14:35:24.528742 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f5457f7c5-rwk7v"] Oct 08 14:35:24 crc kubenswrapper[4624]: I1008 14:35:24.720377 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85f9997f66-5hdhq"] Oct 08 14:35:25 crc kubenswrapper[4624]: I1008 14:35:25.250102 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85f9997f66-5hdhq" event={"ID":"c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e","Type":"ContainerStarted","Data":"711aff43bd2e558ddbeedff2e4c55df858a16f990c7b327c5a743fb23d300289"} Oct 08 14:35:25 crc kubenswrapper[4624]: I1008 14:35:25.250155 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85f9997f66-5hdhq" event={"ID":"c1dcd16b-1b8d-4d0a-af62-5968ffef9a4e","Type":"ContainerStarted","Data":"f77eed9c1c87263e5a11d7798e0136f8bfff337bbe0c66bb946aecccd7a0c097"} Oct 08 14:35:25 crc kubenswrapper[4624]: I1008 14:35:25.250206 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85f9997f66-5hdhq" Oct 08 14:35:25 crc kubenswrapper[4624]: I1008 14:35:25.258363 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f5457f7c5-rwk7v" event={"ID":"78d7970c-4c88-4404-be6d-fd94b80b06fd","Type":"ContainerStarted","Data":"fd76a9eeacfb4b007ac0bb82a9f1ede62ee4c5089b20887a4815a6e5a49fcbce"} Oct 08 14:35:25 crc kubenswrapper[4624]: I1008 14:35:25.258420 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f5457f7c5-rwk7v" event={"ID":"78d7970c-4c88-4404-be6d-fd94b80b06fd","Type":"ContainerStarted","Data":"b24e3c8902bdf24211a54badfe6c43623507bc19bf56e0a725fac61ff937a215"} Oct 08 14:35:25 crc kubenswrapper[4624]: I1008 14:35:25.258837 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f5457f7c5-rwk7v" Oct 08 14:35:25 crc kubenswrapper[4624]: I1008 14:35:25.270255 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f5457f7c5-rwk7v" Oct 08 14:35:25 crc kubenswrapper[4624]: I1008 14:35:25.273457 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85f9997f66-5hdhq" podStartSLOduration=3.273439841 podStartE2EDuration="3.273439841s" podCreationTimestamp="2025-10-08 14:35:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:35:25.272409594 +0000 UTC m=+750.423344671" watchObservedRunningTime="2025-10-08 14:35:25.273439841 +0000 UTC m=+750.424374908" Oct 08 14:35:25 crc kubenswrapper[4624]: I1008 14:35:25.717830 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85f9997f66-5hdhq" Oct 08 14:35:25 crc kubenswrapper[4624]: I1008 14:35:25.740274 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f5457f7c5-rwk7v" podStartSLOduration=3.740253782 podStartE2EDuration="3.740253782s" podCreationTimestamp="2025-10-08 14:35:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:35:25.293275282 +0000 UTC m=+750.444210359" watchObservedRunningTime="2025-10-08 14:35:25.740253782 +0000 UTC m=+750.891188859" Oct 08 14:35:29 crc kubenswrapper[4624]: I1008 14:35:29.905515 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-bb6fb947b-d4j7c"] Oct 08 14:35:29 crc kubenswrapper[4624]: I1008 14:35:29.906937 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-bb6fb947b-d4j7c" Oct 08 14:35:29 crc kubenswrapper[4624]: I1008 14:35:29.908867 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Oct 08 14:35:29 crc kubenswrapper[4624]: I1008 14:35:29.911777 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Oct 08 14:35:29 crc kubenswrapper[4624]: I1008 14:35:29.912083 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Oct 08 14:35:29 crc kubenswrapper[4624]: I1008 14:35:29.912250 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-tbqm4" Oct 08 14:35:29 crc kubenswrapper[4624]: I1008 14:35:29.918567 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Oct 08 14:35:29 crc kubenswrapper[4624]: I1008 14:35:29.939909 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-bb6fb947b-d4j7c"] Oct 08 14:35:29 crc kubenswrapper[4624]: I1008 14:35:29.991286 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrpmr\" (UniqueName: \"kubernetes.io/projected/38a79520-0ffe-4c51-8cba-117b74d7e7c8-kube-api-access-nrpmr\") pod \"metallb-operator-controller-manager-bb6fb947b-d4j7c\" (UID: \"38a79520-0ffe-4c51-8cba-117b74d7e7c8\") " pod="metallb-system/metallb-operator-controller-manager-bb6fb947b-d4j7c" Oct 08 14:35:29 crc kubenswrapper[4624]: I1008 14:35:29.991434 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/38a79520-0ffe-4c51-8cba-117b74d7e7c8-webhook-cert\") pod \"metallb-operator-controller-manager-bb6fb947b-d4j7c\" (UID: \"38a79520-0ffe-4c51-8cba-117b74d7e7c8\") " pod="metallb-system/metallb-operator-controller-manager-bb6fb947b-d4j7c" Oct 08 14:35:29 crc kubenswrapper[4624]: I1008 14:35:29.991470 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/38a79520-0ffe-4c51-8cba-117b74d7e7c8-apiservice-cert\") pod \"metallb-operator-controller-manager-bb6fb947b-d4j7c\" (UID: \"38a79520-0ffe-4c51-8cba-117b74d7e7c8\") " pod="metallb-system/metallb-operator-controller-manager-bb6fb947b-d4j7c" Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.076204 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.076494 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.092920 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/38a79520-0ffe-4c51-8cba-117b74d7e7c8-webhook-cert\") pod \"metallb-operator-controller-manager-bb6fb947b-d4j7c\" (UID: \"38a79520-0ffe-4c51-8cba-117b74d7e7c8\") " pod="metallb-system/metallb-operator-controller-manager-bb6fb947b-d4j7c" Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.092975 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/38a79520-0ffe-4c51-8cba-117b74d7e7c8-apiservice-cert\") pod \"metallb-operator-controller-manager-bb6fb947b-d4j7c\" (UID: \"38a79520-0ffe-4c51-8cba-117b74d7e7c8\") " pod="metallb-system/metallb-operator-controller-manager-bb6fb947b-d4j7c" Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.093019 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrpmr\" (UniqueName: \"kubernetes.io/projected/38a79520-0ffe-4c51-8cba-117b74d7e7c8-kube-api-access-nrpmr\") pod \"metallb-operator-controller-manager-bb6fb947b-d4j7c\" (UID: \"38a79520-0ffe-4c51-8cba-117b74d7e7c8\") " pod="metallb-system/metallb-operator-controller-manager-bb6fb947b-d4j7c" Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.103286 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/38a79520-0ffe-4c51-8cba-117b74d7e7c8-webhook-cert\") pod \"metallb-operator-controller-manager-bb6fb947b-d4j7c\" (UID: \"38a79520-0ffe-4c51-8cba-117b74d7e7c8\") " pod="metallb-system/metallb-operator-controller-manager-bb6fb947b-d4j7c" Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.120306 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/38a79520-0ffe-4c51-8cba-117b74d7e7c8-apiservice-cert\") pod \"metallb-operator-controller-manager-bb6fb947b-d4j7c\" (UID: \"38a79520-0ffe-4c51-8cba-117b74d7e7c8\") " pod="metallb-system/metallb-operator-controller-manager-bb6fb947b-d4j7c" Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.136466 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrpmr\" (UniqueName: 
\"kubernetes.io/projected/38a79520-0ffe-4c51-8cba-117b74d7e7c8-kube-api-access-nrpmr\") pod \"metallb-operator-controller-manager-bb6fb947b-d4j7c\" (UID: \"38a79520-0ffe-4c51-8cba-117b74d7e7c8\") " pod="metallb-system/metallb-operator-controller-manager-bb6fb947b-d4j7c" Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.228229 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-bb6fb947b-d4j7c" Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.386439 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7df686497b-8bpg5"] Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.401791 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7df686497b-8bpg5" Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.416945 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.417283 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-jstg4" Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.417485 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.426931 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7df686497b-8bpg5"] Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.500261 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsbd6\" (UniqueName: \"kubernetes.io/projected/b8dd1cb9-7393-4033-abac-22f7aafa0235-kube-api-access-lsbd6\") pod \"metallb-operator-webhook-server-7df686497b-8bpg5\" (UID: \"b8dd1cb9-7393-4033-abac-22f7aafa0235\") " pod="metallb-system/metallb-operator-webhook-server-7df686497b-8bpg5" Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.500327 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b8dd1cb9-7393-4033-abac-22f7aafa0235-webhook-cert\") pod \"metallb-operator-webhook-server-7df686497b-8bpg5\" (UID: \"b8dd1cb9-7393-4033-abac-22f7aafa0235\") " pod="metallb-system/metallb-operator-webhook-server-7df686497b-8bpg5" Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.500384 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b8dd1cb9-7393-4033-abac-22f7aafa0235-apiservice-cert\") pod \"metallb-operator-webhook-server-7df686497b-8bpg5\" (UID: \"b8dd1cb9-7393-4033-abac-22f7aafa0235\") " pod="metallb-system/metallb-operator-webhook-server-7df686497b-8bpg5" Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.588924 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-bb6fb947b-d4j7c"] Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.601545 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b8dd1cb9-7393-4033-abac-22f7aafa0235-apiservice-cert\") pod \"metallb-operator-webhook-server-7df686497b-8bpg5\" (UID: \"b8dd1cb9-7393-4033-abac-22f7aafa0235\") " 
pod="metallb-system/metallb-operator-webhook-server-7df686497b-8bpg5" Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.601643 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsbd6\" (UniqueName: \"kubernetes.io/projected/b8dd1cb9-7393-4033-abac-22f7aafa0235-kube-api-access-lsbd6\") pod \"metallb-operator-webhook-server-7df686497b-8bpg5\" (UID: \"b8dd1cb9-7393-4033-abac-22f7aafa0235\") " pod="metallb-system/metallb-operator-webhook-server-7df686497b-8bpg5" Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.601690 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b8dd1cb9-7393-4033-abac-22f7aafa0235-webhook-cert\") pod \"metallb-operator-webhook-server-7df686497b-8bpg5\" (UID: \"b8dd1cb9-7393-4033-abac-22f7aafa0235\") " pod="metallb-system/metallb-operator-webhook-server-7df686497b-8bpg5" Oct 08 14:35:30 crc kubenswrapper[4624]: W1008 14:35:30.601501 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38a79520_0ffe_4c51_8cba_117b74d7e7c8.slice/crio-d7caa746e65cb7486decc8f9072ade6e9c8c69f04c6929f528a5d80e9b009908 WatchSource:0}: Error finding container d7caa746e65cb7486decc8f9072ade6e9c8c69f04c6929f528a5d80e9b009908: Status 404 returned error can't find the container with id d7caa746e65cb7486decc8f9072ade6e9c8c69f04c6929f528a5d80e9b009908 Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.608719 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b8dd1cb9-7393-4033-abac-22f7aafa0235-apiservice-cert\") pod \"metallb-operator-webhook-server-7df686497b-8bpg5\" (UID: \"b8dd1cb9-7393-4033-abac-22f7aafa0235\") " pod="metallb-system/metallb-operator-webhook-server-7df686497b-8bpg5" Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.611068 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b8dd1cb9-7393-4033-abac-22f7aafa0235-webhook-cert\") pod \"metallb-operator-webhook-server-7df686497b-8bpg5\" (UID: \"b8dd1cb9-7393-4033-abac-22f7aafa0235\") " pod="metallb-system/metallb-operator-webhook-server-7df686497b-8bpg5" Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.623663 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsbd6\" (UniqueName: \"kubernetes.io/projected/b8dd1cb9-7393-4033-abac-22f7aafa0235-kube-api-access-lsbd6\") pod \"metallb-operator-webhook-server-7df686497b-8bpg5\" (UID: \"b8dd1cb9-7393-4033-abac-22f7aafa0235\") " pod="metallb-system/metallb-operator-webhook-server-7df686497b-8bpg5" Oct 08 14:35:30 crc kubenswrapper[4624]: I1008 14:35:30.742209 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7df686497b-8bpg5" Oct 08 14:35:31 crc kubenswrapper[4624]: I1008 14:35:31.240601 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7df686497b-8bpg5"] Oct 08 14:35:31 crc kubenswrapper[4624]: W1008 14:35:31.250566 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8dd1cb9_7393_4033_abac_22f7aafa0235.slice/crio-9732df905df3ffc9dad9689cade7b126bbbcbd5d3347bb6d567bca0fd57e1acc WatchSource:0}: Error finding container 9732df905df3ffc9dad9689cade7b126bbbcbd5d3347bb6d567bca0fd57e1acc: Status 404 returned error can't find the container with id 9732df905df3ffc9dad9689cade7b126bbbcbd5d3347bb6d567bca0fd57e1acc Oct 08 14:35:31 crc kubenswrapper[4624]: I1008 14:35:31.294965 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7df686497b-8bpg5" event={"ID":"b8dd1cb9-7393-4033-abac-22f7aafa0235","Type":"ContainerStarted","Data":"9732df905df3ffc9dad9689cade7b126bbbcbd5d3347bb6d567bca0fd57e1acc"} Oct 08 14:35:31 crc kubenswrapper[4624]: I1008 14:35:31.296083 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-bb6fb947b-d4j7c" event={"ID":"38a79520-0ffe-4c51-8cba-117b74d7e7c8","Type":"ContainerStarted","Data":"d7caa746e65cb7486decc8f9072ade6e9c8c69f04c6929f528a5d80e9b009908"} Oct 08 14:35:31 crc kubenswrapper[4624]: I1008 14:35:31.629272 4624 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 08 14:35:36 crc kubenswrapper[4624]: I1008 14:35:36.803779 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fln7m"] Oct 08 14:35:36 crc kubenswrapper[4624]: I1008 14:35:36.805331 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fln7m" Oct 08 14:35:36 crc kubenswrapper[4624]: I1008 14:35:36.817544 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fln7m"] Oct 08 14:35:36 crc kubenswrapper[4624]: I1008 14:35:36.895579 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1293f8-7a7c-4557-84a5-d3d56c347685-utilities\") pod \"redhat-marketplace-fln7m\" (UID: \"6b1293f8-7a7c-4557-84a5-d3d56c347685\") " pod="openshift-marketplace/redhat-marketplace-fln7m" Oct 08 14:35:36 crc kubenswrapper[4624]: I1008 14:35:36.895659 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1293f8-7a7c-4557-84a5-d3d56c347685-catalog-content\") pod \"redhat-marketplace-fln7m\" (UID: \"6b1293f8-7a7c-4557-84a5-d3d56c347685\") " pod="openshift-marketplace/redhat-marketplace-fln7m" Oct 08 14:35:36 crc kubenswrapper[4624]: I1008 14:35:36.895722 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk9rf\" (UniqueName: \"kubernetes.io/projected/6b1293f8-7a7c-4557-84a5-d3d56c347685-kube-api-access-gk9rf\") pod \"redhat-marketplace-fln7m\" (UID: \"6b1293f8-7a7c-4557-84a5-d3d56c347685\") " pod="openshift-marketplace/redhat-marketplace-fln7m" Oct 08 14:35:36 crc kubenswrapper[4624]: I1008 14:35:36.997051 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1293f8-7a7c-4557-84a5-d3d56c347685-utilities\") pod \"redhat-marketplace-fln7m\" (UID: \"6b1293f8-7a7c-4557-84a5-d3d56c347685\") " pod="openshift-marketplace/redhat-marketplace-fln7m" Oct 08 14:35:36 crc kubenswrapper[4624]: I1008 14:35:36.997094 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1293f8-7a7c-4557-84a5-d3d56c347685-catalog-content\") pod \"redhat-marketplace-fln7m\" (UID: \"6b1293f8-7a7c-4557-84a5-d3d56c347685\") " pod="openshift-marketplace/redhat-marketplace-fln7m" Oct 08 14:35:36 crc kubenswrapper[4624]: I1008 14:35:36.997133 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk9rf\" (UniqueName: \"kubernetes.io/projected/6b1293f8-7a7c-4557-84a5-d3d56c347685-kube-api-access-gk9rf\") pod \"redhat-marketplace-fln7m\" (UID: \"6b1293f8-7a7c-4557-84a5-d3d56c347685\") " pod="openshift-marketplace/redhat-marketplace-fln7m" Oct 08 14:35:36 crc kubenswrapper[4624]: I1008 14:35:36.997552 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1293f8-7a7c-4557-84a5-d3d56c347685-utilities\") pod \"redhat-marketplace-fln7m\" (UID: \"6b1293f8-7a7c-4557-84a5-d3d56c347685\") " pod="openshift-marketplace/redhat-marketplace-fln7m" Oct 08 14:35:36 crc kubenswrapper[4624]: I1008 14:35:36.997851 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1293f8-7a7c-4557-84a5-d3d56c347685-catalog-content\") pod \"redhat-marketplace-fln7m\" (UID: \"6b1293f8-7a7c-4557-84a5-d3d56c347685\") " pod="openshift-marketplace/redhat-marketplace-fln7m" Oct 08 14:35:37 crc kubenswrapper[4624]: I1008 14:35:37.030383 4624 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gk9rf\" (UniqueName: \"kubernetes.io/projected/6b1293f8-7a7c-4557-84a5-d3d56c347685-kube-api-access-gk9rf\") pod \"redhat-marketplace-fln7m\" (UID: \"6b1293f8-7a7c-4557-84a5-d3d56c347685\") " pod="openshift-marketplace/redhat-marketplace-fln7m" Oct 08 14:35:37 crc kubenswrapper[4624]: I1008 14:35:37.132197 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fln7m" Oct 08 14:35:39 crc kubenswrapper[4624]: I1008 14:35:39.237352 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fln7m"] Oct 08 14:35:39 crc kubenswrapper[4624]: I1008 14:35:39.339461 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-bb6fb947b-d4j7c" event={"ID":"38a79520-0ffe-4c51-8cba-117b74d7e7c8","Type":"ContainerStarted","Data":"2f7679e1ddbbce322ade1d1e6224b636da5e788a6c9a899d91e9463880685ba8"} Oct 08 14:35:39 crc kubenswrapper[4624]: I1008 14:35:39.339614 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-bb6fb947b-d4j7c" Oct 08 14:35:39 crc kubenswrapper[4624]: I1008 14:35:39.360716 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-bb6fb947b-d4j7c" podStartSLOduration=2.2875218950000002 podStartE2EDuration="10.360699559s" podCreationTimestamp="2025-10-08 14:35:29 +0000 UTC" firstStartedPulling="2025-10-08 14:35:30.609689685 +0000 UTC m=+755.760624762" lastFinishedPulling="2025-10-08 14:35:38.682867349 +0000 UTC m=+763.833802426" observedRunningTime="2025-10-08 14:35:39.358945154 +0000 UTC m=+764.509880231" watchObservedRunningTime="2025-10-08 14:35:39.360699559 +0000 UTC m=+764.511634636" Oct 08 14:35:40 crc kubenswrapper[4624]: I1008 14:35:40.346475 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7df686497b-8bpg5" event={"ID":"b8dd1cb9-7393-4033-abac-22f7aafa0235","Type":"ContainerStarted","Data":"82393cda6a20e6701b8836fecd61712421ac98098f987d8f1eb6e95d5c7d1b3e"} Oct 08 14:35:40 crc kubenswrapper[4624]: I1008 14:35:40.347362 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7df686497b-8bpg5" Oct 08 14:35:40 crc kubenswrapper[4624]: I1008 14:35:40.349130 4624 generic.go:334] "Generic (PLEG): container finished" podID="6b1293f8-7a7c-4557-84a5-d3d56c347685" containerID="a16818c237d06d1948dfc955c26171ac4f86529b0f4342d79c2962d0d48919b5" exitCode=0 Oct 08 14:35:40 crc kubenswrapper[4624]: I1008 14:35:40.349209 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fln7m" event={"ID":"6b1293f8-7a7c-4557-84a5-d3d56c347685","Type":"ContainerDied","Data":"a16818c237d06d1948dfc955c26171ac4f86529b0f4342d79c2962d0d48919b5"} Oct 08 14:35:40 crc kubenswrapper[4624]: I1008 14:35:40.349301 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fln7m" event={"ID":"6b1293f8-7a7c-4557-84a5-d3d56c347685","Type":"ContainerStarted","Data":"15df6aeedc3bb1fd140789fe3898739019d4b13bc8cf5e29ea95c53f92931c88"} Oct 08 14:35:40 crc kubenswrapper[4624]: I1008 14:35:40.366416 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7df686497b-8bpg5" podStartSLOduration=2.13086253 
podStartE2EDuration="10.366400929s" podCreationTimestamp="2025-10-08 14:35:30 +0000 UTC" firstStartedPulling="2025-10-08 14:35:31.253485908 +0000 UTC m=+756.404420985" lastFinishedPulling="2025-10-08 14:35:39.489024307 +0000 UTC m=+764.639959384" observedRunningTime="2025-10-08 14:35:40.364075489 +0000 UTC m=+765.515010566" watchObservedRunningTime="2025-10-08 14:35:40.366400929 +0000 UTC m=+765.517336006" Oct 08 14:35:43 crc kubenswrapper[4624]: I1008 14:35:43.365698 4624 generic.go:334] "Generic (PLEG): container finished" podID="6b1293f8-7a7c-4557-84a5-d3d56c347685" containerID="d24721dc95f6826942d99ab841bd99eae3628cd08f8c3c2b08b95385f7bf9e56" exitCode=0 Oct 08 14:35:43 crc kubenswrapper[4624]: I1008 14:35:43.367490 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fln7m" event={"ID":"6b1293f8-7a7c-4557-84a5-d3d56c347685","Type":"ContainerDied","Data":"d24721dc95f6826942d99ab841bd99eae3628cd08f8c3c2b08b95385f7bf9e56"} Oct 08 14:35:45 crc kubenswrapper[4624]: I1008 14:35:45.377801 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fln7m" event={"ID":"6b1293f8-7a7c-4557-84a5-d3d56c347685","Type":"ContainerStarted","Data":"b93f8469bf57af11030acf70551bf79569a3cafa125f16dacbdbcf4be70976e1"} Oct 08 14:35:45 crc kubenswrapper[4624]: I1008 14:35:45.395427 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fln7m" podStartSLOduration=5.48467432 podStartE2EDuration="9.395407133s" podCreationTimestamp="2025-10-08 14:35:36 +0000 UTC" firstStartedPulling="2025-10-08 14:35:40.351165636 +0000 UTC m=+765.502100713" lastFinishedPulling="2025-10-08 14:35:44.261898449 +0000 UTC m=+769.412833526" observedRunningTime="2025-10-08 14:35:45.392683763 +0000 UTC m=+770.543618840" watchObservedRunningTime="2025-10-08 14:35:45.395407133 +0000 UTC m=+770.546342210" Oct 08 14:35:47 crc kubenswrapper[4624]: I1008 14:35:47.133683 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fln7m" Oct 08 14:35:47 crc kubenswrapper[4624]: I1008 14:35:47.135660 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fln7m" Oct 08 14:35:47 crc kubenswrapper[4624]: I1008 14:35:47.176149 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fln7m" Oct 08 14:35:49 crc kubenswrapper[4624]: I1008 14:35:49.198362 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-scrbz"] Oct 08 14:35:49 crc kubenswrapper[4624]: I1008 14:35:49.199492 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-scrbz" Oct 08 14:35:49 crc kubenswrapper[4624]: I1008 14:35:49.261959 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frqwb\" (UniqueName: \"kubernetes.io/projected/9335d366-9845-44a5-9876-65f2a4d0592d-kube-api-access-frqwb\") pod \"community-operators-scrbz\" (UID: \"9335d366-9845-44a5-9876-65f2a4d0592d\") " pod="openshift-marketplace/community-operators-scrbz" Oct 08 14:35:49 crc kubenswrapper[4624]: I1008 14:35:49.262028 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9335d366-9845-44a5-9876-65f2a4d0592d-catalog-content\") pod \"community-operators-scrbz\" (UID: \"9335d366-9845-44a5-9876-65f2a4d0592d\") " pod="openshift-marketplace/community-operators-scrbz" Oct 08 14:35:49 crc kubenswrapper[4624]: I1008 14:35:49.262101 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9335d366-9845-44a5-9876-65f2a4d0592d-utilities\") pod \"community-operators-scrbz\" (UID: \"9335d366-9845-44a5-9876-65f2a4d0592d\") " pod="openshift-marketplace/community-operators-scrbz" Oct 08 14:35:49 crc kubenswrapper[4624]: I1008 14:35:49.266824 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-scrbz"] Oct 08 14:35:49 crc kubenswrapper[4624]: I1008 14:35:49.363133 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9335d366-9845-44a5-9876-65f2a4d0592d-utilities\") pod \"community-operators-scrbz\" (UID: \"9335d366-9845-44a5-9876-65f2a4d0592d\") " pod="openshift-marketplace/community-operators-scrbz" Oct 08 14:35:49 crc kubenswrapper[4624]: I1008 14:35:49.363195 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frqwb\" (UniqueName: \"kubernetes.io/projected/9335d366-9845-44a5-9876-65f2a4d0592d-kube-api-access-frqwb\") pod \"community-operators-scrbz\" (UID: \"9335d366-9845-44a5-9876-65f2a4d0592d\") " pod="openshift-marketplace/community-operators-scrbz" Oct 08 14:35:49 crc kubenswrapper[4624]: I1008 14:35:49.363232 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9335d366-9845-44a5-9876-65f2a4d0592d-catalog-content\") pod \"community-operators-scrbz\" (UID: \"9335d366-9845-44a5-9876-65f2a4d0592d\") " pod="openshift-marketplace/community-operators-scrbz" Oct 08 14:35:49 crc kubenswrapper[4624]: I1008 14:35:49.363976 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9335d366-9845-44a5-9876-65f2a4d0592d-utilities\") pod \"community-operators-scrbz\" (UID: \"9335d366-9845-44a5-9876-65f2a4d0592d\") " pod="openshift-marketplace/community-operators-scrbz" Oct 08 14:35:49 crc kubenswrapper[4624]: I1008 14:35:49.363991 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9335d366-9845-44a5-9876-65f2a4d0592d-catalog-content\") pod \"community-operators-scrbz\" (UID: \"9335d366-9845-44a5-9876-65f2a4d0592d\") " pod="openshift-marketplace/community-operators-scrbz" Oct 08 14:35:49 crc kubenswrapper[4624]: I1008 14:35:49.396176 4624 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-frqwb\" (UniqueName: \"kubernetes.io/projected/9335d366-9845-44a5-9876-65f2a4d0592d-kube-api-access-frqwb\") pod \"community-operators-scrbz\" (UID: \"9335d366-9845-44a5-9876-65f2a4d0592d\") " pod="openshift-marketplace/community-operators-scrbz" Oct 08 14:35:49 crc kubenswrapper[4624]: I1008 14:35:49.518548 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-scrbz" Oct 08 14:35:50 crc kubenswrapper[4624]: I1008 14:35:50.085470 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-scrbz"] Oct 08 14:35:50 crc kubenswrapper[4624]: W1008 14:35:50.095506 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9335d366_9845_44a5_9876_65f2a4d0592d.slice/crio-b34dc234bd90ad5244503c60cfa26715fffaf97b904888bb17a9b85b8dd76c19 WatchSource:0}: Error finding container b34dc234bd90ad5244503c60cfa26715fffaf97b904888bb17a9b85b8dd76c19: Status 404 returned error can't find the container with id b34dc234bd90ad5244503c60cfa26715fffaf97b904888bb17a9b85b8dd76c19 Oct 08 14:35:50 crc kubenswrapper[4624]: I1008 14:35:50.406680 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scrbz" event={"ID":"9335d366-9845-44a5-9876-65f2a4d0592d","Type":"ContainerStarted","Data":"2e0860c1f2bd826464454bbf72fd60f240cbb7d64a8220a6ef92a280e365e83c"} Oct 08 14:35:50 crc kubenswrapper[4624]: I1008 14:35:50.406725 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scrbz" event={"ID":"9335d366-9845-44a5-9876-65f2a4d0592d","Type":"ContainerStarted","Data":"b34dc234bd90ad5244503c60cfa26715fffaf97b904888bb17a9b85b8dd76c19"} Oct 08 14:35:50 crc kubenswrapper[4624]: I1008 14:35:50.748017 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7df686497b-8bpg5" Oct 08 14:35:51 crc kubenswrapper[4624]: I1008 14:35:51.413796 4624 generic.go:334] "Generic (PLEG): container finished" podID="9335d366-9845-44a5-9876-65f2a4d0592d" containerID="2e0860c1f2bd826464454bbf72fd60f240cbb7d64a8220a6ef92a280e365e83c" exitCode=0 Oct 08 14:35:51 crc kubenswrapper[4624]: I1008 14:35:51.413900 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scrbz" event={"ID":"9335d366-9845-44a5-9876-65f2a4d0592d","Type":"ContainerDied","Data":"2e0860c1f2bd826464454bbf72fd60f240cbb7d64a8220a6ef92a280e365e83c"} Oct 08 14:35:55 crc kubenswrapper[4624]: I1008 14:35:55.437130 4624 generic.go:334] "Generic (PLEG): container finished" podID="9335d366-9845-44a5-9876-65f2a4d0592d" containerID="6d51a3567be86e34756bf52ab703886d61d483a2e67926313a125a3fe3a1b3c2" exitCode=0 Oct 08 14:35:55 crc kubenswrapper[4624]: I1008 14:35:55.437466 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scrbz" event={"ID":"9335d366-9845-44a5-9876-65f2a4d0592d","Type":"ContainerDied","Data":"6d51a3567be86e34756bf52ab703886d61d483a2e67926313a125a3fe3a1b3c2"} Oct 08 14:35:57 crc kubenswrapper[4624]: I1008 14:35:57.187728 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fln7m" Oct 08 14:35:57 crc kubenswrapper[4624]: I1008 14:35:57.450997 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-scrbz" event={"ID":"9335d366-9845-44a5-9876-65f2a4d0592d","Type":"ContainerStarted","Data":"4ef53b8652f1ff1ce820c78f28fbbb27573b876079eca040fd3d1640be46f676"} Oct 08 14:35:57 crc kubenswrapper[4624]: I1008 14:35:57.473813 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-scrbz" podStartSLOduration=3.536454469 podStartE2EDuration="8.473800583s" podCreationTimestamp="2025-10-08 14:35:49 +0000 UTC" firstStartedPulling="2025-10-08 14:35:51.415763468 +0000 UTC m=+776.566698545" lastFinishedPulling="2025-10-08 14:35:56.353109582 +0000 UTC m=+781.504044659" observedRunningTime="2025-10-08 14:35:57.472493489 +0000 UTC m=+782.623428566" watchObservedRunningTime="2025-10-08 14:35:57.473800583 +0000 UTC m=+782.624735660" Oct 08 14:35:59 crc kubenswrapper[4624]: I1008 14:35:59.520133 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-scrbz" Oct 08 14:35:59 crc kubenswrapper[4624]: I1008 14:35:59.520919 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-scrbz" Oct 08 14:35:59 crc kubenswrapper[4624]: I1008 14:35:59.559486 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-scrbz" Oct 08 14:36:00 crc kubenswrapper[4624]: I1008 14:36:00.076612 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:36:00 crc kubenswrapper[4624]: I1008 14:36:00.076745 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:36:00 crc kubenswrapper[4624]: I1008 14:36:00.076800 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:36:00 crc kubenswrapper[4624]: I1008 14:36:00.077520 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"690587e93a403cfefc9b225df6e738e9c67a31458ed49cde23474722023af49a"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 14:36:00 crc kubenswrapper[4624]: I1008 14:36:00.077592 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://690587e93a403cfefc9b225df6e738e9c67a31458ed49cde23474722023af49a" gracePeriod=600 Oct 08 14:36:00 crc kubenswrapper[4624]: I1008 14:36:00.479445 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="690587e93a403cfefc9b225df6e738e9c67a31458ed49cde23474722023af49a" exitCode=0 Oct 08 14:36:00 crc kubenswrapper[4624]: I1008 14:36:00.479517 4624 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"690587e93a403cfefc9b225df6e738e9c67a31458ed49cde23474722023af49a"} Oct 08 14:36:00 crc kubenswrapper[4624]: I1008 14:36:00.480211 4624 scope.go:117] "RemoveContainer" containerID="1598da519a0e8e48b943e41d794035b7f04282326fb7b59d821c571033b04a9d" Oct 08 14:36:00 crc kubenswrapper[4624]: I1008 14:36:00.789644 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fln7m"] Oct 08 14:36:00 crc kubenswrapper[4624]: I1008 14:36:00.789861 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fln7m" podUID="6b1293f8-7a7c-4557-84a5-d3d56c347685" containerName="registry-server" containerID="cri-o://b93f8469bf57af11030acf70551bf79569a3cafa125f16dacbdbcf4be70976e1" gracePeriod=2 Oct 08 14:36:01 crc kubenswrapper[4624]: I1008 14:36:01.486671 4624 generic.go:334] "Generic (PLEG): container finished" podID="6b1293f8-7a7c-4557-84a5-d3d56c347685" containerID="b93f8469bf57af11030acf70551bf79569a3cafa125f16dacbdbcf4be70976e1" exitCode=0 Oct 08 14:36:01 crc kubenswrapper[4624]: I1008 14:36:01.486728 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fln7m" event={"ID":"6b1293f8-7a7c-4557-84a5-d3d56c347685","Type":"ContainerDied","Data":"b93f8469bf57af11030acf70551bf79569a3cafa125f16dacbdbcf4be70976e1"} Oct 08 14:36:01 crc kubenswrapper[4624]: I1008 14:36:01.489472 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"f4f136ef4518c5e931bf27ca2c72cf8717267538a0000a0bb2738bfc1c0e8cda"} Oct 08 14:36:01 crc kubenswrapper[4624]: I1008 14:36:01.810657 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fln7m" Oct 08 14:36:01 crc kubenswrapper[4624]: I1008 14:36:01.919247 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1293f8-7a7c-4557-84a5-d3d56c347685-catalog-content\") pod \"6b1293f8-7a7c-4557-84a5-d3d56c347685\" (UID: \"6b1293f8-7a7c-4557-84a5-d3d56c347685\") " Oct 08 14:36:01 crc kubenswrapper[4624]: I1008 14:36:01.919309 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1293f8-7a7c-4557-84a5-d3d56c347685-utilities\") pod \"6b1293f8-7a7c-4557-84a5-d3d56c347685\" (UID: \"6b1293f8-7a7c-4557-84a5-d3d56c347685\") " Oct 08 14:36:01 crc kubenswrapper[4624]: I1008 14:36:01.919389 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk9rf\" (UniqueName: \"kubernetes.io/projected/6b1293f8-7a7c-4557-84a5-d3d56c347685-kube-api-access-gk9rf\") pod \"6b1293f8-7a7c-4557-84a5-d3d56c347685\" (UID: \"6b1293f8-7a7c-4557-84a5-d3d56c347685\") " Oct 08 14:36:01 crc kubenswrapper[4624]: I1008 14:36:01.920466 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b1293f8-7a7c-4557-84a5-d3d56c347685-utilities" (OuterVolumeSpecName: "utilities") pod "6b1293f8-7a7c-4557-84a5-d3d56c347685" (UID: "6b1293f8-7a7c-4557-84a5-d3d56c347685"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:36:01 crc kubenswrapper[4624]: I1008 14:36:01.928930 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b1293f8-7a7c-4557-84a5-d3d56c347685-kube-api-access-gk9rf" (OuterVolumeSpecName: "kube-api-access-gk9rf") pod "6b1293f8-7a7c-4557-84a5-d3d56c347685" (UID: "6b1293f8-7a7c-4557-84a5-d3d56c347685"). InnerVolumeSpecName "kube-api-access-gk9rf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:36:01 crc kubenswrapper[4624]: I1008 14:36:01.934068 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b1293f8-7a7c-4557-84a5-d3d56c347685-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b1293f8-7a7c-4557-84a5-d3d56c347685" (UID: "6b1293f8-7a7c-4557-84a5-d3d56c347685"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:36:02 crc kubenswrapper[4624]: I1008 14:36:02.020940 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1293f8-7a7c-4557-84a5-d3d56c347685-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:36:02 crc kubenswrapper[4624]: I1008 14:36:02.020979 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gk9rf\" (UniqueName: \"kubernetes.io/projected/6b1293f8-7a7c-4557-84a5-d3d56c347685-kube-api-access-gk9rf\") on node \"crc\" DevicePath \"\"" Oct 08 14:36:02 crc kubenswrapper[4624]: I1008 14:36:02.020994 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1293f8-7a7c-4557-84a5-d3d56c347685-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:36:02 crc kubenswrapper[4624]: I1008 14:36:02.498918 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fln7m" Oct 08 14:36:02 crc kubenswrapper[4624]: I1008 14:36:02.498892 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fln7m" event={"ID":"6b1293f8-7a7c-4557-84a5-d3d56c347685","Type":"ContainerDied","Data":"15df6aeedc3bb1fd140789fe3898739019d4b13bc8cf5e29ea95c53f92931c88"} Oct 08 14:36:02 crc kubenswrapper[4624]: I1008 14:36:02.498988 4624 scope.go:117] "RemoveContainer" containerID="b93f8469bf57af11030acf70551bf79569a3cafa125f16dacbdbcf4be70976e1" Oct 08 14:36:02 crc kubenswrapper[4624]: I1008 14:36:02.515935 4624 scope.go:117] "RemoveContainer" containerID="d24721dc95f6826942d99ab841bd99eae3628cd08f8c3c2b08b95385f7bf9e56" Oct 08 14:36:02 crc kubenswrapper[4624]: I1008 14:36:02.531259 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fln7m"] Oct 08 14:36:02 crc kubenswrapper[4624]: I1008 14:36:02.536646 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fln7m"] Oct 08 14:36:02 crc kubenswrapper[4624]: I1008 14:36:02.545523 4624 scope.go:117] "RemoveContainer" containerID="a16818c237d06d1948dfc955c26171ac4f86529b0f4342d79c2962d0d48919b5" Oct 08 14:36:03 crc kubenswrapper[4624]: I1008 14:36:03.471805 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b1293f8-7a7c-4557-84a5-d3d56c347685" path="/var/lib/kubelet/pods/6b1293f8-7a7c-4557-84a5-d3d56c347685/volumes" Oct 08 14:36:09 crc kubenswrapper[4624]: I1008 14:36:09.588873 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-scrbz" Oct 08 14:36:10 crc kubenswrapper[4624]: I1008 14:36:10.232442 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-bb6fb947b-d4j7c" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.054656 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-j8nx2"] Oct 08 14:36:11 crc kubenswrapper[4624]: E1008 14:36:11.054923 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b1293f8-7a7c-4557-84a5-d3d56c347685" containerName="extract-utilities" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.054939 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b1293f8-7a7c-4557-84a5-d3d56c347685" containerName="extract-utilities" Oct 08 14:36:11 crc kubenswrapper[4624]: E1008 14:36:11.054953 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b1293f8-7a7c-4557-84a5-d3d56c347685" containerName="extract-content" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.054960 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b1293f8-7a7c-4557-84a5-d3d56c347685" containerName="extract-content" Oct 08 14:36:11 crc kubenswrapper[4624]: E1008 14:36:11.054971 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b1293f8-7a7c-4557-84a5-d3d56c347685" containerName="registry-server" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.054977 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b1293f8-7a7c-4557-84a5-d3d56c347685" containerName="registry-server" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.055099 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b1293f8-7a7c-4557-84a5-d3d56c347685" containerName="registry-server" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.057074 4624 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.060736 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.072204 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.074321 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-hddps" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.077382 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-bf8cm"] Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.078717 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-bf8cm" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.080225 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.087376 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-bf8cm"] Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.190086 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-vs5mh"] Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.191204 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-vs5mh" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.194829 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.195148 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.195260 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-hrwv2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.195749 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.223044 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-68d546b9d8-rm4d2"] Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.224044 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-68d546b9d8-rm4d2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.228303 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c45zg\" (UniqueName: \"kubernetes.io/projected/34800d6c-2287-427a-93c1-b227a3e4734d-kube-api-access-c45zg\") pod \"frr-k8s-j8nx2\" (UID: \"34800d6c-2287-427a-93c1-b227a3e4734d\") " pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.228522 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/25542429-7dd5-4d22-a273-38386ed868ac-memberlist\") pod \"speaker-vs5mh\" (UID: \"25542429-7dd5-4d22-a273-38386ed868ac\") " pod="metallb-system/speaker-vs5mh" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.228760 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/34800d6c-2287-427a-93c1-b227a3e4734d-frr-sockets\") pod \"frr-k8s-j8nx2\" (UID: \"34800d6c-2287-427a-93c1-b227a3e4734d\") " pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.228856 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/34800d6c-2287-427a-93c1-b227a3e4734d-reloader\") pod \"frr-k8s-j8nx2\" (UID: \"34800d6c-2287-427a-93c1-b227a3e4734d\") " pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.228927 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p5tm\" (UniqueName: \"kubernetes.io/projected/25542429-7dd5-4d22-a273-38386ed868ac-kube-api-access-6p5tm\") pod \"speaker-vs5mh\" (UID: \"25542429-7dd5-4d22-a273-38386ed868ac\") " pod="metallb-system/speaker-vs5mh" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.229002 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/34800d6c-2287-427a-93c1-b227a3e4734d-frr-conf\") pod \"frr-k8s-j8nx2\" (UID: \"34800d6c-2287-427a-93c1-b227a3e4734d\") " pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.229081 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/34800d6c-2287-427a-93c1-b227a3e4734d-frr-startup\") pod \"frr-k8s-j8nx2\" (UID: \"34800d6c-2287-427a-93c1-b227a3e4734d\") " pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.229172 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43a3eaca-a2c8-4508-996d-b6c977997ea7-cert\") pod \"controller-68d546b9d8-rm4d2\" (UID: \"43a3eaca-a2c8-4508-996d-b6c977997ea7\") " pod="metallb-system/controller-68d546b9d8-rm4d2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.229257 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmq7r\" (UniqueName: \"kubernetes.io/projected/e36921e5-1d1e-420d-9dc1-f6aaad1bf904-kube-api-access-nmq7r\") pod \"frr-k8s-webhook-server-64bf5d555-bf8cm\" (UID: \"e36921e5-1d1e-420d-9dc1-f6aaad1bf904\") " 
pod="metallb-system/frr-k8s-webhook-server-64bf5d555-bf8cm" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.229322 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/34800d6c-2287-427a-93c1-b227a3e4734d-metrics\") pod \"frr-k8s-j8nx2\" (UID: \"34800d6c-2287-427a-93c1-b227a3e4734d\") " pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.229394 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/25542429-7dd5-4d22-a273-38386ed868ac-metallb-excludel2\") pod \"speaker-vs5mh\" (UID: \"25542429-7dd5-4d22-a273-38386ed868ac\") " pod="metallb-system/speaker-vs5mh" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.229463 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34800d6c-2287-427a-93c1-b227a3e4734d-metrics-certs\") pod \"frr-k8s-j8nx2\" (UID: \"34800d6c-2287-427a-93c1-b227a3e4734d\") " pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.229548 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43a3eaca-a2c8-4508-996d-b6c977997ea7-metrics-certs\") pod \"controller-68d546b9d8-rm4d2\" (UID: \"43a3eaca-a2c8-4508-996d-b6c977997ea7\") " pod="metallb-system/controller-68d546b9d8-rm4d2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.229646 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e36921e5-1d1e-420d-9dc1-f6aaad1bf904-cert\") pod \"frr-k8s-webhook-server-64bf5d555-bf8cm\" (UID: \"e36921e5-1d1e-420d-9dc1-f6aaad1bf904\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-bf8cm" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.229738 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/25542429-7dd5-4d22-a273-38386ed868ac-metrics-certs\") pod \"speaker-vs5mh\" (UID: \"25542429-7dd5-4d22-a273-38386ed868ac\") " pod="metallb-system/speaker-vs5mh" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.229811 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qsk2\" (UniqueName: \"kubernetes.io/projected/43a3eaca-a2c8-4508-996d-b6c977997ea7-kube-api-access-5qsk2\") pod \"controller-68d546b9d8-rm4d2\" (UID: \"43a3eaca-a2c8-4508-996d-b6c977997ea7\") " pod="metallb-system/controller-68d546b9d8-rm4d2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.234703 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.246771 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-rm4d2"] Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.330623 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43a3eaca-a2c8-4508-996d-b6c977997ea7-cert\") pod \"controller-68d546b9d8-rm4d2\" (UID: \"43a3eaca-a2c8-4508-996d-b6c977997ea7\") " pod="metallb-system/controller-68d546b9d8-rm4d2" Oct 08 14:36:11 crc 
kubenswrapper[4624]: I1008 14:36:11.330720 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmq7r\" (UniqueName: \"kubernetes.io/projected/e36921e5-1d1e-420d-9dc1-f6aaad1bf904-kube-api-access-nmq7r\") pod \"frr-k8s-webhook-server-64bf5d555-bf8cm\" (UID: \"e36921e5-1d1e-420d-9dc1-f6aaad1bf904\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-bf8cm" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.330749 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/34800d6c-2287-427a-93c1-b227a3e4734d-metrics\") pod \"frr-k8s-j8nx2\" (UID: \"34800d6c-2287-427a-93c1-b227a3e4734d\") " pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.330775 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/25542429-7dd5-4d22-a273-38386ed868ac-metallb-excludel2\") pod \"speaker-vs5mh\" (UID: \"25542429-7dd5-4d22-a273-38386ed868ac\") " pod="metallb-system/speaker-vs5mh" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.330801 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34800d6c-2287-427a-93c1-b227a3e4734d-metrics-certs\") pod \"frr-k8s-j8nx2\" (UID: \"34800d6c-2287-427a-93c1-b227a3e4734d\") " pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.330834 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43a3eaca-a2c8-4508-996d-b6c977997ea7-metrics-certs\") pod \"controller-68d546b9d8-rm4d2\" (UID: \"43a3eaca-a2c8-4508-996d-b6c977997ea7\") " pod="metallb-system/controller-68d546b9d8-rm4d2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.330860 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e36921e5-1d1e-420d-9dc1-f6aaad1bf904-cert\") pod \"frr-k8s-webhook-server-64bf5d555-bf8cm\" (UID: \"e36921e5-1d1e-420d-9dc1-f6aaad1bf904\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-bf8cm" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.330888 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/25542429-7dd5-4d22-a273-38386ed868ac-metrics-certs\") pod \"speaker-vs5mh\" (UID: \"25542429-7dd5-4d22-a273-38386ed868ac\") " pod="metallb-system/speaker-vs5mh" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.330909 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qsk2\" (UniqueName: \"kubernetes.io/projected/43a3eaca-a2c8-4508-996d-b6c977997ea7-kube-api-access-5qsk2\") pod \"controller-68d546b9d8-rm4d2\" (UID: \"43a3eaca-a2c8-4508-996d-b6c977997ea7\") " pod="metallb-system/controller-68d546b9d8-rm4d2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.330933 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c45zg\" (UniqueName: \"kubernetes.io/projected/34800d6c-2287-427a-93c1-b227a3e4734d-kube-api-access-c45zg\") pod \"frr-k8s-j8nx2\" (UID: \"34800d6c-2287-427a-93c1-b227a3e4734d\") " pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.330954 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"memberlist\" (UniqueName: \"kubernetes.io/secret/25542429-7dd5-4d22-a273-38386ed868ac-memberlist\") pod \"speaker-vs5mh\" (UID: \"25542429-7dd5-4d22-a273-38386ed868ac\") " pod="metallb-system/speaker-vs5mh" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.331018 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/34800d6c-2287-427a-93c1-b227a3e4734d-frr-sockets\") pod \"frr-k8s-j8nx2\" (UID: \"34800d6c-2287-427a-93c1-b227a3e4734d\") " pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.331042 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/34800d6c-2287-427a-93c1-b227a3e4734d-reloader\") pod \"frr-k8s-j8nx2\" (UID: \"34800d6c-2287-427a-93c1-b227a3e4734d\") " pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.331069 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p5tm\" (UniqueName: \"kubernetes.io/projected/25542429-7dd5-4d22-a273-38386ed868ac-kube-api-access-6p5tm\") pod \"speaker-vs5mh\" (UID: \"25542429-7dd5-4d22-a273-38386ed868ac\") " pod="metallb-system/speaker-vs5mh" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.331089 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/34800d6c-2287-427a-93c1-b227a3e4734d-frr-conf\") pod \"frr-k8s-j8nx2\" (UID: \"34800d6c-2287-427a-93c1-b227a3e4734d\") " pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.331111 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/34800d6c-2287-427a-93c1-b227a3e4734d-frr-startup\") pod \"frr-k8s-j8nx2\" (UID: \"34800d6c-2287-427a-93c1-b227a3e4734d\") " pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.332091 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/34800d6c-2287-427a-93c1-b227a3e4734d-frr-startup\") pod \"frr-k8s-j8nx2\" (UID: \"34800d6c-2287-427a-93c1-b227a3e4734d\") " pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.332614 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/34800d6c-2287-427a-93c1-b227a3e4734d-metrics\") pod \"frr-k8s-j8nx2\" (UID: \"34800d6c-2287-427a-93c1-b227a3e4734d\") " pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.333085 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/25542429-7dd5-4d22-a273-38386ed868ac-metallb-excludel2\") pod \"speaker-vs5mh\" (UID: \"25542429-7dd5-4d22-a273-38386ed868ac\") " pod="metallb-system/speaker-vs5mh" Oct 08 14:36:11 crc kubenswrapper[4624]: E1008 14:36:11.333956 4624 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 08 14:36:11 crc kubenswrapper[4624]: E1008 14:36:11.334036 4624 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Oct 08 14:36:11 crc kubenswrapper[4624]: E1008 14:36:11.334094 4624 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/25542429-7dd5-4d22-a273-38386ed868ac-memberlist podName:25542429-7dd5-4d22-a273-38386ed868ac nodeName:}" failed. No retries permitted until 2025-10-08 14:36:11.834045358 +0000 UTC m=+796.984980535 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/25542429-7dd5-4d22-a273-38386ed868ac-memberlist") pod "speaker-vs5mh" (UID: "25542429-7dd5-4d22-a273-38386ed868ac") : secret "metallb-memberlist" not found Oct 08 14:36:11 crc kubenswrapper[4624]: E1008 14:36:11.334137 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43a3eaca-a2c8-4508-996d-b6c977997ea7-metrics-certs podName:43a3eaca-a2c8-4508-996d-b6c977997ea7 nodeName:}" failed. No retries permitted until 2025-10-08 14:36:11.83412464 +0000 UTC m=+796.985059857 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/43a3eaca-a2c8-4508-996d-b6c977997ea7-metrics-certs") pod "controller-68d546b9d8-rm4d2" (UID: "43a3eaca-a2c8-4508-996d-b6c977997ea7") : secret "controller-certs-secret" not found Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.334266 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/34800d6c-2287-427a-93c1-b227a3e4734d-frr-conf\") pod \"frr-k8s-j8nx2\" (UID: \"34800d6c-2287-427a-93c1-b227a3e4734d\") " pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.334378 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/34800d6c-2287-427a-93c1-b227a3e4734d-frr-sockets\") pod \"frr-k8s-j8nx2\" (UID: \"34800d6c-2287-427a-93c1-b227a3e4734d\") " pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.334747 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/34800d6c-2287-427a-93c1-b227a3e4734d-reloader\") pod \"frr-k8s-j8nx2\" (UID: \"34800d6c-2287-427a-93c1-b227a3e4734d\") " pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.339225 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.344355 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34800d6c-2287-427a-93c1-b227a3e4734d-metrics-certs\") pod \"frr-k8s-j8nx2\" (UID: \"34800d6c-2287-427a-93c1-b227a3e4734d\") " pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.345084 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/25542429-7dd5-4d22-a273-38386ed868ac-metrics-certs\") pod \"speaker-vs5mh\" (UID: \"25542429-7dd5-4d22-a273-38386ed868ac\") " pod="metallb-system/speaker-vs5mh" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.345424 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43a3eaca-a2c8-4508-996d-b6c977997ea7-cert\") pod \"controller-68d546b9d8-rm4d2\" (UID: \"43a3eaca-a2c8-4508-996d-b6c977997ea7\") " pod="metallb-system/controller-68d546b9d8-rm4d2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.355289 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/e36921e5-1d1e-420d-9dc1-f6aaad1bf904-cert\") pod \"frr-k8s-webhook-server-64bf5d555-bf8cm\" (UID: \"e36921e5-1d1e-420d-9dc1-f6aaad1bf904\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-bf8cm" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.361369 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmq7r\" (UniqueName: \"kubernetes.io/projected/e36921e5-1d1e-420d-9dc1-f6aaad1bf904-kube-api-access-nmq7r\") pod \"frr-k8s-webhook-server-64bf5d555-bf8cm\" (UID: \"e36921e5-1d1e-420d-9dc1-f6aaad1bf904\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-bf8cm" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.362379 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p5tm\" (UniqueName: \"kubernetes.io/projected/25542429-7dd5-4d22-a273-38386ed868ac-kube-api-access-6p5tm\") pod \"speaker-vs5mh\" (UID: \"25542429-7dd5-4d22-a273-38386ed868ac\") " pod="metallb-system/speaker-vs5mh" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.363492 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qsk2\" (UniqueName: \"kubernetes.io/projected/43a3eaca-a2c8-4508-996d-b6c977997ea7-kube-api-access-5qsk2\") pod \"controller-68d546b9d8-rm4d2\" (UID: \"43a3eaca-a2c8-4508-996d-b6c977997ea7\") " pod="metallb-system/controller-68d546b9d8-rm4d2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.365881 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c45zg\" (UniqueName: \"kubernetes.io/projected/34800d6c-2287-427a-93c1-b227a3e4734d-kube-api-access-c45zg\") pod \"frr-k8s-j8nx2\" (UID: \"34800d6c-2287-427a-93c1-b227a3e4734d\") " pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.372632 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.393662 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-bf8cm" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.842584 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/25542429-7dd5-4d22-a273-38386ed868ac-memberlist\") pod \"speaker-vs5mh\" (UID: \"25542429-7dd5-4d22-a273-38386ed868ac\") " pod="metallb-system/speaker-vs5mh" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.843079 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43a3eaca-a2c8-4508-996d-b6c977997ea7-metrics-certs\") pod \"controller-68d546b9d8-rm4d2\" (UID: \"43a3eaca-a2c8-4508-996d-b6c977997ea7\") " pod="metallb-system/controller-68d546b9d8-rm4d2" Oct 08 14:36:11 crc kubenswrapper[4624]: E1008 14:36:11.842802 4624 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 08 14:36:11 crc kubenswrapper[4624]: E1008 14:36:11.843191 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25542429-7dd5-4d22-a273-38386ed868ac-memberlist podName:25542429-7dd5-4d22-a273-38386ed868ac nodeName:}" failed. No retries permitted until 2025-10-08 14:36:12.843168162 +0000 UTC m=+797.994103319 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/25542429-7dd5-4d22-a273-38386ed868ac-memberlist") pod "speaker-vs5mh" (UID: "25542429-7dd5-4d22-a273-38386ed868ac") : secret "metallb-memberlist" not found Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.846901 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43a3eaca-a2c8-4508-996d-b6c977997ea7-metrics-certs\") pod \"controller-68d546b9d8-rm4d2\" (UID: \"43a3eaca-a2c8-4508-996d-b6c977997ea7\") " pod="metallb-system/controller-68d546b9d8-rm4d2" Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.860152 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-bf8cm"] Oct 08 14:36:11 crc kubenswrapper[4624]: W1008 14:36:11.866104 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode36921e5_1d1e_420d_9dc1_f6aaad1bf904.slice/crio-6d8d81b0626b841260ec7688a4edf82826a692632ba3f89e5618521c80799289 WatchSource:0}: Error finding container 6d8d81b0626b841260ec7688a4edf82826a692632ba3f89e5618521c80799289: Status 404 returned error can't find the container with id 6d8d81b0626b841260ec7688a4edf82826a692632ba3f89e5618521c80799289 Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.992823 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-scrbz"] Oct 08 14:36:11 crc kubenswrapper[4624]: I1008 14:36:11.993043 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-scrbz" podUID="9335d366-9845-44a5-9876-65f2a4d0592d" containerName="registry-server" containerID="cri-o://4ef53b8652f1ff1ce820c78f28fbbb27573b876079eca040fd3d1640be46f676" gracePeriod=2 Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.138106 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-rm4d2" Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.458904 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-scrbz" Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.553681 4624 generic.go:334] "Generic (PLEG): container finished" podID="9335d366-9845-44a5-9876-65f2a4d0592d" containerID="4ef53b8652f1ff1ce820c78f28fbbb27573b876079eca040fd3d1640be46f676" exitCode=0 Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.553752 4624 util.go:48] "No ready sandbox for pod can be found. 
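
Two things in this stretch deserve a note. The W-level "Failed to process watch event ... 404" from cadvisor's manager.go is a benign startup race: the new webhook-server cgroup appears in the watch stream before CRI-O has registered the container, so the lookup fails once and succeeds on a later resync. The "Killing container with a grace period ... gracePeriod=2" entry is the kubelet executing an API-initiated delete of the community-operators catalog pod; the 2s value comes from the pod's spec or the delete request, and either way the kubelet sends SIGTERM, then SIGKILL after the grace period expires. A hedged client-go sketch of issuing that kind of deletion (names copied from the log, error handling trimmed):

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// A 2s grace period, matching gracePeriod=2 in the log.
    	grace := int64(2)
    	if err := cs.CoreV1().Pods("openshift-marketplace").Delete(
    		context.TODO(), "community-operators-scrbz",
    		metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
    		panic(err)
    	}
    }
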
Need to start a new one" pod="openshift-marketplace/community-operators-scrbz" Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.553850 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scrbz" event={"ID":"9335d366-9845-44a5-9876-65f2a4d0592d","Type":"ContainerDied","Data":"4ef53b8652f1ff1ce820c78f28fbbb27573b876079eca040fd3d1640be46f676"} Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.553894 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scrbz" event={"ID":"9335d366-9845-44a5-9876-65f2a4d0592d","Type":"ContainerDied","Data":"b34dc234bd90ad5244503c60cfa26715fffaf97b904888bb17a9b85b8dd76c19"} Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.553914 4624 scope.go:117] "RemoveContainer" containerID="4ef53b8652f1ff1ce820c78f28fbbb27573b876079eca040fd3d1640be46f676" Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.554760 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-bf8cm" event={"ID":"e36921e5-1d1e-420d-9dc1-f6aaad1bf904","Type":"ContainerStarted","Data":"6d8d81b0626b841260ec7688a4edf82826a692632ba3f89e5618521c80799289"} Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.555295 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9335d366-9845-44a5-9876-65f2a4d0592d-catalog-content\") pod \"9335d366-9845-44a5-9876-65f2a4d0592d\" (UID: \"9335d366-9845-44a5-9876-65f2a4d0592d\") " Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.555332 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frqwb\" (UniqueName: \"kubernetes.io/projected/9335d366-9845-44a5-9876-65f2a4d0592d-kube-api-access-frqwb\") pod \"9335d366-9845-44a5-9876-65f2a4d0592d\" (UID: \"9335d366-9845-44a5-9876-65f2a4d0592d\") " Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.555440 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9335d366-9845-44a5-9876-65f2a4d0592d-utilities\") pod \"9335d366-9845-44a5-9876-65f2a4d0592d\" (UID: \"9335d366-9845-44a5-9876-65f2a4d0592d\") " Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.556889 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j8nx2" event={"ID":"34800d6c-2287-427a-93c1-b227a3e4734d","Type":"ContainerStarted","Data":"2e2e57e9875f35d7acbb6bab7a1100db30d6de33fea1b651bdf4d1755c9de5a0"} Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.558038 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9335d366-9845-44a5-9876-65f2a4d0592d-utilities" (OuterVolumeSpecName: "utilities") pod "9335d366-9845-44a5-9876-65f2a4d0592d" (UID: "9335d366-9845-44a5-9876-65f2a4d0592d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.560749 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9335d366-9845-44a5-9876-65f2a4d0592d-kube-api-access-frqwb" (OuterVolumeSpecName: "kube-api-access-frqwb") pod "9335d366-9845-44a5-9876-65f2a4d0592d" (UID: "9335d366-9845-44a5-9876-65f2a4d0592d"). InnerVolumeSpecName "kube-api-access-frqwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.594509 4624 scope.go:117] "RemoveContainer" containerID="6d51a3567be86e34756bf52ab703886d61d483a2e67926313a125a3fe3a1b3c2" Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.596031 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-rm4d2"] Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.615120 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9335d366-9845-44a5-9876-65f2a4d0592d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9335d366-9845-44a5-9876-65f2a4d0592d" (UID: "9335d366-9845-44a5-9876-65f2a4d0592d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.629998 4624 scope.go:117] "RemoveContainer" containerID="2e0860c1f2bd826464454bbf72fd60f240cbb7d64a8220a6ef92a280e365e83c" Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.643334 4624 scope.go:117] "RemoveContainer" containerID="4ef53b8652f1ff1ce820c78f28fbbb27573b876079eca040fd3d1640be46f676" Oct 08 14:36:12 crc kubenswrapper[4624]: E1008 14:36:12.644279 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ef53b8652f1ff1ce820c78f28fbbb27573b876079eca040fd3d1640be46f676\": container with ID starting with 4ef53b8652f1ff1ce820c78f28fbbb27573b876079eca040fd3d1640be46f676 not found: ID does not exist" containerID="4ef53b8652f1ff1ce820c78f28fbbb27573b876079eca040fd3d1640be46f676" Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.644550 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ef53b8652f1ff1ce820c78f28fbbb27573b876079eca040fd3d1640be46f676"} err="failed to get container status \"4ef53b8652f1ff1ce820c78f28fbbb27573b876079eca040fd3d1640be46f676\": rpc error: code = NotFound desc = could not find container \"4ef53b8652f1ff1ce820c78f28fbbb27573b876079eca040fd3d1640be46f676\": container with ID starting with 4ef53b8652f1ff1ce820c78f28fbbb27573b876079eca040fd3d1640be46f676 not found: ID does not exist" Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.644580 4624 scope.go:117] "RemoveContainer" containerID="6d51a3567be86e34756bf52ab703886d61d483a2e67926313a125a3fe3a1b3c2" Oct 08 14:36:12 crc kubenswrapper[4624]: E1008 14:36:12.645143 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d51a3567be86e34756bf52ab703886d61d483a2e67926313a125a3fe3a1b3c2\": container with ID starting with 6d51a3567be86e34756bf52ab703886d61d483a2e67926313a125a3fe3a1b3c2 not found: ID does not exist" containerID="6d51a3567be86e34756bf52ab703886d61d483a2e67926313a125a3fe3a1b3c2" Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.645172 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d51a3567be86e34756bf52ab703886d61d483a2e67926313a125a3fe3a1b3c2"} err="failed to get container status \"6d51a3567be86e34756bf52ab703886d61d483a2e67926313a125a3fe3a1b3c2\": rpc error: code = NotFound desc = could not find container \"6d51a3567be86e34756bf52ab703886d61d483a2e67926313a125a3fe3a1b3c2\": container with ID starting with 6d51a3567be86e34756bf52ab703886d61d483a2e67926313a125a3fe3a1b3c2 not found: ID does not exist" Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 
14:36:12.645191 4624 scope.go:117] "RemoveContainer" containerID="2e0860c1f2bd826464454bbf72fd60f240cbb7d64a8220a6ef92a280e365e83c" Oct 08 14:36:12 crc kubenswrapper[4624]: E1008 14:36:12.645817 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e0860c1f2bd826464454bbf72fd60f240cbb7d64a8220a6ef92a280e365e83c\": container with ID starting with 2e0860c1f2bd826464454bbf72fd60f240cbb7d64a8220a6ef92a280e365e83c not found: ID does not exist" containerID="2e0860c1f2bd826464454bbf72fd60f240cbb7d64a8220a6ef92a280e365e83c" Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.645842 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e0860c1f2bd826464454bbf72fd60f240cbb7d64a8220a6ef92a280e365e83c"} err="failed to get container status \"2e0860c1f2bd826464454bbf72fd60f240cbb7d64a8220a6ef92a280e365e83c\": rpc error: code = NotFound desc = could not find container \"2e0860c1f2bd826464454bbf72fd60f240cbb7d64a8220a6ef92a280e365e83c\": container with ID starting with 2e0860c1f2bd826464454bbf72fd60f240cbb7d64a8220a6ef92a280e365e83c not found: ID does not exist" Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.657998 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9335d366-9845-44a5-9876-65f2a4d0592d-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.658028 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9335d366-9845-44a5-9876-65f2a4d0592d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.658038 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frqwb\" (UniqueName: \"kubernetes.io/projected/9335d366-9845-44a5-9876-65f2a4d0592d-kube-api-access-frqwb\") on node \"crc\" DevicePath \"\"" Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.860985 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/25542429-7dd5-4d22-a273-38386ed868ac-memberlist\") pod \"speaker-vs5mh\" (UID: \"25542429-7dd5-4d22-a273-38386ed868ac\") " pod="metallb-system/speaker-vs5mh" Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.863979 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/25542429-7dd5-4d22-a273-38386ed868ac-memberlist\") pod \"speaker-vs5mh\" (UID: \"25542429-7dd5-4d22-a273-38386ed868ac\") " pod="metallb-system/speaker-vs5mh" Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.883891 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-scrbz"] Oct 08 14:36:12 crc kubenswrapper[4624]: I1008 14:36:12.887322 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-scrbz"] Oct 08 14:36:13 crc kubenswrapper[4624]: I1008 14:36:13.005942 4624 util.go:30] "No sandbox for pod can be found. 
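
The RemoveContainer / "ContainerStatus from runtime service failed ... NotFound" pairs above are noisy but harmless: the three containers of the deleted catalog pod were already gone from CRI-O by the time the kubelet re-queried them, and a NotFound during deletion is treated as success rather than retried. The idempotent-delete pattern, sketched against the real gRPC status package (the helper itself is hypothetical):

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeIfPresent treats NotFound from the runtime as "already deleted",
    // which is how the kubelet resolves the races in the log above.
    func removeIfPresent(remove func(id string) error, id string) error {
    	err := remove(id)
    	if status.Code(err) == codes.NotFound {
    		return nil // already gone: nothing left to delete
    	}
    	return err
    }

    func main() {
    	gone := func(id string) error {
    		return status.Error(codes.NotFound, "could not find container "+id)
    	}
    	fmt.Println(removeIfPresent(gone, "4ef53b8652f1")) // <nil>
    }
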
Need to start a new one" pod="metallb-system/speaker-vs5mh" Oct 08 14:36:13 crc kubenswrapper[4624]: W1008 14:36:13.021980 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25542429_7dd5_4d22_a273_38386ed868ac.slice/crio-5122ca9ca8c694f361508246500fd94e3486024109db23bb132a7b0054d81682 WatchSource:0}: Error finding container 5122ca9ca8c694f361508246500fd94e3486024109db23bb132a7b0054d81682: Status 404 returned error can't find the container with id 5122ca9ca8c694f361508246500fd94e3486024109db23bb132a7b0054d81682 Oct 08 14:36:13 crc kubenswrapper[4624]: I1008 14:36:13.474494 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9335d366-9845-44a5-9876-65f2a4d0592d" path="/var/lib/kubelet/pods/9335d366-9845-44a5-9876-65f2a4d0592d/volumes" Oct 08 14:36:13 crc kubenswrapper[4624]: I1008 14:36:13.566360 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-rm4d2" event={"ID":"43a3eaca-a2c8-4508-996d-b6c977997ea7","Type":"ContainerStarted","Data":"9dd0f44cf052960200ae508824a3cf127cea3b4ce3773d74aabfc8aee66ff2f1"} Oct 08 14:36:13 crc kubenswrapper[4624]: I1008 14:36:13.566801 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-rm4d2" event={"ID":"43a3eaca-a2c8-4508-996d-b6c977997ea7","Type":"ContainerStarted","Data":"caf322355d89a09d24f7c4e8b63c5205a9e7054ccaa6498c3495eea3a2e9e9b2"} Oct 08 14:36:13 crc kubenswrapper[4624]: I1008 14:36:13.566836 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-68d546b9d8-rm4d2" Oct 08 14:36:13 crc kubenswrapper[4624]: I1008 14:36:13.566848 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-rm4d2" event={"ID":"43a3eaca-a2c8-4508-996d-b6c977997ea7","Type":"ContainerStarted","Data":"f35e12366afdcf11e56db0f6c5b1e7bd0dca708bbf874d7f99648d6568a48f7e"} Oct 08 14:36:13 crc kubenswrapper[4624]: I1008 14:36:13.573774 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vs5mh" event={"ID":"25542429-7dd5-4d22-a273-38386ed868ac","Type":"ContainerStarted","Data":"e0a18feaee54ae39956bb573200c01adf490ddf28a8c4c1a5adab9f34e009d58"} Oct 08 14:36:13 crc kubenswrapper[4624]: I1008 14:36:13.573813 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vs5mh" event={"ID":"25542429-7dd5-4d22-a273-38386ed868ac","Type":"ContainerStarted","Data":"f9eb54ea05d5981fe69c3fc827189b919a454f4af99531a9df8a308f8e8ef32c"} Oct 08 14:36:13 crc kubenswrapper[4624]: I1008 14:36:13.573823 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vs5mh" event={"ID":"25542429-7dd5-4d22-a273-38386ed868ac","Type":"ContainerStarted","Data":"5122ca9ca8c694f361508246500fd94e3486024109db23bb132a7b0054d81682"} Oct 08 14:36:13 crc kubenswrapper[4624]: I1008 14:36:13.574345 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-vs5mh" Oct 08 14:36:13 crc kubenswrapper[4624]: I1008 14:36:13.589363 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-68d546b9d8-rm4d2" podStartSLOduration=2.589341019 podStartE2EDuration="2.589341019s" podCreationTimestamp="2025-10-08 14:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:36:13.584566997 +0000 UTC 
m=+798.735502074" watchObservedRunningTime="2025-10-08 14:36:13.589341019 +0000 UTC m=+798.740276096" Oct 08 14:36:13 crc kubenswrapper[4624]: I1008 14:36:13.609988 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-vs5mh" podStartSLOduration=2.609966489 podStartE2EDuration="2.609966489s" podCreationTimestamp="2025-10-08 14:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:36:13.601208014 +0000 UTC m=+798.752143101" watchObservedRunningTime="2025-10-08 14:36:13.609966489 +0000 UTC m=+798.760901566" Oct 08 14:36:18 crc kubenswrapper[4624]: I1008 14:36:18.398063 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sd2tk"] Oct 08 14:36:18 crc kubenswrapper[4624]: E1008 14:36:18.398608 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9335d366-9845-44a5-9876-65f2a4d0592d" containerName="extract-content" Oct 08 14:36:18 crc kubenswrapper[4624]: I1008 14:36:18.398620 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9335d366-9845-44a5-9876-65f2a4d0592d" containerName="extract-content" Oct 08 14:36:18 crc kubenswrapper[4624]: E1008 14:36:18.398697 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9335d366-9845-44a5-9876-65f2a4d0592d" containerName="extract-utilities" Oct 08 14:36:18 crc kubenswrapper[4624]: I1008 14:36:18.398706 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9335d366-9845-44a5-9876-65f2a4d0592d" containerName="extract-utilities" Oct 08 14:36:18 crc kubenswrapper[4624]: E1008 14:36:18.398721 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9335d366-9845-44a5-9876-65f2a4d0592d" containerName="registry-server" Oct 08 14:36:18 crc kubenswrapper[4624]: I1008 14:36:18.398728 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9335d366-9845-44a5-9876-65f2a4d0592d" containerName="registry-server" Oct 08 14:36:18 crc kubenswrapper[4624]: I1008 14:36:18.399093 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9335d366-9845-44a5-9876-65f2a4d0592d" containerName="registry-server" Oct 08 14:36:18 crc kubenswrapper[4624]: I1008 14:36:18.406862 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sd2tk" Oct 08 14:36:18 crc kubenswrapper[4624]: I1008 14:36:18.419564 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sd2tk"] Oct 08 14:36:18 crc kubenswrapper[4624]: I1008 14:36:18.472266 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7tsk\" (UniqueName: \"kubernetes.io/projected/65537949-46ea-4a13-9bce-59f0b5b5f093-kube-api-access-n7tsk\") pod \"redhat-operators-sd2tk\" (UID: \"65537949-46ea-4a13-9bce-59f0b5b5f093\") " pod="openshift-marketplace/redhat-operators-sd2tk" Oct 08 14:36:18 crc kubenswrapper[4624]: I1008 14:36:18.472681 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65537949-46ea-4a13-9bce-59f0b5b5f093-utilities\") pod \"redhat-operators-sd2tk\" (UID: \"65537949-46ea-4a13-9bce-59f0b5b5f093\") " pod="openshift-marketplace/redhat-operators-sd2tk" Oct 08 14:36:18 crc kubenswrapper[4624]: I1008 14:36:18.472737 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65537949-46ea-4a13-9bce-59f0b5b5f093-catalog-content\") pod \"redhat-operators-sd2tk\" (UID: \"65537949-46ea-4a13-9bce-59f0b5b5f093\") " pod="openshift-marketplace/redhat-operators-sd2tk" Oct 08 14:36:18 crc kubenswrapper[4624]: I1008 14:36:18.574361 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7tsk\" (UniqueName: \"kubernetes.io/projected/65537949-46ea-4a13-9bce-59f0b5b5f093-kube-api-access-n7tsk\") pod \"redhat-operators-sd2tk\" (UID: \"65537949-46ea-4a13-9bce-59f0b5b5f093\") " pod="openshift-marketplace/redhat-operators-sd2tk" Oct 08 14:36:18 crc kubenswrapper[4624]: I1008 14:36:18.574424 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65537949-46ea-4a13-9bce-59f0b5b5f093-utilities\") pod \"redhat-operators-sd2tk\" (UID: \"65537949-46ea-4a13-9bce-59f0b5b5f093\") " pod="openshift-marketplace/redhat-operators-sd2tk" Oct 08 14:36:18 crc kubenswrapper[4624]: I1008 14:36:18.574498 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65537949-46ea-4a13-9bce-59f0b5b5f093-catalog-content\") pod \"redhat-operators-sd2tk\" (UID: \"65537949-46ea-4a13-9bce-59f0b5b5f093\") " pod="openshift-marketplace/redhat-operators-sd2tk" Oct 08 14:36:18 crc kubenswrapper[4624]: I1008 14:36:18.575001 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65537949-46ea-4a13-9bce-59f0b5b5f093-catalog-content\") pod \"redhat-operators-sd2tk\" (UID: \"65537949-46ea-4a13-9bce-59f0b5b5f093\") " pod="openshift-marketplace/redhat-operators-sd2tk" Oct 08 14:36:18 crc kubenswrapper[4624]: I1008 14:36:18.575264 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65537949-46ea-4a13-9bce-59f0b5b5f093-utilities\") pod \"redhat-operators-sd2tk\" (UID: \"65537949-46ea-4a13-9bce-59f0b5b5f093\") " pod="openshift-marketplace/redhat-operators-sd2tk" Oct 08 14:36:18 crc kubenswrapper[4624]: I1008 14:36:18.609482 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n7tsk\" (UniqueName: \"kubernetes.io/projected/65537949-46ea-4a13-9bce-59f0b5b5f093-kube-api-access-n7tsk\") pod \"redhat-operators-sd2tk\" (UID: \"65537949-46ea-4a13-9bce-59f0b5b5f093\") " pod="openshift-marketplace/redhat-operators-sd2tk" Oct 08 14:36:18 crc kubenswrapper[4624]: I1008 14:36:18.740906 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sd2tk" Oct 08 14:36:21 crc kubenswrapper[4624]: I1008 14:36:21.063187 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sd2tk"] Oct 08 14:36:21 crc kubenswrapper[4624]: W1008 14:36:21.066946 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65537949_46ea_4a13_9bce_59f0b5b5f093.slice/crio-e7303c7788e03060566a2bdf69de5b2e3bc2317200067215dcf3acd3287e9c1a WatchSource:0}: Error finding container e7303c7788e03060566a2bdf69de5b2e3bc2317200067215dcf3acd3287e9c1a: Status 404 returned error can't find the container with id e7303c7788e03060566a2bdf69de5b2e3bc2317200067215dcf3acd3287e9c1a Oct 08 14:36:21 crc kubenswrapper[4624]: I1008 14:36:21.656098 4624 generic.go:334] "Generic (PLEG): container finished" podID="34800d6c-2287-427a-93c1-b227a3e4734d" containerID="c1a2e88429687c64c17c9a5750aaf983d76b83bf81cc76dbfa35cd0f45a6afb7" exitCode=0 Oct 08 14:36:21 crc kubenswrapper[4624]: I1008 14:36:21.656185 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j8nx2" event={"ID":"34800d6c-2287-427a-93c1-b227a3e4734d","Type":"ContainerDied","Data":"c1a2e88429687c64c17c9a5750aaf983d76b83bf81cc76dbfa35cd0f45a6afb7"} Oct 08 14:36:21 crc kubenswrapper[4624]: I1008 14:36:21.657493 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-bf8cm" event={"ID":"e36921e5-1d1e-420d-9dc1-f6aaad1bf904","Type":"ContainerStarted","Data":"189d5603068b06a36c1a7aadedb21dcb3bfa12ac675892066b4b3a16af9b86e1"} Oct 08 14:36:21 crc kubenswrapper[4624]: I1008 14:36:21.657763 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-bf8cm" Oct 08 14:36:21 crc kubenswrapper[4624]: I1008 14:36:21.660227 4624 generic.go:334] "Generic (PLEG): container finished" podID="65537949-46ea-4a13-9bce-59f0b5b5f093" containerID="4dfcf16ac4224ee3a91211118a5126532038f7df2b2b990944c85d1a2aab8051" exitCode=0 Oct 08 14:36:21 crc kubenswrapper[4624]: I1008 14:36:21.660267 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sd2tk" event={"ID":"65537949-46ea-4a13-9bce-59f0b5b5f093","Type":"ContainerDied","Data":"4dfcf16ac4224ee3a91211118a5126532038f7df2b2b990944c85d1a2aab8051"} Oct 08 14:36:21 crc kubenswrapper[4624]: I1008 14:36:21.660307 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sd2tk" event={"ID":"65537949-46ea-4a13-9bce-59f0b5b5f093","Type":"ContainerStarted","Data":"e7303c7788e03060566a2bdf69de5b2e3bc2317200067215dcf3acd3287e9c1a"} Oct 08 14:36:21 crc kubenswrapper[4624]: I1008 14:36:21.741434 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-bf8cm" podStartSLOduration=1.914586907 podStartE2EDuration="10.7414157s" podCreationTimestamp="2025-10-08 14:36:11 +0000 UTC" firstStartedPulling="2025-10-08 14:36:11.86840965 +0000 UTC m=+797.019344727" 
lastFinishedPulling="2025-10-08 14:36:20.695238443 +0000 UTC m=+805.846173520" observedRunningTime="2025-10-08 14:36:21.737278764 +0000 UTC m=+806.888213861" watchObservedRunningTime="2025-10-08 14:36:21.7414157 +0000 UTC m=+806.892350777" Oct 08 14:36:22 crc kubenswrapper[4624]: I1008 14:36:22.147953 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-68d546b9d8-rm4d2" Oct 08 14:36:23 crc kubenswrapper[4624]: I1008 14:36:23.010251 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-vs5mh" Oct 08 14:36:23 crc kubenswrapper[4624]: I1008 14:36:23.678573 4624 generic.go:334] "Generic (PLEG): container finished" podID="65537949-46ea-4a13-9bce-59f0b5b5f093" containerID="8aa8188fac3d160de2d13fd87b7d446dbd33bd3ae851cdf309a7b7b084858f63" exitCode=0 Oct 08 14:36:23 crc kubenswrapper[4624]: I1008 14:36:23.678658 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sd2tk" event={"ID":"65537949-46ea-4a13-9bce-59f0b5b5f093","Type":"ContainerDied","Data":"8aa8188fac3d160de2d13fd87b7d446dbd33bd3ae851cdf309a7b7b084858f63"} Oct 08 14:36:23 crc kubenswrapper[4624]: I1008 14:36:23.680857 4624 generic.go:334] "Generic (PLEG): container finished" podID="34800d6c-2287-427a-93c1-b227a3e4734d" containerID="d811870c577e3dd6b4b49db4a20c82a8631a13ba0babc7c77974356011e59860" exitCode=0 Oct 08 14:36:23 crc kubenswrapper[4624]: I1008 14:36:23.680908 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j8nx2" event={"ID":"34800d6c-2287-427a-93c1-b227a3e4734d","Type":"ContainerDied","Data":"d811870c577e3dd6b4b49db4a20c82a8631a13ba0babc7c77974356011e59860"} Oct 08 14:36:24 crc kubenswrapper[4624]: I1008 14:36:24.689181 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sd2tk" event={"ID":"65537949-46ea-4a13-9bce-59f0b5b5f093","Type":"ContainerStarted","Data":"a88ee158e1f84427892b67205eef62942611cda49aa72266cd707feb0ecfee64"} Oct 08 14:36:24 crc kubenswrapper[4624]: I1008 14:36:24.691421 4624 generic.go:334] "Generic (PLEG): container finished" podID="34800d6c-2287-427a-93c1-b227a3e4734d" containerID="a646b5e104f1cd887f5f7dc368e5eb289366d5eb981f142e84299c8bc6ffaa0f" exitCode=0 Oct 08 14:36:24 crc kubenswrapper[4624]: I1008 14:36:24.691462 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j8nx2" event={"ID":"34800d6c-2287-427a-93c1-b227a3e4734d","Type":"ContainerDied","Data":"a646b5e104f1cd887f5f7dc368e5eb289366d5eb981f142e84299c8bc6ffaa0f"} Oct 08 14:36:24 crc kubenswrapper[4624]: I1008 14:36:24.716147 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sd2tk" podStartSLOduration=4.248521863 podStartE2EDuration="6.716130781s" podCreationTimestamp="2025-10-08 14:36:18 +0000 UTC" firstStartedPulling="2025-10-08 14:36:21.661096326 +0000 UTC m=+806.812031403" lastFinishedPulling="2025-10-08 14:36:24.128705234 +0000 UTC m=+809.279640321" observedRunningTime="2025-10-08 14:36:24.716108991 +0000 UTC m=+809.867044068" watchObservedRunningTime="2025-10-08 14:36:24.716130781 +0000 UTC m=+809.867065858" Oct 08 14:36:25 crc kubenswrapper[4624]: I1008 14:36:25.705921 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j8nx2" event={"ID":"34800d6c-2287-427a-93c1-b227a3e4734d","Type":"ContainerStarted","Data":"fe28d2b32641513410d4e9bb590aea34228e35a9e5364740888b44db9bf2a5f3"} Oct 08 
14:36:25 crc kubenswrapper[4624]: I1008 14:36:25.706295 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j8nx2" event={"ID":"34800d6c-2287-427a-93c1-b227a3e4734d","Type":"ContainerStarted","Data":"40d047fa35c4d08356156b472c0c735a2147cc940a059179ab3962a250f7e117"} Oct 08 14:36:25 crc kubenswrapper[4624]: I1008 14:36:25.706311 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j8nx2" event={"ID":"34800d6c-2287-427a-93c1-b227a3e4734d","Type":"ContainerStarted","Data":"d89f429bc872fcda4c68de9fc458b68f48f9653dab0112f8d56fcbc5e5bc845c"} Oct 08 14:36:25 crc kubenswrapper[4624]: I1008 14:36:25.706323 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j8nx2" event={"ID":"34800d6c-2287-427a-93c1-b227a3e4734d","Type":"ContainerStarted","Data":"db8592ef591e2adc8f9c4bef5b40209dc14386f6c63ae0a8a8659d840994cdf5"} Oct 08 14:36:25 crc kubenswrapper[4624]: I1008 14:36:25.706336 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j8nx2" event={"ID":"34800d6c-2287-427a-93c1-b227a3e4734d","Type":"ContainerStarted","Data":"6b8cb502052773bce6acf14c03fd2b728dcbb83ca99f3a52fcafdca622b68b7e"} Oct 08 14:36:26 crc kubenswrapper[4624]: I1008 14:36:26.715205 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j8nx2" event={"ID":"34800d6c-2287-427a-93c1-b227a3e4734d","Type":"ContainerStarted","Data":"409c444357f4f833ae682e06a5ba61418708ae0e4c56bb62245ce4dbcfc85f1e"} Oct 08 14:36:26 crc kubenswrapper[4624]: I1008 14:36:26.715421 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:26 crc kubenswrapper[4624]: I1008 14:36:26.737947 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-j8nx2" podStartSLOduration=6.634393739 podStartE2EDuration="15.737930122s" podCreationTimestamp="2025-10-08 14:36:11 +0000 UTC" firstStartedPulling="2025-10-08 14:36:11.57030885 +0000 UTC m=+796.721243927" lastFinishedPulling="2025-10-08 14:36:20.673845233 +0000 UTC m=+805.824780310" observedRunningTime="2025-10-08 14:36:26.735461058 +0000 UTC m=+811.886396145" watchObservedRunningTime="2025-10-08 14:36:26.737930122 +0000 UTC m=+811.888865199" Oct 08 14:36:27 crc kubenswrapper[4624]: I1008 14:36:27.795371 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-pcpbt"] Oct 08 14:36:27 crc kubenswrapper[4624]: I1008 14:36:27.796194 4624 util.go:30] "No sandbox for pod can be found. 
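
The frr-k8s-j8nx2 sequence here is an init-container pipeline completing: three containers finish one at a time with exitCode=0 (14:36:21, :23, :24), and only then does a burst of six ContainerStarted events fire at 14:36:25-26. That matches frr-k8s's usual shape of several file-copy init containers followed by its FRR, reloader, metrics, and rbac-proxy containers, though this excerpt only shows container IDs, so the names are inferred rather than logged. A client-go sketch for viewing that split on a live cluster:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	pod, err := cs.CoreV1().Pods("metallb-system").Get(
    		context.TODO(), "frr-k8s-j8nx2", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Init containers must all exit 0 before any regular container
    	// starts, which is the ordering the PLEG events above show.
    	for _, s := range pod.Status.InitContainerStatuses {
    		fmt.Println("init:", s.Name, "exited:", s.State.Terminated != nil)
    	}
    	for _, s := range pod.Status.ContainerStatuses {
    		fmt.Println("main:", s.Name, "ready:", s.Ready)
    	}
    }
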
Need to start a new one" pod="openstack-operators/openstack-operator-index-pcpbt" Oct 08 14:36:27 crc kubenswrapper[4624]: I1008 14:36:27.797931 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-gztv7" Oct 08 14:36:27 crc kubenswrapper[4624]: I1008 14:36:27.798222 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 08 14:36:27 crc kubenswrapper[4624]: I1008 14:36:27.798222 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 08 14:36:27 crc kubenswrapper[4624]: I1008 14:36:27.837225 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pcpbt"] Oct 08 14:36:27 crc kubenswrapper[4624]: I1008 14:36:27.915760 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7886q\" (UniqueName: \"kubernetes.io/projected/5dc2ed19-149e-4394-9186-33ad7f6dcde8-kube-api-access-7886q\") pod \"openstack-operator-index-pcpbt\" (UID: \"5dc2ed19-149e-4394-9186-33ad7f6dcde8\") " pod="openstack-operators/openstack-operator-index-pcpbt" Oct 08 14:36:28 crc kubenswrapper[4624]: I1008 14:36:28.017470 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7886q\" (UniqueName: \"kubernetes.io/projected/5dc2ed19-149e-4394-9186-33ad7f6dcde8-kube-api-access-7886q\") pod \"openstack-operator-index-pcpbt\" (UID: \"5dc2ed19-149e-4394-9186-33ad7f6dcde8\") " pod="openstack-operators/openstack-operator-index-pcpbt" Oct 08 14:36:28 crc kubenswrapper[4624]: I1008 14:36:28.037423 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7886q\" (UniqueName: \"kubernetes.io/projected/5dc2ed19-149e-4394-9186-33ad7f6dcde8-kube-api-access-7886q\") pod \"openstack-operator-index-pcpbt\" (UID: \"5dc2ed19-149e-4394-9186-33ad7f6dcde8\") " pod="openstack-operators/openstack-operator-index-pcpbt" Oct 08 14:36:28 crc kubenswrapper[4624]: I1008 14:36:28.110130 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pcpbt" Oct 08 14:36:28 crc kubenswrapper[4624]: I1008 14:36:28.570593 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pcpbt"] Oct 08 14:36:28 crc kubenswrapper[4624]: I1008 14:36:28.733776 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pcpbt" event={"ID":"5dc2ed19-149e-4394-9186-33ad7f6dcde8","Type":"ContainerStarted","Data":"be9f6b1c1f9dcf5cc3001ccf306fcc6d8f16a31f9627338bcab1ddf409cf0d52"} Oct 08 14:36:28 crc kubenswrapper[4624]: I1008 14:36:28.742185 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sd2tk" Oct 08 14:36:28 crc kubenswrapper[4624]: I1008 14:36:28.742236 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sd2tk" Oct 08 14:36:29 crc kubenswrapper[4624]: I1008 14:36:29.787894 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sd2tk" podUID="65537949-46ea-4a13-9bce-59f0b5b5f093" containerName="registry-server" probeResult="failure" output=< Oct 08 14:36:29 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 14:36:29 crc kubenswrapper[4624]: > Oct 08 14:36:31 crc kubenswrapper[4624]: I1008 14:36:31.373210 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:31 crc kubenswrapper[4624]: I1008 14:36:31.401047 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-bf8cm" Oct 08 14:36:31 crc kubenswrapper[4624]: I1008 14:36:31.413133 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:31 crc kubenswrapper[4624]: I1008 14:36:31.990508 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-pcpbt"] Oct 08 14:36:32 crc kubenswrapper[4624]: I1008 14:36:32.400607 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-9vmx2"] Oct 08 14:36:32 crc kubenswrapper[4624]: I1008 14:36:32.401937 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9vmx2" Oct 08 14:36:32 crc kubenswrapper[4624]: I1008 14:36:32.405525 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9vmx2"] Oct 08 14:36:32 crc kubenswrapper[4624]: I1008 14:36:32.493705 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2566w\" (UniqueName: \"kubernetes.io/projected/7c422ddf-650f-4f8d-828e-12a834b70bab-kube-api-access-2566w\") pod \"openstack-operator-index-9vmx2\" (UID: \"7c422ddf-650f-4f8d-828e-12a834b70bab\") " pod="openstack-operators/openstack-operator-index-9vmx2" Oct 08 14:36:32 crc kubenswrapper[4624]: I1008 14:36:32.594905 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2566w\" (UniqueName: \"kubernetes.io/projected/7c422ddf-650f-4f8d-828e-12a834b70bab-kube-api-access-2566w\") pod \"openstack-operator-index-9vmx2\" (UID: \"7c422ddf-650f-4f8d-828e-12a834b70bab\") " pod="openstack-operators/openstack-operator-index-9vmx2" Oct 08 14:36:32 crc kubenswrapper[4624]: I1008 14:36:32.613039 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2566w\" (UniqueName: \"kubernetes.io/projected/7c422ddf-650f-4f8d-828e-12a834b70bab-kube-api-access-2566w\") pod \"openstack-operator-index-9vmx2\" (UID: \"7c422ddf-650f-4f8d-828e-12a834b70bab\") " pod="openstack-operators/openstack-operator-index-9vmx2" Oct 08 14:36:32 crc kubenswrapper[4624]: I1008 14:36:32.724333 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9vmx2" Oct 08 14:36:34 crc kubenswrapper[4624]: I1008 14:36:34.038587 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9vmx2"] Oct 08 14:36:34 crc kubenswrapper[4624]: I1008 14:36:34.775389 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9vmx2" event={"ID":"7c422ddf-650f-4f8d-828e-12a834b70bab","Type":"ContainerStarted","Data":"09728431311b213220753111c82b0eb5ad3326a95c3b667c93a9b7c7e86a4cad"} Oct 08 14:36:34 crc kubenswrapper[4624]: I1008 14:36:34.776048 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9vmx2" event={"ID":"7c422ddf-650f-4f8d-828e-12a834b70bab","Type":"ContainerStarted","Data":"a97c007eaef2017d1d599c401b68d461eb06616dbbde0debb85eec297adb4a63"} Oct 08 14:36:34 crc kubenswrapper[4624]: I1008 14:36:34.778705 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pcpbt" event={"ID":"5dc2ed19-149e-4394-9186-33ad7f6dcde8","Type":"ContainerStarted","Data":"04da4480fb2a0a8924065a410dbb9705ab073d9baab7e367b8915dee4d1b2774"} Oct 08 14:36:34 crc kubenswrapper[4624]: I1008 14:36:34.778915 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-pcpbt" podUID="5dc2ed19-149e-4394-9186-33ad7f6dcde8" containerName="registry-server" containerID="cri-o://04da4480fb2a0a8924065a410dbb9705ab073d9baab7e367b8915dee4d1b2774" gracePeriod=2 Oct 08 14:36:34 crc kubenswrapper[4624]: I1008 14:36:34.802100 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-9vmx2" podStartSLOduration=2.748580878 podStartE2EDuration="2.801989601s" podCreationTimestamp="2025-10-08 14:36:32 +0000 
UTC" firstStartedPulling="2025-10-08 14:36:34.052211591 +0000 UTC m=+819.203146678" lastFinishedPulling="2025-10-08 14:36:34.105620324 +0000 UTC m=+819.256555401" observedRunningTime="2025-10-08 14:36:34.797608908 +0000 UTC m=+819.948543995" watchObservedRunningTime="2025-10-08 14:36:34.801989601 +0000 UTC m=+819.952924728" Oct 08 14:36:34 crc kubenswrapper[4624]: I1008 14:36:34.825352 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-pcpbt" podStartSLOduration=2.668028577 podStartE2EDuration="7.825326131s" podCreationTimestamp="2025-10-08 14:36:27 +0000 UTC" firstStartedPulling="2025-10-08 14:36:28.58102509 +0000 UTC m=+813.731960167" lastFinishedPulling="2025-10-08 14:36:33.738322644 +0000 UTC m=+818.889257721" observedRunningTime="2025-10-08 14:36:34.823548505 +0000 UTC m=+819.974483592" watchObservedRunningTime="2025-10-08 14:36:34.825326131 +0000 UTC m=+819.976261208" Oct 08 14:36:35 crc kubenswrapper[4624]: I1008 14:36:35.155279 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pcpbt" Oct 08 14:36:35 crc kubenswrapper[4624]: I1008 14:36:35.228564 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7886q\" (UniqueName: \"kubernetes.io/projected/5dc2ed19-149e-4394-9186-33ad7f6dcde8-kube-api-access-7886q\") pod \"5dc2ed19-149e-4394-9186-33ad7f6dcde8\" (UID: \"5dc2ed19-149e-4394-9186-33ad7f6dcde8\") " Oct 08 14:36:35 crc kubenswrapper[4624]: I1008 14:36:35.234329 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dc2ed19-149e-4394-9186-33ad7f6dcde8-kube-api-access-7886q" (OuterVolumeSpecName: "kube-api-access-7886q") pod "5dc2ed19-149e-4394-9186-33ad7f6dcde8" (UID: "5dc2ed19-149e-4394-9186-33ad7f6dcde8"). InnerVolumeSpecName "kube-api-access-7886q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:36:35 crc kubenswrapper[4624]: I1008 14:36:35.330732 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7886q\" (UniqueName: \"kubernetes.io/projected/5dc2ed19-149e-4394-9186-33ad7f6dcde8-kube-api-access-7886q\") on node \"crc\" DevicePath \"\"" Oct 08 14:36:35 crc kubenswrapper[4624]: I1008 14:36:35.783563 4624 generic.go:334] "Generic (PLEG): container finished" podID="5dc2ed19-149e-4394-9186-33ad7f6dcde8" containerID="04da4480fb2a0a8924065a410dbb9705ab073d9baab7e367b8915dee4d1b2774" exitCode=0 Oct 08 14:36:35 crc kubenswrapper[4624]: I1008 14:36:35.783610 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pcpbt" Oct 08 14:36:35 crc kubenswrapper[4624]: I1008 14:36:35.783667 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pcpbt" event={"ID":"5dc2ed19-149e-4394-9186-33ad7f6dcde8","Type":"ContainerDied","Data":"04da4480fb2a0a8924065a410dbb9705ab073d9baab7e367b8915dee4d1b2774"} Oct 08 14:36:35 crc kubenswrapper[4624]: I1008 14:36:35.783708 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pcpbt" event={"ID":"5dc2ed19-149e-4394-9186-33ad7f6dcde8","Type":"ContainerDied","Data":"be9f6b1c1f9dcf5cc3001ccf306fcc6d8f16a31f9627338bcab1ddf409cf0d52"} Oct 08 14:36:35 crc kubenswrapper[4624]: I1008 14:36:35.783729 4624 scope.go:117] "RemoveContainer" containerID="04da4480fb2a0a8924065a410dbb9705ab073d9baab7e367b8915dee4d1b2774" Oct 08 14:36:35 crc kubenswrapper[4624]: I1008 14:36:35.803350 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-pcpbt"] Oct 08 14:36:35 crc kubenswrapper[4624]: I1008 14:36:35.806908 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-pcpbt"] Oct 08 14:36:35 crc kubenswrapper[4624]: I1008 14:36:35.823340 4624 scope.go:117] "RemoveContainer" containerID="04da4480fb2a0a8924065a410dbb9705ab073d9baab7e367b8915dee4d1b2774" Oct 08 14:36:35 crc kubenswrapper[4624]: E1008 14:36:35.823720 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04da4480fb2a0a8924065a410dbb9705ab073d9baab7e367b8915dee4d1b2774\": container with ID starting with 04da4480fb2a0a8924065a410dbb9705ab073d9baab7e367b8915dee4d1b2774 not found: ID does not exist" containerID="04da4480fb2a0a8924065a410dbb9705ab073d9baab7e367b8915dee4d1b2774" Oct 08 14:36:35 crc kubenswrapper[4624]: I1008 14:36:35.823770 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04da4480fb2a0a8924065a410dbb9705ab073d9baab7e367b8915dee4d1b2774"} err="failed to get container status \"04da4480fb2a0a8924065a410dbb9705ab073d9baab7e367b8915dee4d1b2774\": rpc error: code = NotFound desc = could not find container \"04da4480fb2a0a8924065a410dbb9705ab073d9baab7e367b8915dee4d1b2774\": container with ID starting with 04da4480fb2a0a8924065a410dbb9705ab073d9baab7e367b8915dee4d1b2774 not found: ID does not exist" Oct 08 14:36:37 crc kubenswrapper[4624]: I1008 14:36:37.474530 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dc2ed19-149e-4394-9186-33ad7f6dcde8" path="/var/lib/kubelet/pods/5dc2ed19-149e-4394-9186-33ad7f6dcde8/volumes" Oct 08 14:36:38 crc kubenswrapper[4624]: I1008 14:36:38.781351 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sd2tk" Oct 08 14:36:38 crc kubenswrapper[4624]: I1008 14:36:38.843280 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sd2tk" Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.187835 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sd2tk"] Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.188100 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sd2tk" podUID="65537949-46ea-4a13-9bce-59f0b5b5f093" 
containerName="registry-server" containerID="cri-o://a88ee158e1f84427892b67205eef62942611cda49aa72266cd707feb0ecfee64" gracePeriod=2 Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.376689 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-j8nx2" Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.645549 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sd2tk" Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.712413 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65537949-46ea-4a13-9bce-59f0b5b5f093-utilities\") pod \"65537949-46ea-4a13-9bce-59f0b5b5f093\" (UID: \"65537949-46ea-4a13-9bce-59f0b5b5f093\") " Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.712502 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65537949-46ea-4a13-9bce-59f0b5b5f093-catalog-content\") pod \"65537949-46ea-4a13-9bce-59f0b5b5f093\" (UID: \"65537949-46ea-4a13-9bce-59f0b5b5f093\") " Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.712545 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7tsk\" (UniqueName: \"kubernetes.io/projected/65537949-46ea-4a13-9bce-59f0b5b5f093-kube-api-access-n7tsk\") pod \"65537949-46ea-4a13-9bce-59f0b5b5f093\" (UID: \"65537949-46ea-4a13-9bce-59f0b5b5f093\") " Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.713398 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65537949-46ea-4a13-9bce-59f0b5b5f093-utilities" (OuterVolumeSpecName: "utilities") pod "65537949-46ea-4a13-9bce-59f0b5b5f093" (UID: "65537949-46ea-4a13-9bce-59f0b5b5f093"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.717807 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65537949-46ea-4a13-9bce-59f0b5b5f093-kube-api-access-n7tsk" (OuterVolumeSpecName: "kube-api-access-n7tsk") pod "65537949-46ea-4a13-9bce-59f0b5b5f093" (UID: "65537949-46ea-4a13-9bce-59f0b5b5f093"). InnerVolumeSpecName "kube-api-access-n7tsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.805517 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65537949-46ea-4a13-9bce-59f0b5b5f093-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65537949-46ea-4a13-9bce-59f0b5b5f093" (UID: "65537949-46ea-4a13-9bce-59f0b5b5f093"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.814059 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65537949-46ea-4a13-9bce-59f0b5b5f093-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.814097 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65537949-46ea-4a13-9bce-59f0b5b5f093-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.814115 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7tsk\" (UniqueName: \"kubernetes.io/projected/65537949-46ea-4a13-9bce-59f0b5b5f093-kube-api-access-n7tsk\") on node \"crc\" DevicePath \"\"" Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.824664 4624 generic.go:334] "Generic (PLEG): container finished" podID="65537949-46ea-4a13-9bce-59f0b5b5f093" containerID="a88ee158e1f84427892b67205eef62942611cda49aa72266cd707feb0ecfee64" exitCode=0 Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.824714 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sd2tk" event={"ID":"65537949-46ea-4a13-9bce-59f0b5b5f093","Type":"ContainerDied","Data":"a88ee158e1f84427892b67205eef62942611cda49aa72266cd707feb0ecfee64"} Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.824746 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sd2tk" event={"ID":"65537949-46ea-4a13-9bce-59f0b5b5f093","Type":"ContainerDied","Data":"e7303c7788e03060566a2bdf69de5b2e3bc2317200067215dcf3acd3287e9c1a"} Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.824756 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sd2tk" Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.824767 4624 scope.go:117] "RemoveContainer" containerID="a88ee158e1f84427892b67205eef62942611cda49aa72266cd707feb0ecfee64" Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.845310 4624 scope.go:117] "RemoveContainer" containerID="8aa8188fac3d160de2d13fd87b7d446dbd33bd3ae851cdf309a7b7b084858f63" Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.860194 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sd2tk"] Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.866567 4624 scope.go:117] "RemoveContainer" containerID="4dfcf16ac4224ee3a91211118a5126532038f7df2b2b990944c85d1a2aab8051" Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.874325 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sd2tk"] Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.886897 4624 scope.go:117] "RemoveContainer" containerID="a88ee158e1f84427892b67205eef62942611cda49aa72266cd707feb0ecfee64" Oct 08 14:36:41 crc kubenswrapper[4624]: E1008 14:36:41.887396 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a88ee158e1f84427892b67205eef62942611cda49aa72266cd707feb0ecfee64\": container with ID starting with a88ee158e1f84427892b67205eef62942611cda49aa72266cd707feb0ecfee64 not found: ID does not exist" containerID="a88ee158e1f84427892b67205eef62942611cda49aa72266cd707feb0ecfee64" Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.887428 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a88ee158e1f84427892b67205eef62942611cda49aa72266cd707feb0ecfee64"} err="failed to get container status \"a88ee158e1f84427892b67205eef62942611cda49aa72266cd707feb0ecfee64\": rpc error: code = NotFound desc = could not find container \"a88ee158e1f84427892b67205eef62942611cda49aa72266cd707feb0ecfee64\": container with ID starting with a88ee158e1f84427892b67205eef62942611cda49aa72266cd707feb0ecfee64 not found: ID does not exist" Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.887448 4624 scope.go:117] "RemoveContainer" containerID="8aa8188fac3d160de2d13fd87b7d446dbd33bd3ae851cdf309a7b7b084858f63" Oct 08 14:36:41 crc kubenswrapper[4624]: E1008 14:36:41.888037 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aa8188fac3d160de2d13fd87b7d446dbd33bd3ae851cdf309a7b7b084858f63\": container with ID starting with 8aa8188fac3d160de2d13fd87b7d446dbd33bd3ae851cdf309a7b7b084858f63 not found: ID does not exist" containerID="8aa8188fac3d160de2d13fd87b7d446dbd33bd3ae851cdf309a7b7b084858f63" Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.888154 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aa8188fac3d160de2d13fd87b7d446dbd33bd3ae851cdf309a7b7b084858f63"} err="failed to get container status \"8aa8188fac3d160de2d13fd87b7d446dbd33bd3ae851cdf309a7b7b084858f63\": rpc error: code = NotFound desc = could not find container \"8aa8188fac3d160de2d13fd87b7d446dbd33bd3ae851cdf309a7b7b084858f63\": container with ID starting with 8aa8188fac3d160de2d13fd87b7d446dbd33bd3ae851cdf309a7b7b084858f63 not found: ID does not exist" Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.888192 4624 scope.go:117] "RemoveContainer" 
containerID="4dfcf16ac4224ee3a91211118a5126532038f7df2b2b990944c85d1a2aab8051" Oct 08 14:36:41 crc kubenswrapper[4624]: E1008 14:36:41.888616 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dfcf16ac4224ee3a91211118a5126532038f7df2b2b990944c85d1a2aab8051\": container with ID starting with 4dfcf16ac4224ee3a91211118a5126532038f7df2b2b990944c85d1a2aab8051 not found: ID does not exist" containerID="4dfcf16ac4224ee3a91211118a5126532038f7df2b2b990944c85d1a2aab8051" Oct 08 14:36:41 crc kubenswrapper[4624]: I1008 14:36:41.888665 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dfcf16ac4224ee3a91211118a5126532038f7df2b2b990944c85d1a2aab8051"} err="failed to get container status \"4dfcf16ac4224ee3a91211118a5126532038f7df2b2b990944c85d1a2aab8051\": rpc error: code = NotFound desc = could not find container \"4dfcf16ac4224ee3a91211118a5126532038f7df2b2b990944c85d1a2aab8051\": container with ID starting with 4dfcf16ac4224ee3a91211118a5126532038f7df2b2b990944c85d1a2aab8051 not found: ID does not exist" Oct 08 14:36:42 crc kubenswrapper[4624]: I1008 14:36:42.725219 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-9vmx2" Oct 08 14:36:42 crc kubenswrapper[4624]: I1008 14:36:42.725263 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-9vmx2" Oct 08 14:36:42 crc kubenswrapper[4624]: I1008 14:36:42.750103 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-9vmx2" Oct 08 14:36:42 crc kubenswrapper[4624]: I1008 14:36:42.852288 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-9vmx2" Oct 08 14:36:43 crc kubenswrapper[4624]: I1008 14:36:43.518148 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65537949-46ea-4a13-9bce-59f0b5b5f093" path="/var/lib/kubelet/pods/65537949-46ea-4a13-9bce-59f0b5b5f093/volumes" Oct 08 14:36:45 crc kubenswrapper[4624]: I1008 14:36:45.631103 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj"] Oct 08 14:36:45 crc kubenswrapper[4624]: E1008 14:36:45.631325 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65537949-46ea-4a13-9bce-59f0b5b5f093" containerName="extract-utilities" Oct 08 14:36:45 crc kubenswrapper[4624]: I1008 14:36:45.631337 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="65537949-46ea-4a13-9bce-59f0b5b5f093" containerName="extract-utilities" Oct 08 14:36:45 crc kubenswrapper[4624]: E1008 14:36:45.631347 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65537949-46ea-4a13-9bce-59f0b5b5f093" containerName="registry-server" Oct 08 14:36:45 crc kubenswrapper[4624]: I1008 14:36:45.631353 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="65537949-46ea-4a13-9bce-59f0b5b5f093" containerName="registry-server" Oct 08 14:36:45 crc kubenswrapper[4624]: E1008 14:36:45.631366 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dc2ed19-149e-4394-9186-33ad7f6dcde8" containerName="registry-server" Oct 08 14:36:45 crc kubenswrapper[4624]: I1008 14:36:45.631374 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dc2ed19-149e-4394-9186-33ad7f6dcde8" 
containerName="registry-server" Oct 08 14:36:45 crc kubenswrapper[4624]: E1008 14:36:45.631385 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65537949-46ea-4a13-9bce-59f0b5b5f093" containerName="extract-content" Oct 08 14:36:45 crc kubenswrapper[4624]: I1008 14:36:45.631391 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="65537949-46ea-4a13-9bce-59f0b5b5f093" containerName="extract-content" Oct 08 14:36:45 crc kubenswrapper[4624]: I1008 14:36:45.631486 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dc2ed19-149e-4394-9186-33ad7f6dcde8" containerName="registry-server" Oct 08 14:36:45 crc kubenswrapper[4624]: I1008 14:36:45.631498 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="65537949-46ea-4a13-9bce-59f0b5b5f093" containerName="registry-server" Oct 08 14:36:45 crc kubenswrapper[4624]: I1008 14:36:45.632217 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj" Oct 08 14:36:45 crc kubenswrapper[4624]: I1008 14:36:45.633896 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-sdtrk" Oct 08 14:36:45 crc kubenswrapper[4624]: I1008 14:36:45.648856 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj"] Oct 08 14:36:45 crc kubenswrapper[4624]: I1008 14:36:45.661352 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a24c625-ff51-434c-9db2-3677640cbdef-util\") pod \"515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj\" (UID: \"6a24c625-ff51-434c-9db2-3677640cbdef\") " pod="openstack-operators/515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj" Oct 08 14:36:45 crc kubenswrapper[4624]: I1008 14:36:45.661417 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a24c625-ff51-434c-9db2-3677640cbdef-bundle\") pod \"515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj\" (UID: \"6a24c625-ff51-434c-9db2-3677640cbdef\") " pod="openstack-operators/515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj" Oct 08 14:36:45 crc kubenswrapper[4624]: I1008 14:36:45.661448 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbdxc\" (UniqueName: \"kubernetes.io/projected/6a24c625-ff51-434c-9db2-3677640cbdef-kube-api-access-nbdxc\") pod \"515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj\" (UID: \"6a24c625-ff51-434c-9db2-3677640cbdef\") " pod="openstack-operators/515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj" Oct 08 14:36:45 crc kubenswrapper[4624]: I1008 14:36:45.763088 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a24c625-ff51-434c-9db2-3677640cbdef-util\") pod \"515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj\" (UID: \"6a24c625-ff51-434c-9db2-3677640cbdef\") " pod="openstack-operators/515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj" Oct 08 14:36:45 crc kubenswrapper[4624]: I1008 14:36:45.763351 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/6a24c625-ff51-434c-9db2-3677640cbdef-bundle\") pod \"515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj\" (UID: \"6a24c625-ff51-434c-9db2-3677640cbdef\") " pod="openstack-operators/515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj" Oct 08 14:36:45 crc kubenswrapper[4624]: I1008 14:36:45.763396 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbdxc\" (UniqueName: \"kubernetes.io/projected/6a24c625-ff51-434c-9db2-3677640cbdef-kube-api-access-nbdxc\") pod \"515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj\" (UID: \"6a24c625-ff51-434c-9db2-3677640cbdef\") " pod="openstack-operators/515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj" Oct 08 14:36:45 crc kubenswrapper[4624]: I1008 14:36:45.763868 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a24c625-ff51-434c-9db2-3677640cbdef-util\") pod \"515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj\" (UID: \"6a24c625-ff51-434c-9db2-3677640cbdef\") " pod="openstack-operators/515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj" Oct 08 14:36:45 crc kubenswrapper[4624]: I1008 14:36:45.764137 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a24c625-ff51-434c-9db2-3677640cbdef-bundle\") pod \"515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj\" (UID: \"6a24c625-ff51-434c-9db2-3677640cbdef\") " pod="openstack-operators/515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj" Oct 08 14:36:45 crc kubenswrapper[4624]: I1008 14:36:45.785694 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbdxc\" (UniqueName: \"kubernetes.io/projected/6a24c625-ff51-434c-9db2-3677640cbdef-kube-api-access-nbdxc\") pod \"515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj\" (UID: \"6a24c625-ff51-434c-9db2-3677640cbdef\") " pod="openstack-operators/515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj" Oct 08 14:36:45 crc kubenswrapper[4624]: I1008 14:36:45.947823 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj" Oct 08 14:36:46 crc kubenswrapper[4624]: I1008 14:36:46.331003 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj"] Oct 08 14:36:46 crc kubenswrapper[4624]: I1008 14:36:46.859515 4624 generic.go:334] "Generic (PLEG): container finished" podID="6a24c625-ff51-434c-9db2-3677640cbdef" containerID="6c4c94fceb0694d8f4d2007a91545dcdb812d349c794cbd349ccae0575867618" exitCode=0 Oct 08 14:36:46 crc kubenswrapper[4624]: I1008 14:36:46.859557 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj" event={"ID":"6a24c625-ff51-434c-9db2-3677640cbdef","Type":"ContainerDied","Data":"6c4c94fceb0694d8f4d2007a91545dcdb812d349c794cbd349ccae0575867618"} Oct 08 14:36:46 crc kubenswrapper[4624]: I1008 14:36:46.859580 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj" event={"ID":"6a24c625-ff51-434c-9db2-3677640cbdef","Type":"ContainerStarted","Data":"a8007fd9f4f350dc7f227b9a1f7896f55780590f3379fc4e93b75bb57acb8597"} Oct 08 14:36:47 crc kubenswrapper[4624]: I1008 14:36:47.866062 4624 generic.go:334] "Generic (PLEG): container finished" podID="6a24c625-ff51-434c-9db2-3677640cbdef" containerID="3c712f7d7a3ed334c91c4e416ab78c68d50662d6230fb3546d4bb76b2d81575d" exitCode=0 Oct 08 14:36:47 crc kubenswrapper[4624]: I1008 14:36:47.866122 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj" event={"ID":"6a24c625-ff51-434c-9db2-3677640cbdef","Type":"ContainerDied","Data":"3c712f7d7a3ed334c91c4e416ab78c68d50662d6230fb3546d4bb76b2d81575d"} Oct 08 14:36:48 crc kubenswrapper[4624]: I1008 14:36:48.875860 4624 generic.go:334] "Generic (PLEG): container finished" podID="6a24c625-ff51-434c-9db2-3677640cbdef" containerID="391771756e3dd76eaab6d5c11a9e8a87c3f6c29a9a6a133c7ee19c4413359033" exitCode=0 Oct 08 14:36:48 crc kubenswrapper[4624]: I1008 14:36:48.875977 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj" event={"ID":"6a24c625-ff51-434c-9db2-3677640cbdef","Type":"ContainerDied","Data":"391771756e3dd76eaab6d5c11a9e8a87c3f6c29a9a6a133c7ee19c4413359033"} Oct 08 14:36:50 crc kubenswrapper[4624]: I1008 14:36:50.140484 4624 util.go:48] "No ready sandbox for pod can be found. 
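The 515c63aa…95bmxj pod above follows the OLM bundle-unpack pattern: three containers (later logged as pull, util and extract in the stale-state sweep) each run to completion with exitCode 0, handing data off through the shared "util" and "bundle" emptyDir volumes before the pod is deleted and unmounted. A rough shape of such a spec; beyond the volume names taken from the log, every value is an illustrative placeholder:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Two emptyDir scratch volumes, matching the "util" and "bundle"
	// mounts in the log entries above.
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{
			{Name: "util", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
			{Name: "bundle", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		},
		Containers: []corev1.Container{{
			Name:  "extract", // runs to completion with exitCode 0, like the containers above
			Image: "registry.example/bundle:latest",
			VolumeMounts: []corev1.VolumeMount{
				{Name: "util", MountPath: "/util"},
				{Name: "bundle", MountPath: "/bundle"},
			},
		}},
	}
	fmt.Println(len(spec.Volumes), "scratch volumes shared across completing containers")
}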
Need to start a new one" pod="openstack-operators/515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj" Oct 08 14:36:50 crc kubenswrapper[4624]: I1008 14:36:50.219339 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a24c625-ff51-434c-9db2-3677640cbdef-bundle\") pod \"6a24c625-ff51-434c-9db2-3677640cbdef\" (UID: \"6a24c625-ff51-434c-9db2-3677640cbdef\") " Oct 08 14:36:50 crc kubenswrapper[4624]: I1008 14:36:50.219439 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a24c625-ff51-434c-9db2-3677640cbdef-util\") pod \"6a24c625-ff51-434c-9db2-3677640cbdef\" (UID: \"6a24c625-ff51-434c-9db2-3677640cbdef\") " Oct 08 14:36:50 crc kubenswrapper[4624]: I1008 14:36:50.219494 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbdxc\" (UniqueName: \"kubernetes.io/projected/6a24c625-ff51-434c-9db2-3677640cbdef-kube-api-access-nbdxc\") pod \"6a24c625-ff51-434c-9db2-3677640cbdef\" (UID: \"6a24c625-ff51-434c-9db2-3677640cbdef\") " Oct 08 14:36:50 crc kubenswrapper[4624]: I1008 14:36:50.220070 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a24c625-ff51-434c-9db2-3677640cbdef-bundle" (OuterVolumeSpecName: "bundle") pod "6a24c625-ff51-434c-9db2-3677640cbdef" (UID: "6a24c625-ff51-434c-9db2-3677640cbdef"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:36:50 crc kubenswrapper[4624]: I1008 14:36:50.226788 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a24c625-ff51-434c-9db2-3677640cbdef-kube-api-access-nbdxc" (OuterVolumeSpecName: "kube-api-access-nbdxc") pod "6a24c625-ff51-434c-9db2-3677640cbdef" (UID: "6a24c625-ff51-434c-9db2-3677640cbdef"). InnerVolumeSpecName "kube-api-access-nbdxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:36:50 crc kubenswrapper[4624]: I1008 14:36:50.237385 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a24c625-ff51-434c-9db2-3677640cbdef-util" (OuterVolumeSpecName: "util") pod "6a24c625-ff51-434c-9db2-3677640cbdef" (UID: "6a24c625-ff51-434c-9db2-3677640cbdef"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:36:50 crc kubenswrapper[4624]: I1008 14:36:50.320720 4624 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a24c625-ff51-434c-9db2-3677640cbdef-util\") on node \"crc\" DevicePath \"\"" Oct 08 14:36:50 crc kubenswrapper[4624]: I1008 14:36:50.320760 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbdxc\" (UniqueName: \"kubernetes.io/projected/6a24c625-ff51-434c-9db2-3677640cbdef-kube-api-access-nbdxc\") on node \"crc\" DevicePath \"\"" Oct 08 14:36:50 crc kubenswrapper[4624]: I1008 14:36:50.320777 4624 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a24c625-ff51-434c-9db2-3677640cbdef-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:36:50 crc kubenswrapper[4624]: I1008 14:36:50.886871 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj" event={"ID":"6a24c625-ff51-434c-9db2-3677640cbdef","Type":"ContainerDied","Data":"a8007fd9f4f350dc7f227b9a1f7896f55780590f3379fc4e93b75bb57acb8597"} Oct 08 14:36:50 crc kubenswrapper[4624]: I1008 14:36:50.886907 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8007fd9f4f350dc7f227b9a1f7896f55780590f3379fc4e93b75bb57acb8597" Oct 08 14:36:50 crc kubenswrapper[4624]: I1008 14:36:50.887243 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj" Oct 08 14:36:53 crc kubenswrapper[4624]: I1008 14:36:53.401531 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4shxn"] Oct 08 14:36:53 crc kubenswrapper[4624]: E1008 14:36:53.402412 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a24c625-ff51-434c-9db2-3677640cbdef" containerName="pull" Oct 08 14:36:53 crc kubenswrapper[4624]: I1008 14:36:53.402429 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a24c625-ff51-434c-9db2-3677640cbdef" containerName="pull" Oct 08 14:36:53 crc kubenswrapper[4624]: E1008 14:36:53.402449 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a24c625-ff51-434c-9db2-3677640cbdef" containerName="util" Oct 08 14:36:53 crc kubenswrapper[4624]: I1008 14:36:53.402456 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a24c625-ff51-434c-9db2-3677640cbdef" containerName="util" Oct 08 14:36:53 crc kubenswrapper[4624]: E1008 14:36:53.402479 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a24c625-ff51-434c-9db2-3677640cbdef" containerName="extract" Oct 08 14:36:53 crc kubenswrapper[4624]: I1008 14:36:53.402487 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a24c625-ff51-434c-9db2-3677640cbdef" containerName="extract" Oct 08 14:36:53 crc kubenswrapper[4624]: I1008 14:36:53.402767 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a24c625-ff51-434c-9db2-3677640cbdef" containerName="extract" Oct 08 14:36:53 crc kubenswrapper[4624]: I1008 14:36:53.405009 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4shxn" Oct 08 14:36:53 crc kubenswrapper[4624]: I1008 14:36:53.441096 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4shxn"] Oct 08 14:36:53 crc kubenswrapper[4624]: I1008 14:36:53.458573 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04f3cc2a-3cee-47d9-bb95-6579966e6064-utilities\") pod \"certified-operators-4shxn\" (UID: \"04f3cc2a-3cee-47d9-bb95-6579966e6064\") " pod="openshift-marketplace/certified-operators-4shxn" Oct 08 14:36:53 crc kubenswrapper[4624]: I1008 14:36:53.458665 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tndt\" (UniqueName: \"kubernetes.io/projected/04f3cc2a-3cee-47d9-bb95-6579966e6064-kube-api-access-9tndt\") pod \"certified-operators-4shxn\" (UID: \"04f3cc2a-3cee-47d9-bb95-6579966e6064\") " pod="openshift-marketplace/certified-operators-4shxn" Oct 08 14:36:53 crc kubenswrapper[4624]: I1008 14:36:53.458718 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04f3cc2a-3cee-47d9-bb95-6579966e6064-catalog-content\") pod \"certified-operators-4shxn\" (UID: \"04f3cc2a-3cee-47d9-bb95-6579966e6064\") " pod="openshift-marketplace/certified-operators-4shxn" Oct 08 14:36:53 crc kubenswrapper[4624]: I1008 14:36:53.560066 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04f3cc2a-3cee-47d9-bb95-6579966e6064-catalog-content\") pod \"certified-operators-4shxn\" (UID: \"04f3cc2a-3cee-47d9-bb95-6579966e6064\") " pod="openshift-marketplace/certified-operators-4shxn" Oct 08 14:36:53 crc kubenswrapper[4624]: I1008 14:36:53.560375 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04f3cc2a-3cee-47d9-bb95-6579966e6064-utilities\") pod \"certified-operators-4shxn\" (UID: \"04f3cc2a-3cee-47d9-bb95-6579966e6064\") " pod="openshift-marketplace/certified-operators-4shxn" Oct 08 14:36:53 crc kubenswrapper[4624]: I1008 14:36:53.560457 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tndt\" (UniqueName: \"kubernetes.io/projected/04f3cc2a-3cee-47d9-bb95-6579966e6064-kube-api-access-9tndt\") pod \"certified-operators-4shxn\" (UID: \"04f3cc2a-3cee-47d9-bb95-6579966e6064\") " pod="openshift-marketplace/certified-operators-4shxn" Oct 08 14:36:53 crc kubenswrapper[4624]: I1008 14:36:53.560560 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04f3cc2a-3cee-47d9-bb95-6579966e6064-catalog-content\") pod \"certified-operators-4shxn\" (UID: \"04f3cc2a-3cee-47d9-bb95-6579966e6064\") " pod="openshift-marketplace/certified-operators-4shxn" Oct 08 14:36:53 crc kubenswrapper[4624]: I1008 14:36:53.561042 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04f3cc2a-3cee-47d9-bb95-6579966e6064-utilities\") pod \"certified-operators-4shxn\" (UID: \"04f3cc2a-3cee-47d9-bb95-6579966e6064\") " pod="openshift-marketplace/certified-operators-4shxn" Oct 08 14:36:53 crc kubenswrapper[4624]: I1008 14:36:53.579730 4624 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9tndt\" (UniqueName: \"kubernetes.io/projected/04f3cc2a-3cee-47d9-bb95-6579966e6064-kube-api-access-9tndt\") pod \"certified-operators-4shxn\" (UID: \"04f3cc2a-3cee-47d9-bb95-6579966e6064\") " pod="openshift-marketplace/certified-operators-4shxn" Oct 08 14:36:53 crc kubenswrapper[4624]: I1008 14:36:53.730575 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4shxn" Oct 08 14:36:54 crc kubenswrapper[4624]: I1008 14:36:54.275777 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4shxn"] Oct 08 14:36:54 crc kubenswrapper[4624]: W1008 14:36:54.280317 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04f3cc2a_3cee_47d9_bb95_6579966e6064.slice/crio-88da51b109fbfba7e951fb920215210503f3aeff656ca66bc912d14ce38acf24 WatchSource:0}: Error finding container 88da51b109fbfba7e951fb920215210503f3aeff656ca66bc912d14ce38acf24: Status 404 returned error can't find the container with id 88da51b109fbfba7e951fb920215210503f3aeff656ca66bc912d14ce38acf24 Oct 08 14:36:54 crc kubenswrapper[4624]: I1008 14:36:54.921460 4624 generic.go:334] "Generic (PLEG): container finished" podID="04f3cc2a-3cee-47d9-bb95-6579966e6064" containerID="8799ead9118a324ab33dfbbadcd9f8b973cf5f5369495921da880d0c410a141f" exitCode=0 Oct 08 14:36:54 crc kubenswrapper[4624]: I1008 14:36:54.921560 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4shxn" event={"ID":"04f3cc2a-3cee-47d9-bb95-6579966e6064","Type":"ContainerDied","Data":"8799ead9118a324ab33dfbbadcd9f8b973cf5f5369495921da880d0c410a141f"} Oct 08 14:36:54 crc kubenswrapper[4624]: I1008 14:36:54.921816 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4shxn" event={"ID":"04f3cc2a-3cee-47d9-bb95-6579966e6064","Type":"ContainerStarted","Data":"88da51b109fbfba7e951fb920215210503f3aeff656ca66bc912d14ce38acf24"} Oct 08 14:36:55 crc kubenswrapper[4624]: I1008 14:36:55.729711 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-74fc58d4cc-drvtb"] Oct 08 14:36:55 crc kubenswrapper[4624]: I1008 14:36:55.731203 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-74fc58d4cc-drvtb" Oct 08 14:36:55 crc kubenswrapper[4624]: I1008 14:36:55.735936 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-6bc5n" Oct 08 14:36:55 crc kubenswrapper[4624]: I1008 14:36:55.769466 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-74fc58d4cc-drvtb"] Oct 08 14:36:55 crc kubenswrapper[4624]: I1008 14:36:55.787822 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s4gh\" (UniqueName: \"kubernetes.io/projected/d7a5f232-4275-4277-8ec9-112eaadf6f4d-kube-api-access-6s4gh\") pod \"openstack-operator-controller-operator-74fc58d4cc-drvtb\" (UID: \"d7a5f232-4275-4277-8ec9-112eaadf6f4d\") " pod="openstack-operators/openstack-operator-controller-operator-74fc58d4cc-drvtb" Oct 08 14:36:55 crc kubenswrapper[4624]: I1008 14:36:55.888537 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s4gh\" (UniqueName: \"kubernetes.io/projected/d7a5f232-4275-4277-8ec9-112eaadf6f4d-kube-api-access-6s4gh\") pod \"openstack-operator-controller-operator-74fc58d4cc-drvtb\" (UID: \"d7a5f232-4275-4277-8ec9-112eaadf6f4d\") " pod="openstack-operators/openstack-operator-controller-operator-74fc58d4cc-drvtb" Oct 08 14:36:55 crc kubenswrapper[4624]: I1008 14:36:55.910983 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s4gh\" (UniqueName: \"kubernetes.io/projected/d7a5f232-4275-4277-8ec9-112eaadf6f4d-kube-api-access-6s4gh\") pod \"openstack-operator-controller-operator-74fc58d4cc-drvtb\" (UID: \"d7a5f232-4275-4277-8ec9-112eaadf6f4d\") " pod="openstack-operators/openstack-operator-controller-operator-74fc58d4cc-drvtb" Oct 08 14:36:56 crc kubenswrapper[4624]: I1008 14:36:56.058988 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-74fc58d4cc-drvtb" Oct 08 14:36:56 crc kubenswrapper[4624]: I1008 14:36:56.394351 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-74fc58d4cc-drvtb"] Oct 08 14:36:56 crc kubenswrapper[4624]: W1008 14:36:56.404960 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7a5f232_4275_4277_8ec9_112eaadf6f4d.slice/crio-d5c6083d6de9d16bda76601a6e817859e586d38a7901ec8d017516a3362cb4d9 WatchSource:0}: Error finding container d5c6083d6de9d16bda76601a6e817859e586d38a7901ec8d017516a3362cb4d9: Status 404 returned error can't find the container with id d5c6083d6de9d16bda76601a6e817859e586d38a7901ec8d017516a3362cb4d9 Oct 08 14:36:56 crc kubenswrapper[4624]: I1008 14:36:56.942711 4624 generic.go:334] "Generic (PLEG): container finished" podID="04f3cc2a-3cee-47d9-bb95-6579966e6064" containerID="f2aed4e323638f301ab94d38a95787549ea834fcfa008029ebfe7b7e7766dffc" exitCode=0 Oct 08 14:36:56 crc kubenswrapper[4624]: I1008 14:36:56.942810 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4shxn" event={"ID":"04f3cc2a-3cee-47d9-bb95-6579966e6064","Type":"ContainerDied","Data":"f2aed4e323638f301ab94d38a95787549ea834fcfa008029ebfe7b7e7766dffc"} Oct 08 14:36:56 crc kubenswrapper[4624]: I1008 14:36:56.965149 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-74fc58d4cc-drvtb" event={"ID":"d7a5f232-4275-4277-8ec9-112eaadf6f4d","Type":"ContainerStarted","Data":"d5c6083d6de9d16bda76601a6e817859e586d38a7901ec8d017516a3362cb4d9"} Oct 08 14:36:57 crc kubenswrapper[4624]: I1008 14:36:57.990374 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4shxn" event={"ID":"04f3cc2a-3cee-47d9-bb95-6579966e6064","Type":"ContainerStarted","Data":"5bda079cd194ac8d66ca81885b6ec1d38e9bc12e3656d143b45c3bc066bd2f82"} Oct 08 14:36:58 crc kubenswrapper[4624]: I1008 14:36:58.043363 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4shxn" podStartSLOduration=2.5317176789999998 podStartE2EDuration="5.043337769s" podCreationTimestamp="2025-10-08 14:36:53 +0000 UTC" firstStartedPulling="2025-10-08 14:36:54.922909972 +0000 UTC m=+840.073845039" lastFinishedPulling="2025-10-08 14:36:57.434530052 +0000 UTC m=+842.585465129" observedRunningTime="2025-10-08 14:36:58.037622612 +0000 UTC m=+843.188557689" watchObservedRunningTime="2025-10-08 14:36:58.043337769 +0000 UTC m=+843.194272846" Oct 08 14:37:02 crc kubenswrapper[4624]: I1008 14:37:02.044089 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-74fc58d4cc-drvtb" event={"ID":"d7a5f232-4275-4277-8ec9-112eaadf6f4d","Type":"ContainerStarted","Data":"3fc6a5c0828b4b131577045f8e966f03f45a4cfcbda2dcbd41fd46603e320cd8"} Oct 08 14:37:03 crc kubenswrapper[4624]: I1008 14:37:03.731263 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4shxn" Oct 08 14:37:03 crc kubenswrapper[4624]: I1008 14:37:03.731613 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4shxn" Oct 08 14:37:03 crc kubenswrapper[4624]: I1008 14:37:03.775080 4624 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4shxn" Oct 08 14:37:04 crc kubenswrapper[4624]: I1008 14:37:04.095541 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4shxn" Oct 08 14:37:06 crc kubenswrapper[4624]: I1008 14:37:06.187359 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4shxn"] Oct 08 14:37:06 crc kubenswrapper[4624]: I1008 14:37:06.188033 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4shxn" podUID="04f3cc2a-3cee-47d9-bb95-6579966e6064" containerName="registry-server" containerID="cri-o://5bda079cd194ac8d66ca81885b6ec1d38e9bc12e3656d143b45c3bc066bd2f82" gracePeriod=2 Oct 08 14:37:07 crc kubenswrapper[4624]: I1008 14:37:07.075401 4624 generic.go:334] "Generic (PLEG): container finished" podID="04f3cc2a-3cee-47d9-bb95-6579966e6064" containerID="5bda079cd194ac8d66ca81885b6ec1d38e9bc12e3656d143b45c3bc066bd2f82" exitCode=0 Oct 08 14:37:07 crc kubenswrapper[4624]: I1008 14:37:07.075473 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4shxn" event={"ID":"04f3cc2a-3cee-47d9-bb95-6579966e6064","Type":"ContainerDied","Data":"5bda079cd194ac8d66ca81885b6ec1d38e9bc12e3656d143b45c3bc066bd2f82"} Oct 08 14:37:11 crc kubenswrapper[4624]: I1008 14:37:11.411239 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4shxn" Oct 08 14:37:11 crc kubenswrapper[4624]: I1008 14:37:11.521658 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04f3cc2a-3cee-47d9-bb95-6579966e6064-catalog-content\") pod \"04f3cc2a-3cee-47d9-bb95-6579966e6064\" (UID: \"04f3cc2a-3cee-47d9-bb95-6579966e6064\") " Oct 08 14:37:11 crc kubenswrapper[4624]: I1008 14:37:11.521952 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tndt\" (UniqueName: \"kubernetes.io/projected/04f3cc2a-3cee-47d9-bb95-6579966e6064-kube-api-access-9tndt\") pod \"04f3cc2a-3cee-47d9-bb95-6579966e6064\" (UID: \"04f3cc2a-3cee-47d9-bb95-6579966e6064\") " Oct 08 14:37:11 crc kubenswrapper[4624]: I1008 14:37:11.521981 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04f3cc2a-3cee-47d9-bb95-6579966e6064-utilities\") pod \"04f3cc2a-3cee-47d9-bb95-6579966e6064\" (UID: \"04f3cc2a-3cee-47d9-bb95-6579966e6064\") " Oct 08 14:37:11 crc kubenswrapper[4624]: I1008 14:37:11.522820 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04f3cc2a-3cee-47d9-bb95-6579966e6064-utilities" (OuterVolumeSpecName: "utilities") pod "04f3cc2a-3cee-47d9-bb95-6579966e6064" (UID: "04f3cc2a-3cee-47d9-bb95-6579966e6064"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:37:11 crc kubenswrapper[4624]: I1008 14:37:11.534759 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04f3cc2a-3cee-47d9-bb95-6579966e6064-kube-api-access-9tndt" (OuterVolumeSpecName: "kube-api-access-9tndt") pod "04f3cc2a-3cee-47d9-bb95-6579966e6064" (UID: "04f3cc2a-3cee-47d9-bb95-6579966e6064"). InnerVolumeSpecName "kube-api-access-9tndt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:37:11 crc kubenswrapper[4624]: I1008 14:37:11.574925 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04f3cc2a-3cee-47d9-bb95-6579966e6064-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04f3cc2a-3cee-47d9-bb95-6579966e6064" (UID: "04f3cc2a-3cee-47d9-bb95-6579966e6064"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:37:11 crc kubenswrapper[4624]: I1008 14:37:11.623588 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04f3cc2a-3cee-47d9-bb95-6579966e6064-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:37:11 crc kubenswrapper[4624]: I1008 14:37:11.623667 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tndt\" (UniqueName: \"kubernetes.io/projected/04f3cc2a-3cee-47d9-bb95-6579966e6064-kube-api-access-9tndt\") on node \"crc\" DevicePath \"\"" Oct 08 14:37:11 crc kubenswrapper[4624]: I1008 14:37:11.623680 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04f3cc2a-3cee-47d9-bb95-6579966e6064-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:37:12 crc kubenswrapper[4624]: I1008 14:37:12.104718 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-74fc58d4cc-drvtb" event={"ID":"d7a5f232-4275-4277-8ec9-112eaadf6f4d","Type":"ContainerStarted","Data":"254254f6350d5e0bdcf75e6d7a8e6eb4d13c2b57d83c7675fa42b606b674c1b9"} Oct 08 14:37:12 crc kubenswrapper[4624]: I1008 14:37:12.105091 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-74fc58d4cc-drvtb" Oct 08 14:37:12 crc kubenswrapper[4624]: I1008 14:37:12.107144 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-74fc58d4cc-drvtb" Oct 08 14:37:12 crc kubenswrapper[4624]: I1008 14:37:12.107532 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4shxn" event={"ID":"04f3cc2a-3cee-47d9-bb95-6579966e6064","Type":"ContainerDied","Data":"88da51b109fbfba7e951fb920215210503f3aeff656ca66bc912d14ce38acf24"} Oct 08 14:37:12 crc kubenswrapper[4624]: I1008 14:37:12.107571 4624 scope.go:117] "RemoveContainer" containerID="5bda079cd194ac8d66ca81885b6ec1d38e9bc12e3656d143b45c3bc066bd2f82" Oct 08 14:37:12 crc kubenswrapper[4624]: I1008 14:37:12.107713 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4shxn" Oct 08 14:37:12 crc kubenswrapper[4624]: I1008 14:37:12.127003 4624 scope.go:117] "RemoveContainer" containerID="f2aed4e323638f301ab94d38a95787549ea834fcfa008029ebfe7b7e7766dffc" Oct 08 14:37:12 crc kubenswrapper[4624]: I1008 14:37:12.147738 4624 scope.go:117] "RemoveContainer" containerID="8799ead9118a324ab33dfbbadcd9f8b973cf5f5369495921da880d0c410a141f" Oct 08 14:37:12 crc kubenswrapper[4624]: I1008 14:37:12.157077 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-74fc58d4cc-drvtb" podStartSLOduration=2.127342627 podStartE2EDuration="17.157057226s" podCreationTimestamp="2025-10-08 14:36:55 +0000 UTC" firstStartedPulling="2025-10-08 14:36:56.40726938 +0000 UTC m=+841.558204457" lastFinishedPulling="2025-10-08 14:37:11.436983979 +0000 UTC m=+856.587919056" observedRunningTime="2025-10-08 14:37:12.151475763 +0000 UTC m=+857.302410850" watchObservedRunningTime="2025-10-08 14:37:12.157057226 +0000 UTC m=+857.307992303" Oct 08 14:37:12 crc kubenswrapper[4624]: I1008 14:37:12.178181 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4shxn"] Oct 08 14:37:12 crc kubenswrapper[4624]: I1008 14:37:12.181521 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4shxn"] Oct 08 14:37:13 crc kubenswrapper[4624]: I1008 14:37:13.473346 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04f3cc2a-3cee-47d9-bb95-6579966e6064" path="/var/lib/kubelet/pods/04f3cc2a-3cee-47d9-bb95-6579966e6064/volumes" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.673591 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-64f84fcdbb-qncx7"] Oct 08 14:37:27 crc kubenswrapper[4624]: E1008 14:37:27.674362 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f3cc2a-3cee-47d9-bb95-6579966e6064" containerName="registry-server" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.674374 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f3cc2a-3cee-47d9-bb95-6579966e6064" containerName="registry-server" Oct 08 14:37:27 crc kubenswrapper[4624]: E1008 14:37:27.674388 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f3cc2a-3cee-47d9-bb95-6579966e6064" containerName="extract-utilities" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.674394 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f3cc2a-3cee-47d9-bb95-6579966e6064" containerName="extract-utilities" Oct 08 14:37:27 crc kubenswrapper[4624]: E1008 14:37:27.674405 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f3cc2a-3cee-47d9-bb95-6579966e6064" containerName="extract-content" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.674411 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f3cc2a-3cee-47d9-bb95-6579966e6064" containerName="extract-content" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.674523 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="04f3cc2a-3cee-47d9-bb95-6579966e6064" containerName="registry-server" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.675116 4624 util.go:30] "No sandbox for pod can be found. 
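The E-level cpu_manager/memory_manager lines above are routine cleanup, not failures: on the next pod ADD after a deletion, kubelet sweeps its in-memory resource-assignment maps and drops entries for containers whose pods are gone ("RemoveStaleState: removing container", "Deleted CPUSet assignment"). The sweep amounts to a guarded map deletion, sketched here with stand-in types rather than kubelet's real state structures:

package main

import "fmt"

// staleSweep drops per-container state for pods that are no longer active,
// mirroring the "Deleted CPUSet assignment" entries in the log above.
func staleSweep(state map[string]map[string]string, activePods map[string]bool) {
	for podUID, containers := range state {
		if activePods[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("removing stale state podUID=%q containerName=%q\n", podUID, name)
		}
		delete(state, podUID) // deleting the current key during range is safe in Go
	}
}

func main() {
	state := map[string]map[string]string{
		"04f3cc2a-3cee-47d9-bb95-6579966e6064": {"registry-server": "cpuset:2-3"},
	}
	staleSweep(state, map[string]bool{}) // pod deleted: its assignment is swept
}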
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-qncx7" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.677661 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-59cdc64769-rdwmx"] Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.678493 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-rdwmx" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.678776 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-qwkdn" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.681040 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-fdrq6" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.691466 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-59cdc64769-rdwmx"] Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.721494 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-687df44cdb-ntt7z"] Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.725262 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-ntt7z" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.730520 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-5cfxm" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.764757 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-687df44cdb-ntt7z"] Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.810979 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7bb46cd7d-bxlpf"] Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.812039 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-bxlpf" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.815530 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-tmsz7" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.827709 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-64f84fcdbb-qncx7"] Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.837587 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcp5w\" (UniqueName: \"kubernetes.io/projected/afff9e1e-6c7c-42b8-8099-6817f813ddb5-kube-api-access-vcp5w\") pod \"barbican-operator-controller-manager-64f84fcdbb-qncx7\" (UID: \"afff9e1e-6c7c-42b8-8099-6817f813ddb5\") " pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-qncx7" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.837680 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4tg5\" (UniqueName: \"kubernetes.io/projected/7e4bdb15-7f2c-4a03-8882-00312974ef50-kube-api-access-c4tg5\") pod \"cinder-operator-controller-manager-59cdc64769-rdwmx\" (UID: \"7e4bdb15-7f2c-4a03-8882-00312974ef50\") " pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-rdwmx" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.837717 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2wbh\" (UniqueName: \"kubernetes.io/projected/c1f040ec-db8e-42de-8d6c-7758f2f45ecc-kube-api-access-j2wbh\") pod \"designate-operator-controller-manager-687df44cdb-ntt7z\" (UID: \"c1f040ec-db8e-42de-8d6c-7758f2f45ecc\") " pod="openstack-operators/designate-operator-controller-manager-687df44cdb-ntt7z" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.859656 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7bb46cd7d-bxlpf"] Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.875688 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-6d9967f8dd-zkxcm"] Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.876741 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-zkxcm" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.886341 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d74794d9b-5nntt"] Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.887956 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-5nntt" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.892394 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-dkbkd" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.892601 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-kv5m4" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.892715 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-6d9967f8dd-zkxcm"] Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.897714 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d74794d9b-5nntt"] Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.906588 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-74cb5cbc49-fnwtr"] Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.907482 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-fnwtr" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.926683 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-585fc5b659-bhvnd"] Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.927740 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-bhvnd" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.928970 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-6wfdr" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.935095 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-lr6cn" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.936313 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.936425 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-ddb98f99b-zljbd"] Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.943357 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-585fc5b659-bhvnd"] Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.943474 4624 util.go:30] "No sandbox for pod can be found. 
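Every controller-manager pod in this burst mounts exactly one kube-api-access-* volume: the projected service-account token that kubelet prepares through the VerifyControllerAttachedVolume → MountVolume.SetUp sequence logged here. Its definition is roughly the following sketch; the real projection also carries the cluster CA bundle and a namespace downward-API item, and the expiry value is illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3607)
	vol := corev1.Volume{
		Name: "kube-api-access-vcp5w", // volume name taken from the log above
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						ExpirationSeconds: &expiry,
						Path:              "token",
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name, "is a projected service-account token volume")
}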
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-zljbd" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.949572 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-6c68t" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.951029 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-74cb5cbc49-fnwtr"] Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.951860 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b00e056-2cc0-4eb2-85f9-8fc7197dc67a-cert\") pod \"infra-operator-controller-manager-585fc5b659-bhvnd\" (UID: \"6b00e056-2cc0-4eb2-85f9-8fc7197dc67a\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-bhvnd" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.951893 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2wbh\" (UniqueName: \"kubernetes.io/projected/c1f040ec-db8e-42de-8d6c-7758f2f45ecc-kube-api-access-j2wbh\") pod \"designate-operator-controller-manager-687df44cdb-ntt7z\" (UID: \"c1f040ec-db8e-42de-8d6c-7758f2f45ecc\") " pod="openstack-operators/designate-operator-controller-manager-687df44cdb-ntt7z" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.951926 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plp7q\" (UniqueName: \"kubernetes.io/projected/6b00e056-2cc0-4eb2-85f9-8fc7197dc67a-kube-api-access-plp7q\") pod \"infra-operator-controller-manager-585fc5b659-bhvnd\" (UID: \"6b00e056-2cc0-4eb2-85f9-8fc7197dc67a\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-bhvnd" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.951956 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwmdx\" (UniqueName: \"kubernetes.io/projected/d14acd1f-d497-4c71-8e2a-24c991118c01-kube-api-access-cwmdx\") pod \"glance-operator-controller-manager-7bb46cd7d-bxlpf\" (UID: \"d14acd1f-d497-4c71-8e2a-24c991118c01\") " pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-bxlpf" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.951981 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xxx9\" (UniqueName: \"kubernetes.io/projected/afba2503-8832-4a9f-8246-390f7ae79b71-kube-api-access-6xxx9\") pod \"keystone-operator-controller-manager-ddb98f99b-zljbd\" (UID: \"afba2503-8832-4a9f-8246-390f7ae79b71\") " pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-zljbd" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.952006 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85mrm\" (UniqueName: \"kubernetes.io/projected/0a8ab8f3-13b9-4f95-b540-ea49d2c5a261-kube-api-access-85mrm\") pod \"heat-operator-controller-manager-6d9967f8dd-zkxcm\" (UID: \"0a8ab8f3-13b9-4f95-b540-ea49d2c5a261\") " pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-zkxcm" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.952024 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfccq\" (UniqueName: 
\"kubernetes.io/projected/d957903e-a551-41a6-8360-9af30306414f-kube-api-access-tfccq\") pod \"ironic-operator-controller-manager-74cb5cbc49-fnwtr\" (UID: \"d957903e-a551-41a6-8360-9af30306414f\") " pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-fnwtr" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.952042 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcp5w\" (UniqueName: \"kubernetes.io/projected/afff9e1e-6c7c-42b8-8099-6817f813ddb5-kube-api-access-vcp5w\") pod \"barbican-operator-controller-manager-64f84fcdbb-qncx7\" (UID: \"afff9e1e-6c7c-42b8-8099-6817f813ddb5\") " pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-qncx7" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.952087 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4tg5\" (UniqueName: \"kubernetes.io/projected/7e4bdb15-7f2c-4a03-8882-00312974ef50-kube-api-access-c4tg5\") pod \"cinder-operator-controller-manager-59cdc64769-rdwmx\" (UID: \"7e4bdb15-7f2c-4a03-8882-00312974ef50\") " pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-rdwmx" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.958730 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-59578bc799-q7qf9"] Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.981548 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-59578bc799-q7qf9" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.987157 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-trmmc" Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.990694 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5777b4f897-fr9v7"] Oct 08 14:37:27 crc kubenswrapper[4624]: I1008 14:37:27.991802 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-fr9v7" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.005952 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-ddb98f99b-zljbd"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.010943 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2wbh\" (UniqueName: \"kubernetes.io/projected/c1f040ec-db8e-42de-8d6c-7758f2f45ecc-kube-api-access-j2wbh\") pod \"designate-operator-controller-manager-687df44cdb-ntt7z\" (UID: \"c1f040ec-db8e-42de-8d6c-7758f2f45ecc\") " pod="openstack-operators/designate-operator-controller-manager-687df44cdb-ntt7z" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.010992 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-xv2hb" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.011426 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4tg5\" (UniqueName: \"kubernetes.io/projected/7e4bdb15-7f2c-4a03-8882-00312974ef50-kube-api-access-c4tg5\") pod \"cinder-operator-controller-manager-59cdc64769-rdwmx\" (UID: \"7e4bdb15-7f2c-4a03-8882-00312974ef50\") " pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-rdwmx" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.018918 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-rdwmx" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.021709 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-797d478b46-tg58g"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.022786 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-tg58g" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.025810 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-57bb74c7bf-8zzlr"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.032328 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-8zzlr" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.032898 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-vcscc" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.036675 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-797d478b46-tg58g"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.041199 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcp5w\" (UniqueName: \"kubernetes.io/projected/afff9e1e-6c7c-42b8-8099-6817f813ddb5-kube-api-access-vcp5w\") pod \"barbican-operator-controller-manager-64f84fcdbb-qncx7\" (UID: \"afff9e1e-6c7c-42b8-8099-6817f813ddb5\") " pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-qncx7" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.045667 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-847jc" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.056606 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-59578bc799-q7qf9"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.057240 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b00e056-2cc0-4eb2-85f9-8fc7197dc67a-cert\") pod \"infra-operator-controller-manager-585fc5b659-bhvnd\" (UID: \"6b00e056-2cc0-4eb2-85f9-8fc7197dc67a\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-bhvnd" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.057275 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plp7q\" (UniqueName: \"kubernetes.io/projected/6b00e056-2cc0-4eb2-85f9-8fc7197dc67a-kube-api-access-plp7q\") pod \"infra-operator-controller-manager-585fc5b659-bhvnd\" (UID: \"6b00e056-2cc0-4eb2-85f9-8fc7197dc67a\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-bhvnd" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.057305 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z77v9\" (UniqueName: \"kubernetes.io/projected/9dbf7483-c352-4fd8-b0e0-96acf41616b0-kube-api-access-z77v9\") pod \"horizon-operator-controller-manager-6d74794d9b-5nntt\" (UID: \"9dbf7483-c352-4fd8-b0e0-96acf41616b0\") " pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-5nntt" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.057328 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwmdx\" (UniqueName: \"kubernetes.io/projected/d14acd1f-d497-4c71-8e2a-24c991118c01-kube-api-access-cwmdx\") pod \"glance-operator-controller-manager-7bb46cd7d-bxlpf\" (UID: \"d14acd1f-d497-4c71-8e2a-24c991118c01\") " pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-bxlpf" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.057351 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xxx9\" (UniqueName: \"kubernetes.io/projected/afba2503-8832-4a9f-8246-390f7ae79b71-kube-api-access-6xxx9\") pod \"keystone-operator-controller-manager-ddb98f99b-zljbd\" (UID: \"afba2503-8832-4a9f-8246-390f7ae79b71\") 
" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-zljbd" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.057375 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfccq\" (UniqueName: \"kubernetes.io/projected/d957903e-a551-41a6-8360-9af30306414f-kube-api-access-tfccq\") pod \"ironic-operator-controller-manager-74cb5cbc49-fnwtr\" (UID: \"d957903e-a551-41a6-8360-9af30306414f\") " pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-fnwtr" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.057391 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85mrm\" (UniqueName: \"kubernetes.io/projected/0a8ab8f3-13b9-4f95-b540-ea49d2c5a261-kube-api-access-85mrm\") pod \"heat-operator-controller-manager-6d9967f8dd-zkxcm\" (UID: \"0a8ab8f3-13b9-4f95-b540-ea49d2c5a261\") " pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-zkxcm" Oct 08 14:37:28 crc kubenswrapper[4624]: E1008 14:37:28.058227 4624 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Oct 08 14:37:28 crc kubenswrapper[4624]: E1008 14:37:28.058275 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b00e056-2cc0-4eb2-85f9-8fc7197dc67a-cert podName:6b00e056-2cc0-4eb2-85f9-8fc7197dc67a nodeName:}" failed. No retries permitted until 2025-10-08 14:37:28.558259987 +0000 UTC m=+873.709195064 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6b00e056-2cc0-4eb2-85f9-8fc7197dc67a-cert") pod "infra-operator-controller-manager-585fc5b659-bhvnd" (UID: "6b00e056-2cc0-4eb2-85f9-8fc7197dc67a") : secret "infra-operator-webhook-server-cert" not found Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.077943 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-ntt7z" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.102316 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5777b4f897-fr9v7"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.142357 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plp7q\" (UniqueName: \"kubernetes.io/projected/6b00e056-2cc0-4eb2-85f9-8fc7197dc67a-kube-api-access-plp7q\") pod \"infra-operator-controller-manager-585fc5b659-bhvnd\" (UID: \"6b00e056-2cc0-4eb2-85f9-8fc7197dc67a\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-bhvnd" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.142393 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfccq\" (UniqueName: \"kubernetes.io/projected/d957903e-a551-41a6-8360-9af30306414f-kube-api-access-tfccq\") pod \"ironic-operator-controller-manager-74cb5cbc49-fnwtr\" (UID: \"d957903e-a551-41a6-8360-9af30306414f\") " pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-fnwtr" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.146864 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85mrm\" (UniqueName: \"kubernetes.io/projected/0a8ab8f3-13b9-4f95-b540-ea49d2c5a261-kube-api-access-85mrm\") pod \"heat-operator-controller-manager-6d9967f8dd-zkxcm\" (UID: \"0a8ab8f3-13b9-4f95-b540-ea49d2c5a261\") " pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-zkxcm" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.146926 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xxx9\" (UniqueName: \"kubernetes.io/projected/afba2503-8832-4a9f-8246-390f7ae79b71-kube-api-access-6xxx9\") pod \"keystone-operator-controller-manager-ddb98f99b-zljbd\" (UID: \"afba2503-8832-4a9f-8246-390f7ae79b71\") " pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-zljbd" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.147603 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwmdx\" (UniqueName: \"kubernetes.io/projected/d14acd1f-d497-4c71-8e2a-24c991118c01-kube-api-access-cwmdx\") pod \"glance-operator-controller-manager-7bb46cd7d-bxlpf\" (UID: \"d14acd1f-d497-4c71-8e2a-24c991118c01\") " pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-bxlpf" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.151924 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-bxlpf" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.159202 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvjtg\" (UniqueName: \"kubernetes.io/projected/84583328-8cef-49aa-b812-6f550d1dd71f-kube-api-access-bvjtg\") pod \"nova-operator-controller-manager-57bb74c7bf-8zzlr\" (UID: \"84583328-8cef-49aa-b812-6f550d1dd71f\") " pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-8zzlr" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.159321 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z77v9\" (UniqueName: \"kubernetes.io/projected/9dbf7483-c352-4fd8-b0e0-96acf41616b0-kube-api-access-z77v9\") pod \"horizon-operator-controller-manager-6d74794d9b-5nntt\" (UID: \"9dbf7483-c352-4fd8-b0e0-96acf41616b0\") " pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-5nntt" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.159373 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfdxt\" (UniqueName: \"kubernetes.io/projected/c23e992e-ee8f-4d3a-9a3f-9f35a77f60fe-kube-api-access-tfdxt\") pod \"manila-operator-controller-manager-59578bc799-q7qf9\" (UID: \"c23e992e-ee8f-4d3a-9a3f-9f35a77f60fe\") " pod="openstack-operators/manila-operator-controller-manager-59578bc799-q7qf9" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.159396 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g7zp\" (UniqueName: \"kubernetes.io/projected/25e6b130-d820-475c-aae6-bed0dfbd0d0f-kube-api-access-7g7zp\") pod \"neutron-operator-controller-manager-797d478b46-tg58g\" (UID: \"25e6b130-d820-475c-aae6-bed0dfbd0d0f\") " pod="openstack-operators/neutron-operator-controller-manager-797d478b46-tg58g" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.159414 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfwvt\" (UniqueName: \"kubernetes.io/projected/5e88daf4-d403-4ba8-827f-a9972c5e40bf-kube-api-access-cfwvt\") pod \"mariadb-operator-controller-manager-5777b4f897-fr9v7\" (UID: \"5e88daf4-d403-4ba8-827f-a9972c5e40bf\") " pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-fr9v7" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.172703 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-hm59s"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.173762 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-hm59s" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.179090 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-8cr8c" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.209898 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-zkxcm" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.222382 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.223487 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.241229 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.241517 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-t7s96" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.242110 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z77v9\" (UniqueName: \"kubernetes.io/projected/9dbf7483-c352-4fd8-b0e0-96acf41616b0-kube-api-access-z77v9\") pod \"horizon-operator-controller-manager-6d74794d9b-5nntt\" (UID: \"9dbf7483-c352-4fd8-b0e0-96acf41616b0\") " pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-5nntt" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.242158 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-hm59s"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.242423 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-5nntt" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.245150 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-fnwtr" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.260605 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-57bb74c7bf-8zzlr"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.261491 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4fff91a-a1f8-4e66-9955-005bfa78dfe6-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd\" (UID: \"f4fff91a-a1f8-4e66-9955-005bfa78dfe6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.261531 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfdxt\" (UniqueName: \"kubernetes.io/projected/c23e992e-ee8f-4d3a-9a3f-9f35a77f60fe-kube-api-access-tfdxt\") pod \"manila-operator-controller-manager-59578bc799-q7qf9\" (UID: \"c23e992e-ee8f-4d3a-9a3f-9f35a77f60fe\") " pod="openstack-operators/manila-operator-controller-manager-59578bc799-q7qf9" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.261556 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmhft\" (UniqueName: \"kubernetes.io/projected/46bfb8aa-1ae5-43b4-88e9-5175655832aa-kube-api-access-qmhft\") pod \"octavia-operator-controller-manager-6d7c7ddf95-hm59s\" (UID: \"46bfb8aa-1ae5-43b4-88e9-5175655832aa\") " pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-hm59s" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.261572 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g7zp\" (UniqueName: \"kubernetes.io/projected/25e6b130-d820-475c-aae6-bed0dfbd0d0f-kube-api-access-7g7zp\") pod \"neutron-operator-controller-manager-797d478b46-tg58g\" (UID: \"25e6b130-d820-475c-aae6-bed0dfbd0d0f\") " pod="openstack-operators/neutron-operator-controller-manager-797d478b46-tg58g" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.261589 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfwvt\" (UniqueName: \"kubernetes.io/projected/5e88daf4-d403-4ba8-827f-a9972c5e40bf-kube-api-access-cfwvt\") pod \"mariadb-operator-controller-manager-5777b4f897-fr9v7\" (UID: \"5e88daf4-d403-4ba8-827f-a9972c5e40bf\") " pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-fr9v7" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.261620 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvjtg\" (UniqueName: \"kubernetes.io/projected/84583328-8cef-49aa-b812-6f550d1dd71f-kube-api-access-bvjtg\") pod \"nova-operator-controller-manager-57bb74c7bf-8zzlr\" (UID: \"84583328-8cef-49aa-b812-6f550d1dd71f\") " pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-8zzlr" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.261669 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnnwb\" (UniqueName: \"kubernetes.io/projected/f4fff91a-a1f8-4e66-9955-005bfa78dfe6-kube-api-access-lnnwb\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd\" (UID: \"f4fff91a-a1f8-4e66-9955-005bfa78dfe6\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.281929 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.304387 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-qncx7" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.306917 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f96f8c84-s6lmj"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.309037 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f96f8c84-s6lmj" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.311562 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-zljbd" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.313860 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfwvt\" (UniqueName: \"kubernetes.io/projected/5e88daf4-d403-4ba8-827f-a9972c5e40bf-kube-api-access-cfwvt\") pod \"mariadb-operator-controller-manager-5777b4f897-fr9v7\" (UID: \"5e88daf4-d403-4ba8-827f-a9972c5e40bf\") " pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-fr9v7" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.315070 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-mkzgg" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.340296 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g7zp\" (UniqueName: \"kubernetes.io/projected/25e6b130-d820-475c-aae6-bed0dfbd0d0f-kube-api-access-7g7zp\") pod \"neutron-operator-controller-manager-797d478b46-tg58g\" (UID: \"25e6b130-d820-475c-aae6-bed0dfbd0d0f\") " pod="openstack-operators/neutron-operator-controller-manager-797d478b46-tg58g" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.376318 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvjtg\" (UniqueName: \"kubernetes.io/projected/84583328-8cef-49aa-b812-6f550d1dd71f-kube-api-access-bvjtg\") pod \"nova-operator-controller-manager-57bb74c7bf-8zzlr\" (UID: \"84583328-8cef-49aa-b812-6f550d1dd71f\") " pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-8zzlr" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.380987 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfdxt\" (UniqueName: \"kubernetes.io/projected/c23e992e-ee8f-4d3a-9a3f-9f35a77f60fe-kube-api-access-tfdxt\") pod \"manila-operator-controller-manager-59578bc799-q7qf9\" (UID: \"c23e992e-ee8f-4d3a-9a3f-9f35a77f60fe\") " pod="openstack-operators/manila-operator-controller-manager-59578bc799-q7qf9" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.382036 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-664664cb68-5bwth"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.384226 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-664664cb68-5bwth" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.392964 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4fff91a-a1f8-4e66-9955-005bfa78dfe6-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd\" (UID: \"f4fff91a-a1f8-4e66-9955-005bfa78dfe6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.393025 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvdvm\" (UniqueName: \"kubernetes.io/projected/657a98dc-f421-4483-9354-28eeb59bb8a0-kube-api-access-tvdvm\") pod \"ovn-operator-controller-manager-6f96f8c84-s6lmj\" (UID: \"657a98dc-f421-4483-9354-28eeb59bb8a0\") " pod="openstack-operators/ovn-operator-controller-manager-6f96f8c84-s6lmj" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.393057 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmhft\" (UniqueName: \"kubernetes.io/projected/46bfb8aa-1ae5-43b4-88e9-5175655832aa-kube-api-access-qmhft\") pod \"octavia-operator-controller-manager-6d7c7ddf95-hm59s\" (UID: \"46bfb8aa-1ae5-43b4-88e9-5175655832aa\") " pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-hm59s" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.393125 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnnwb\" (UniqueName: \"kubernetes.io/projected/f4fff91a-a1f8-4e66-9955-005bfa78dfe6-kube-api-access-lnnwb\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd\" (UID: \"f4fff91a-a1f8-4e66-9955-005bfa78dfe6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd" Oct 08 14:37:28 crc kubenswrapper[4624]: E1008 14:37:28.393547 4624 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 08 14:37:28 crc kubenswrapper[4624]: E1008 14:37:28.393628 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4fff91a-a1f8-4e66-9955-005bfa78dfe6-cert podName:f4fff91a-a1f8-4e66-9955-005bfa78dfe6 nodeName:}" failed. No retries permitted until 2025-10-08 14:37:28.893611885 +0000 UTC m=+874.044546962 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f4fff91a-a1f8-4e66-9955-005bfa78dfe6-cert") pod "openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd" (UID: "f4fff91a-a1f8-4e66-9955-005bfa78dfe6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.403050 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-tfgqz" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.464240 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnnwb\" (UniqueName: \"kubernetes.io/projected/f4fff91a-a1f8-4e66-9955-005bfa78dfe6-kube-api-access-lnnwb\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd\" (UID: \"f4fff91a-a1f8-4e66-9955-005bfa78dfe6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.468247 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmhft\" (UniqueName: \"kubernetes.io/projected/46bfb8aa-1ae5-43b4-88e9-5175655832aa-kube-api-access-qmhft\") pod \"octavia-operator-controller-manager-6d7c7ddf95-hm59s\" (UID: \"46bfb8aa-1ae5-43b4-88e9-5175655832aa\") " pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-hm59s" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.474714 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vzbcj"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.506377 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvdvm\" (UniqueName: \"kubernetes.io/projected/657a98dc-f421-4483-9354-28eeb59bb8a0-kube-api-access-tvdvm\") pod \"ovn-operator-controller-manager-6f96f8c84-s6lmj\" (UID: \"657a98dc-f421-4483-9354-28eeb59bb8a0\") " pod="openstack-operators/ovn-operator-controller-manager-6f96f8c84-s6lmj" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.507913 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vzbcj" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.509673 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-fr9v7" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.516036 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-59578bc799-q7qf9" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.516679 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-h8tzk" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.567909 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-tg58g" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.568123 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f96f8c84-s6lmj"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.579397 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvdvm\" (UniqueName: \"kubernetes.io/projected/657a98dc-f421-4483-9354-28eeb59bb8a0-kube-api-access-tvdvm\") pod \"ovn-operator-controller-manager-6f96f8c84-s6lmj\" (UID: \"657a98dc-f421-4483-9354-28eeb59bb8a0\") " pod="openstack-operators/ovn-operator-controller-manager-6f96f8c84-s6lmj" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.586118 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-8zzlr" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.599414 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-664664cb68-5bwth"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.607523 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvhjh\" (UniqueName: \"kubernetes.io/projected/4fed85c8-e5c1-40db-9799-64b1705f9d86-kube-api-access-gvhjh\") pod \"swift-operator-controller-manager-5f4d5dfdc6-vzbcj\" (UID: \"4fed85c8-e5c1-40db-9799-64b1705f9d86\") " pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vzbcj" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.607569 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8p9d\" (UniqueName: \"kubernetes.io/projected/2e68c8fe-365a-4d66-bbbd-0cac98993f72-kube-api-access-n8p9d\") pod \"placement-operator-controller-manager-664664cb68-5bwth\" (UID: \"2e68c8fe-365a-4d66-bbbd-0cac98993f72\") " pod="openstack-operators/placement-operator-controller-manager-664664cb68-5bwth" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.607629 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b00e056-2cc0-4eb2-85f9-8fc7197dc67a-cert\") pod \"infra-operator-controller-manager-585fc5b659-bhvnd\" (UID: \"6b00e056-2cc0-4eb2-85f9-8fc7197dc67a\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-bhvnd" Oct 08 14:37:28 crc kubenswrapper[4624]: E1008 14:37:28.607804 4624 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Oct 08 14:37:28 crc kubenswrapper[4624]: E1008 14:37:28.607850 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b00e056-2cc0-4eb2-85f9-8fc7197dc67a-cert podName:6b00e056-2cc0-4eb2-85f9-8fc7197dc67a nodeName:}" failed. No retries permitted until 2025-10-08 14:37:29.607835713 +0000 UTC m=+874.758770790 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6b00e056-2cc0-4eb2-85f9-8fc7197dc67a-cert") pod "infra-operator-controller-manager-585fc5b659-bhvnd" (UID: "6b00e056-2cc0-4eb2-85f9-8fc7197dc67a") : secret "infra-operator-webhook-server-cert" not found Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.610315 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-hm59s" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.623911 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vzbcj"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.664960 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-74665f6cdc-wklj4"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.666093 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-74665f6cdc-wklj4" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.669090 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-xt62m" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.700517 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f96f8c84-s6lmj" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.708629 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-775776c574-m7brx"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.713905 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvhjh\" (UniqueName: \"kubernetes.io/projected/4fed85c8-e5c1-40db-9799-64b1705f9d86-kube-api-access-gvhjh\") pod \"swift-operator-controller-manager-5f4d5dfdc6-vzbcj\" (UID: \"4fed85c8-e5c1-40db-9799-64b1705f9d86\") " pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vzbcj" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.723837 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8p9d\" (UniqueName: \"kubernetes.io/projected/2e68c8fe-365a-4d66-bbbd-0cac98993f72-kube-api-access-n8p9d\") pod \"placement-operator-controller-manager-664664cb68-5bwth\" (UID: \"2e68c8fe-365a-4d66-bbbd-0cac98993f72\") " pod="openstack-operators/placement-operator-controller-manager-664664cb68-5bwth" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.736441 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-775776c574-m7brx" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.745020 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-jd7sd" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.746838 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-74665f6cdc-wklj4"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.776987 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8p9d\" (UniqueName: \"kubernetes.io/projected/2e68c8fe-365a-4d66-bbbd-0cac98993f72-kube-api-access-n8p9d\") pod \"placement-operator-controller-manager-664664cb68-5bwth\" (UID: \"2e68c8fe-365a-4d66-bbbd-0cac98993f72\") " pod="openstack-operators/placement-operator-controller-manager-664664cb68-5bwth" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.783536 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-775776c574-m7brx"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.802288 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvhjh\" (UniqueName: \"kubernetes.io/projected/4fed85c8-e5c1-40db-9799-64b1705f9d86-kube-api-access-gvhjh\") pod \"swift-operator-controller-manager-5f4d5dfdc6-vzbcj\" (UID: \"4fed85c8-e5c1-40db-9799-64b1705f9d86\") " pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vzbcj" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.807876 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5dd4499c96-l4cbt"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.809110 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5dd4499c96-l4cbt" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.818973 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-nnrkm" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.838657 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlft4\" (UniqueName: \"kubernetes.io/projected/de7ff9ef-39f5-4521-856d-28c2665e7893-kube-api-access-rlft4\") pod \"test-operator-controller-manager-74665f6cdc-wklj4\" (UID: \"de7ff9ef-39f5-4521-856d-28c2665e7893\") " pod="openstack-operators/test-operator-controller-manager-74665f6cdc-wklj4" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.838694 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44tgv\" (UniqueName: \"kubernetes.io/projected/2f4ed0ab-4cb6-4874-8c0d-cce4fdbfa7c9-kube-api-access-44tgv\") pod \"telemetry-operator-controller-manager-775776c574-m7brx\" (UID: \"2f4ed0ab-4cb6-4874-8c0d-cce4fdbfa7c9\") " pod="openstack-operators/telemetry-operator-controller-manager-775776c574-m7brx" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.880963 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vzbcj" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.884086 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5dd4499c96-l4cbt"] Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.960357 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4fff91a-a1f8-4e66-9955-005bfa78dfe6-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd\" (UID: \"f4fff91a-a1f8-4e66-9955-005bfa78dfe6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.960440 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp6n7\" (UniqueName: \"kubernetes.io/projected/60b85252-6e34-43c5-a048-52fe105f2f93-kube-api-access-pp6n7\") pod \"watcher-operator-controller-manager-5dd4499c96-l4cbt\" (UID: \"60b85252-6e34-43c5-a048-52fe105f2f93\") " pod="openstack-operators/watcher-operator-controller-manager-5dd4499c96-l4cbt" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.960491 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlft4\" (UniqueName: \"kubernetes.io/projected/de7ff9ef-39f5-4521-856d-28c2665e7893-kube-api-access-rlft4\") pod \"test-operator-controller-manager-74665f6cdc-wklj4\" (UID: \"de7ff9ef-39f5-4521-856d-28c2665e7893\") " pod="openstack-operators/test-operator-controller-manager-74665f6cdc-wklj4" Oct 08 14:37:28 crc kubenswrapper[4624]: I1008 14:37:28.960522 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44tgv\" (UniqueName: \"kubernetes.io/projected/2f4ed0ab-4cb6-4874-8c0d-cce4fdbfa7c9-kube-api-access-44tgv\") pod \"telemetry-operator-controller-manager-775776c574-m7brx\" (UID: \"2f4ed0ab-4cb6-4874-8c0d-cce4fdbfa7c9\") " pod="openstack-operators/telemetry-operator-controller-manager-775776c574-m7brx" Oct 08 14:37:28 crc kubenswrapper[4624]: E1008 14:37:28.960996 4624 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 08 14:37:28 crc kubenswrapper[4624]: E1008 14:37:28.961049 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4fff91a-a1f8-4e66-9955-005bfa78dfe6-cert podName:f4fff91a-a1f8-4e66-9955-005bfa78dfe6 nodeName:}" failed. No retries permitted until 2025-10-08 14:37:29.961032647 +0000 UTC m=+875.111967714 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f4fff91a-a1f8-4e66-9955-005bfa78dfe6-cert") pod "openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd" (UID: "f4fff91a-a1f8-4e66-9955-005bfa78dfe6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.002620 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7984bdc97c-5nw99"] Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.019225 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7984bdc97c-5nw99" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.025335 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.034253 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-sqqf4" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.037607 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7984bdc97c-5nw99"] Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.042856 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-664664cb68-5bwth" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.042960 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlft4\" (UniqueName: \"kubernetes.io/projected/de7ff9ef-39f5-4521-856d-28c2665e7893-kube-api-access-rlft4\") pod \"test-operator-controller-manager-74665f6cdc-wklj4\" (UID: \"de7ff9ef-39f5-4521-856d-28c2665e7893\") " pod="openstack-operators/test-operator-controller-manager-74665f6cdc-wklj4" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.063512 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp6n7\" (UniqueName: \"kubernetes.io/projected/60b85252-6e34-43c5-a048-52fe105f2f93-kube-api-access-pp6n7\") pod \"watcher-operator-controller-manager-5dd4499c96-l4cbt\" (UID: \"60b85252-6e34-43c5-a048-52fe105f2f93\") " pod="openstack-operators/watcher-operator-controller-manager-5dd4499c96-l4cbt" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.075275 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44tgv\" (UniqueName: \"kubernetes.io/projected/2f4ed0ab-4cb6-4874-8c0d-cce4fdbfa7c9-kube-api-access-44tgv\") pod \"telemetry-operator-controller-manager-775776c574-m7brx\" (UID: \"2f4ed0ab-4cb6-4874-8c0d-cce4fdbfa7c9\") " pod="openstack-operators/telemetry-operator-controller-manager-775776c574-m7brx" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.075352 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-dc5gv"] Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.076215 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-dc5gv" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.080079 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-r8qtq" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.084153 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-775776c574-m7brx" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.095214 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-dc5gv"] Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.129568 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp6n7\" (UniqueName: \"kubernetes.io/projected/60b85252-6e34-43c5-a048-52fe105f2f93-kube-api-access-pp6n7\") pod \"watcher-operator-controller-manager-5dd4499c96-l4cbt\" (UID: \"60b85252-6e34-43c5-a048-52fe105f2f93\") " pod="openstack-operators/watcher-operator-controller-manager-5dd4499c96-l4cbt" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.133106 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-59cdc64769-rdwmx"] Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.165943 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tfsp\" (UniqueName: \"kubernetes.io/projected/3b136221-5f6b-451f-807a-5b66f856daa4-kube-api-access-5tfsp\") pod \"openstack-operator-controller-manager-7984bdc97c-5nw99\" (UID: \"3b136221-5f6b-451f-807a-5b66f856daa4\") " pod="openstack-operators/openstack-operator-controller-manager-7984bdc97c-5nw99" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.166072 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b136221-5f6b-451f-807a-5b66f856daa4-cert\") pod \"openstack-operator-controller-manager-7984bdc97c-5nw99\" (UID: \"3b136221-5f6b-451f-807a-5b66f856daa4\") " pod="openstack-operators/openstack-operator-controller-manager-7984bdc97c-5nw99" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.177712 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5dd4499c96-l4cbt" Oct 08 14:37:29 crc kubenswrapper[4624]: W1008 14:37:29.234979 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e4bdb15_7f2c_4a03_8882_00312974ef50.slice/crio-0df8397b67b8e09206d17a94e5e992a9b2d901a7f85927c69c65af16f3b4734d WatchSource:0}: Error finding container 0df8397b67b8e09206d17a94e5e992a9b2d901a7f85927c69c65af16f3b4734d: Status 404 returned error can't find the container with id 0df8397b67b8e09206d17a94e5e992a9b2d901a7f85927c69c65af16f3b4734d Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.269381 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52jjl\" (UniqueName: \"kubernetes.io/projected/5b5f14f4-2722-46d2-9aa4-958caf004e89-kube-api-access-52jjl\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-dc5gv\" (UID: \"5b5f14f4-2722-46d2-9aa4-958caf004e89\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-dc5gv" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.269488 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b136221-5f6b-451f-807a-5b66f856daa4-cert\") pod \"openstack-operator-controller-manager-7984bdc97c-5nw99\" (UID: \"3b136221-5f6b-451f-807a-5b66f856daa4\") " pod="openstack-operators/openstack-operator-controller-manager-7984bdc97c-5nw99" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.269532 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tfsp\" (UniqueName: \"kubernetes.io/projected/3b136221-5f6b-451f-807a-5b66f856daa4-kube-api-access-5tfsp\") pod \"openstack-operator-controller-manager-7984bdc97c-5nw99\" (UID: \"3b136221-5f6b-451f-807a-5b66f856daa4\") " pod="openstack-operators/openstack-operator-controller-manager-7984bdc97c-5nw99" Oct 08 14:37:29 crc kubenswrapper[4624]: E1008 14:37:29.270196 4624 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Oct 08 14:37:29 crc kubenswrapper[4624]: E1008 14:37:29.270235 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b136221-5f6b-451f-807a-5b66f856daa4-cert podName:3b136221-5f6b-451f-807a-5b66f856daa4 nodeName:}" failed. No retries permitted until 2025-10-08 14:37:29.770222277 +0000 UTC m=+874.921157354 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3b136221-5f6b-451f-807a-5b66f856daa4-cert") pod "openstack-operator-controller-manager-7984bdc97c-5nw99" (UID: "3b136221-5f6b-451f-807a-5b66f856daa4") : secret "webhook-server-cert" not found Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.290792 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-74665f6cdc-wklj4" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.291160 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tfsp\" (UniqueName: \"kubernetes.io/projected/3b136221-5f6b-451f-807a-5b66f856daa4-kube-api-access-5tfsp\") pod \"openstack-operator-controller-manager-7984bdc97c-5nw99\" (UID: \"3b136221-5f6b-451f-807a-5b66f856daa4\") " pod="openstack-operators/openstack-operator-controller-manager-7984bdc97c-5nw99" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.291178 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-rdwmx" event={"ID":"7e4bdb15-7f2c-4a03-8882-00312974ef50","Type":"ContainerStarted","Data":"0df8397b67b8e09206d17a94e5e992a9b2d901a7f85927c69c65af16f3b4734d"} Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.318978 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-687df44cdb-ntt7z"] Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.370809 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52jjl\" (UniqueName: \"kubernetes.io/projected/5b5f14f4-2722-46d2-9aa4-958caf004e89-kube-api-access-52jjl\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-dc5gv\" (UID: \"5b5f14f4-2722-46d2-9aa4-958caf004e89\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-dc5gv" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.400338 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52jjl\" (UniqueName: \"kubernetes.io/projected/5b5f14f4-2722-46d2-9aa4-958caf004e89-kube-api-access-52jjl\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-dc5gv\" (UID: \"5b5f14f4-2722-46d2-9aa4-958caf004e89\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-dc5gv" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.632741 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-dc5gv" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.680593 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b00e056-2cc0-4eb2-85f9-8fc7197dc67a-cert\") pod \"infra-operator-controller-manager-585fc5b659-bhvnd\" (UID: \"6b00e056-2cc0-4eb2-85f9-8fc7197dc67a\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-bhvnd" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.693972 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b00e056-2cc0-4eb2-85f9-8fc7197dc67a-cert\") pod \"infra-operator-controller-manager-585fc5b659-bhvnd\" (UID: \"6b00e056-2cc0-4eb2-85f9-8fc7197dc67a\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-bhvnd" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.768370 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-bhvnd" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.785136 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b136221-5f6b-451f-807a-5b66f856daa4-cert\") pod \"openstack-operator-controller-manager-7984bdc97c-5nw99\" (UID: \"3b136221-5f6b-451f-807a-5b66f856daa4\") " pod="openstack-operators/openstack-operator-controller-manager-7984bdc97c-5nw99" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.788968 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7bb46cd7d-bxlpf"] Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.794093 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b136221-5f6b-451f-807a-5b66f856daa4-cert\") pod \"openstack-operator-controller-manager-7984bdc97c-5nw99\" (UID: \"3b136221-5f6b-451f-807a-5b66f856daa4\") " pod="openstack-operators/openstack-operator-controller-manager-7984bdc97c-5nw99" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.900594 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5777b4f897-fr9v7"] Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.913127 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7984bdc97c-5nw99" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.917094 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-57bb74c7bf-8zzlr"] Oct 08 14:37:29 crc kubenswrapper[4624]: W1008 14:37:29.922085 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e88daf4_d403_4ba8_827f_a9972c5e40bf.slice/crio-4e32492d0a50b388e9654e0d30c253bfa7771e9d4d9b95283624f113a524b268 WatchSource:0}: Error finding container 4e32492d0a50b388e9654e0d30c253bfa7771e9d4d9b95283624f113a524b268: Status 404 returned error can't find the container with id 4e32492d0a50b388e9654e0d30c253bfa7771e9d4d9b95283624f113a524b268 Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.937072 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-6d9967f8dd-zkxcm"] Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.966569 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-74cb5cbc49-fnwtr"] Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.990129 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4fff91a-a1f8-4e66-9955-005bfa78dfe6-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd\" (UID: \"f4fff91a-a1f8-4e66-9955-005bfa78dfe6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd" Oct 08 14:37:29 crc kubenswrapper[4624]: I1008 14:37:29.998328 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4fff91a-a1f8-4e66-9955-005bfa78dfe6-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd\" (UID: \"f4fff91a-a1f8-4e66-9955-005bfa78dfe6\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd" Oct 08 14:37:30 crc kubenswrapper[4624]: I1008 14:37:30.181968 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd" Oct 08 14:37:30 crc kubenswrapper[4624]: I1008 14:37:30.325578 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-fr9v7" event={"ID":"5e88daf4-d403-4ba8-827f-a9972c5e40bf","Type":"ContainerStarted","Data":"4e32492d0a50b388e9654e0d30c253bfa7771e9d4d9b95283624f113a524b268"} Oct 08 14:37:30 crc kubenswrapper[4624]: I1008 14:37:30.352807 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-bxlpf" event={"ID":"d14acd1f-d497-4c71-8e2a-24c991118c01","Type":"ContainerStarted","Data":"5aab36fe119e1a469b7bca50ffbbb2dcb9a5164094a6407ab0367470481d77d0"} Oct 08 14:37:30 crc kubenswrapper[4624]: I1008 14:37:30.359107 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-zkxcm" event={"ID":"0a8ab8f3-13b9-4f95-b540-ea49d2c5a261","Type":"ContainerStarted","Data":"d0c3bd57f7ba833e2308ba8109debf669708b456fc39d7127dbffb58d5cb88ef"} Oct 08 14:37:30 crc kubenswrapper[4624]: I1008 14:37:30.367443 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-8zzlr" event={"ID":"84583328-8cef-49aa-b812-6f550d1dd71f","Type":"ContainerStarted","Data":"11408b01f40c1e609a56a14bba5fa19e39d4d5131d305255ab3e4ba1ec8a3c4d"} Oct 08 14:37:30 crc kubenswrapper[4624]: I1008 14:37:30.370001 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f96f8c84-s6lmj"] Oct 08 14:37:30 crc kubenswrapper[4624]: I1008 14:37:30.372618 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-ntt7z" event={"ID":"c1f040ec-db8e-42de-8d6c-7758f2f45ecc","Type":"ContainerStarted","Data":"5be361e00c99a13b105f421db88f7c693408fd103e0a0a94f78ab8fc0ab694ec"} Oct 08 14:37:30 crc kubenswrapper[4624]: I1008 14:37:30.391893 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-fnwtr" event={"ID":"d957903e-a551-41a6-8360-9af30306414f","Type":"ContainerStarted","Data":"27b016f87ade5344e4c6afbc2395dc3c6e4cdfbe6229c629faede89abf100d11"} Oct 08 14:37:30 crc kubenswrapper[4624]: I1008 14:37:30.425826 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d74794d9b-5nntt"] Oct 08 14:37:30 crc kubenswrapper[4624]: W1008 14:37:30.430403 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dbf7483_c352_4fd8_b0e0_96acf41616b0.slice/crio-d06f599c61eb3f241ec658b47746eca3d410ae5087e75f9d06f3244856f00a84 WatchSource:0}: Error finding container d06f599c61eb3f241ec658b47746eca3d410ae5087e75f9d06f3244856f00a84: Status 404 returned error can't find the container with id d06f599c61eb3f241ec658b47746eca3d410ae5087e75f9d06f3244856f00a84 Oct 08 14:37:30 crc kubenswrapper[4624]: I1008 14:37:30.437914 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vzbcj"] Oct 08 14:37:30 crc 
kubenswrapper[4624]: I1008 14:37:30.465561 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-797d478b46-tg58g"] Oct 08 14:37:30 crc kubenswrapper[4624]: I1008 14:37:30.469142 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-hm59s"] Oct 08 14:37:30 crc kubenswrapper[4624]: I1008 14:37:30.482011 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-775776c574-m7brx"] Oct 08 14:37:30 crc kubenswrapper[4624]: I1008 14:37:30.492186 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-664664cb68-5bwth"] Oct 08 14:37:30 crc kubenswrapper[4624]: I1008 14:37:30.495310 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-74665f6cdc-wklj4"] Oct 08 14:37:30 crc kubenswrapper[4624]: I1008 14:37:30.515572 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-59578bc799-q7qf9"] Oct 08 14:37:30 crc kubenswrapper[4624]: W1008 14:37:30.538530 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde7ff9ef_39f5_4521_856d_28c2665e7893.slice/crio-f55fba5061bc08f65e1bac7fbb60eca0132e69f1954a6c70e3c17e3f0a4319e1 WatchSource:0}: Error finding container f55fba5061bc08f65e1bac7fbb60eca0132e69f1954a6c70e3c17e3f0a4319e1: Status 404 returned error can't find the container with id f55fba5061bc08f65e1bac7fbb60eca0132e69f1954a6c70e3c17e3f0a4319e1 Oct 08 14:37:30 crc kubenswrapper[4624]: I1008 14:37:30.543311 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-ddb98f99b-zljbd"] Oct 08 14:37:30 crc kubenswrapper[4624]: I1008 14:37:30.555117 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-dc5gv"] Oct 08 14:37:30 crc kubenswrapper[4624]: I1008 14:37:30.559196 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5dd4499c96-l4cbt"] Oct 08 14:37:30 crc kubenswrapper[4624]: I1008 14:37:30.572934 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-64f84fcdbb-qncx7"] Oct 08 14:37:30 crc kubenswrapper[4624]: E1008 14:37:30.575873 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:e4ae07e859166fc5e2cb4f8e0e2c3358b9d2e2d6721a3864d2e0c651d36698ca,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pp6n7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5dd4499c96-l4cbt_openstack-operators(60b85252-6e34-43c5-a048-52fe105f2f93): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 08 14:37:30 crc kubenswrapper[4624]: E1008 14:37:30.576012 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-52jjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-dc5gv_openstack-operators(5b5f14f4-2722-46d2-9aa4-958caf004e89): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 08 14:37:30 crc kubenswrapper[4624]: E1008 14:37:30.576161 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:582f7b1e411961b69f2e3c6b346aa25759b89f7720ed3fade1d363bf5d2dffc8,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tfdxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-59578bc799-q7qf9_openstack-operators(c23e992e-ee8f-4d3a-9a3f-9f35a77f60fe): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 
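
The three "Unhandled Error" container dumps above (watcher, rabbitmq-cluster, manila) all end the same way: ErrImagePull: pull QPS exceeded. That is kubelet's own client-side registry rate limit, not a registry-side failure; with the upstream defaults of registryPullQPS: 5 and registryBurst: 10 (assumed here, verify against this node's kubelet config), a cold start requesting roughly twenty operator images at once drains the token bucket, and the overflow pulls fail immediately and fall through to back-off. A toy token-bucket sketch of that behavior (not kubelet's actual implementation):

    // pullqps.go: token-bucket model of registryPullQPS / registryBurst.
    // The values 5 and 10 are the assumed upstream kubelet defaults.
    package main

    import (
        "fmt"
        "time"
    )

    type bucket struct {
        tokens, burst, qps float64
        last               time.Time
    }

    func (b *bucket) allow(now time.Time) bool {
        b.tokens += now.Sub(b.last).Seconds() * b.qps // refill at qps tokens/sec
        if b.tokens > b.burst {
            b.tokens = b.burst
        }
        b.last = now
        if b.tokens < 1 {
            return false // surfaces as ErrImagePull: "pull QPS exceeded"
        }
        b.tokens--
        return true
    }

    func main() {
        b := &bucket{tokens: 10, burst: 10, qps: 5, last: time.Now()}
        now := time.Now()
        for i := 1; i <= 20; i++ { // ~20 operator images requested at once
            fmt.Printf("pull %2d allowed=%v\n", i, b.allow(now))
        }
    }

With all twenty requests arriving in the same instant, the first ten are admitted and the rest are rejected, which matches the split in this log between pods whose images start pulling and pods that go straight to an error.
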
08 14:37:30 crc kubenswrapper[4624]: E1008 14:37:30.581301 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-dc5gv" podUID="5b5f14f4-2722-46d2-9aa4-958caf004e89" Oct 08 14:37:30 crc kubenswrapper[4624]: E1008 14:37:30.591358 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:79b43a69884631c635d2164b95a2d4ec68f5cb33f96da14764f1c710880f3997,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6xxx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-ddb98f99b-zljbd_openstack-operators(afba2503-8832-4a9f-8246-390f7ae79b71): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 08 14:37:30 crc kubenswrapper[4624]: W1008 14:37:30.605093 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafff9e1e_6c7c_42b8_8099_6817f813ddb5.slice/crio-bd24664fcfc56f500f3cb3d14598df4e52af4363308e1423c98917ebf7682084 WatchSource:0}: Error finding container bd24664fcfc56f500f3cb3d14598df4e52af4363308e1423c98917ebf7682084: Status 404 returned error can't find the container with id bd24664fcfc56f500f3cb3d14598df4e52af4363308e1423c98917ebf7682084 Oct 08 14:37:30 crc kubenswrapper[4624]: I1008 14:37:30.736667 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/infra-operator-controller-manager-585fc5b659-bhvnd"] Oct 08 14:37:30 crc kubenswrapper[4624]: W1008 14:37:30.751669 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b00e056_2cc0_4eb2_85f9_8fc7197dc67a.slice/crio-0a8449e4d841323a90eab510065ddc940cacb676fec41a69637ae7af5bc3041c WatchSource:0}: Error finding container 0a8449e4d841323a90eab510065ddc940cacb676fec41a69637ae7af5bc3041c: Status 404 returned error can't find the container with id 0a8449e4d841323a90eab510065ddc940cacb676fec41a69637ae7af5bc3041c Oct 08 14:37:30 crc kubenswrapper[4624]: E1008 14:37:30.758662 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:5cfb2ae1092445950b39dd59caa9a8c9367f42fb8353a8c3848d3bc729f24492,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plp7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-585fc5b659-bhvnd_openstack-operators(6b00e056-2cc0-4eb2-85f9-8fc7197dc67a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 08 14:37:30 crc kubenswrapper[4624]: I1008 14:37:30.783483 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7984bdc97c-5nw99"] Oct 08 14:37:30 crc kubenswrapper[4624]: I1008 14:37:30.839871 4624 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd"] Oct 08 14:37:30 crc kubenswrapper[4624]: W1008 14:37:30.850396 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b136221_5f6b_451f_807a_5b66f856daa4.slice/crio-6d1fce1d0aef1f80a81c664ebfbfd71064af6c01b69db31ea562d5513236c90a WatchSource:0}: Error finding container 6d1fce1d0aef1f80a81c664ebfbfd71064af6c01b69db31ea562d5513236c90a: Status 404 returned error can't find the container with id 6d1fce1d0aef1f80a81c664ebfbfd71064af6c01b69db31ea562d5513236c90a Oct 08 14:37:30 crc kubenswrapper[4624]: W1008 14:37:30.877832 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4fff91a_a1f8_4e66_9955_005bfa78dfe6.slice/crio-ae7babfe27611cab05a5a8588e9c9bc3f1d296bc56ba78d0853efaaa503ede69 WatchSource:0}: Error finding container ae7babfe27611cab05a5a8588e9c9bc3f1d296bc56ba78d0853efaaa503ede69: Status 404 returned error can't find the container with id ae7babfe27611cab05a5a8588e9c9bc3f1d296bc56ba78d0853efaaa503ede69 Oct 08 14:37:30 crc kubenswrapper[4624]: E1008 14:37:30.961434 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5dd4499c96-l4cbt" podUID="60b85252-6e34-43c5-a048-52fe105f2f93" Oct 08 14:37:30 crc kubenswrapper[4624]: E1008 14:37:30.962249 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-zljbd" podUID="afba2503-8832-4a9f-8246-390f7ae79b71" Oct 08 14:37:30 crc kubenswrapper[4624]: E1008 14:37:30.979771 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-59578bc799-q7qf9" podUID="c23e992e-ee8f-4d3a-9a3f-9f35a77f60fe" Oct 08 14:37:31 crc kubenswrapper[4624]: E1008 14:37:31.021258 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-bhvnd" podUID="6b00e056-2cc0-4eb2-85f9-8fc7197dc67a" Oct 08 14:37:31 crc kubenswrapper[4624]: I1008 14:37:31.417370 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7984bdc97c-5nw99" event={"ID":"3b136221-5f6b-451f-807a-5b66f856daa4","Type":"ContainerStarted","Data":"c770c0ba811bb1f2488316309c838abe5739d50fa38194e6c51862aafb6e544f"} Oct 08 14:37:31 crc kubenswrapper[4624]: I1008 14:37:31.417712 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7984bdc97c-5nw99" event={"ID":"3b136221-5f6b-451f-807a-5b66f856daa4","Type":"ContainerStarted","Data":"6d1fce1d0aef1f80a81c664ebfbfd71064af6c01b69db31ea562d5513236c90a"} Oct 08 14:37:31 crc kubenswrapper[4624]: I1008 14:37:31.420411 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-tg58g" 
event={"ID":"25e6b130-d820-475c-aae6-bed0dfbd0d0f","Type":"ContainerStarted","Data":"6e4c558af048015cd7513a2601ea0f9e502501862f07c31cee09aedf91848607"} Oct 08 14:37:31 crc kubenswrapper[4624]: I1008 14:37:31.436628 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-664664cb68-5bwth" event={"ID":"2e68c8fe-365a-4d66-bbbd-0cac98993f72","Type":"ContainerStarted","Data":"e9764f02ba449255307c8b6c093e90b6ee12008e26ca785ddb88b016d173d6e1"} Oct 08 14:37:31 crc kubenswrapper[4624]: I1008 14:37:31.446735 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-74665f6cdc-wklj4" event={"ID":"de7ff9ef-39f5-4521-856d-28c2665e7893","Type":"ContainerStarted","Data":"f55fba5061bc08f65e1bac7fbb60eca0132e69f1954a6c70e3c17e3f0a4319e1"} Oct 08 14:37:31 crc kubenswrapper[4624]: I1008 14:37:31.450006 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5dd4499c96-l4cbt" event={"ID":"60b85252-6e34-43c5-a048-52fe105f2f93","Type":"ContainerStarted","Data":"9d28a9b3824e6a1e83d1fb07522ed8e147b66ee2c327c3ffedcdcbc2c3a00fca"} Oct 08 14:37:31 crc kubenswrapper[4624]: I1008 14:37:31.450058 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5dd4499c96-l4cbt" event={"ID":"60b85252-6e34-43c5-a048-52fe105f2f93","Type":"ContainerStarted","Data":"fb945f69dfbc035dc56055f79a8556ef18569a59cc19534397520e06ad56e604"} Oct 08 14:37:31 crc kubenswrapper[4624]: E1008 14:37:31.461110 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:e4ae07e859166fc5e2cb4f8e0e2c3358b9d2e2d6721a3864d2e0c651d36698ca\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5dd4499c96-l4cbt" podUID="60b85252-6e34-43c5-a048-52fe105f2f93" Oct 08 14:37:31 crc kubenswrapper[4624]: E1008 14:37:31.476021 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-dc5gv" podUID="5b5f14f4-2722-46d2-9aa4-958caf004e89" Oct 08 14:37:31 crc kubenswrapper[4624]: I1008 14:37:31.491409 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-dc5gv" event={"ID":"5b5f14f4-2722-46d2-9aa4-958caf004e89","Type":"ContainerStarted","Data":"870ce1fc40d2fffecfaab9bca0f94a3122e79aae273fd1311ad6aeb1b1f6c38d"} Oct 08 14:37:31 crc kubenswrapper[4624]: I1008 14:37:31.491469 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-775776c574-m7brx" event={"ID":"2f4ed0ab-4cb6-4874-8c0d-cce4fdbfa7c9","Type":"ContainerStarted","Data":"b1bab8b235ae213c5ad319a4aa64ec616ab59d9ea5d82d51572d0dfdbf290c7d"} Oct 08 14:37:31 crc kubenswrapper[4624]: I1008 14:37:31.491484 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-qncx7" event={"ID":"afff9e1e-6c7c-42b8-8099-6817f813ddb5","Type":"ContainerStarted","Data":"bd24664fcfc56f500f3cb3d14598df4e52af4363308e1423c98917ebf7682084"} Oct 
08 14:37:31 crc kubenswrapper[4624]: I1008 14:37:31.501799 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-hm59s" event={"ID":"46bfb8aa-1ae5-43b4-88e9-5175655832aa","Type":"ContainerStarted","Data":"36e50f1161beb1332229345da30c83603b9455160c8e7350cbfde100c6209d7f"} Oct 08 14:37:31 crc kubenswrapper[4624]: I1008 14:37:31.503850 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vzbcj" event={"ID":"4fed85c8-e5c1-40db-9799-64b1705f9d86","Type":"ContainerStarted","Data":"456c07b548ebbc6c92b79f886c79917f06cc81055089c997d934e5cda62fd41f"} Oct 08 14:37:31 crc kubenswrapper[4624]: I1008 14:37:31.507098 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd" event={"ID":"f4fff91a-a1f8-4e66-9955-005bfa78dfe6","Type":"ContainerStarted","Data":"ae7babfe27611cab05a5a8588e9c9bc3f1d296bc56ba78d0853efaaa503ede69"} Oct 08 14:37:31 crc kubenswrapper[4624]: I1008 14:37:31.507984 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f96f8c84-s6lmj" event={"ID":"657a98dc-f421-4483-9354-28eeb59bb8a0","Type":"ContainerStarted","Data":"a11f811618a56f32e3702edff04883215464d4d4ed6d6e98ff95e3620fa7fcb6"} Oct 08 14:37:31 crc kubenswrapper[4624]: I1008 14:37:31.508710 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-5nntt" event={"ID":"9dbf7483-c352-4fd8-b0e0-96acf41616b0","Type":"ContainerStarted","Data":"d06f599c61eb3f241ec658b47746eca3d410ae5087e75f9d06f3244856f00a84"} Oct 08 14:37:31 crc kubenswrapper[4624]: I1008 14:37:31.511452 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-59578bc799-q7qf9" event={"ID":"c23e992e-ee8f-4d3a-9a3f-9f35a77f60fe","Type":"ContainerStarted","Data":"6d17dba990f42c75b0f77363fd05d4c4100520a973721a62b14852e16d3eff06"} Oct 08 14:37:31 crc kubenswrapper[4624]: I1008 14:37:31.511478 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-59578bc799-q7qf9" event={"ID":"c23e992e-ee8f-4d3a-9a3f-9f35a77f60fe","Type":"ContainerStarted","Data":"1ac63111727cbd1c12ef3fdf8011a9c298c767a9b6770e4e289301225a2bac23"} Oct 08 14:37:31 crc kubenswrapper[4624]: E1008 14:37:31.513138 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:582f7b1e411961b69f2e3c6b346aa25759b89f7720ed3fade1d363bf5d2dffc8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-59578bc799-q7qf9" podUID="c23e992e-ee8f-4d3a-9a3f-9f35a77f60fe" Oct 08 14:37:31 crc kubenswrapper[4624]: I1008 14:37:31.515752 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-zljbd" event={"ID":"afba2503-8832-4a9f-8246-390f7ae79b71","Type":"ContainerStarted","Data":"4bb8cf11a65fc226a8368f4709833c1cdd961c31c710f2e6e5fcd08326e1d6bc"} Oct 08 14:37:31 crc kubenswrapper[4624]: I1008 14:37:31.515780 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-zljbd" 
event={"ID":"afba2503-8832-4a9f-8246-390f7ae79b71","Type":"ContainerStarted","Data":"24299e2996f201d108d34dad56e799990cd06d876d856bf7c22509b07e683f6f"} Oct 08 14:37:31 crc kubenswrapper[4624]: E1008 14:37:31.517587 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:79b43a69884631c635d2164b95a2d4ec68f5cb33f96da14764f1c710880f3997\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-zljbd" podUID="afba2503-8832-4a9f-8246-390f7ae79b71" Oct 08 14:37:31 crc kubenswrapper[4624]: I1008 14:37:31.547022 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-bhvnd" event={"ID":"6b00e056-2cc0-4eb2-85f9-8fc7197dc67a","Type":"ContainerStarted","Data":"25ac264f3ed82771b2618fb13c811ac98df10561a312bbb54f37b17afdefac2f"} Oct 08 14:37:31 crc kubenswrapper[4624]: I1008 14:37:31.547075 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-bhvnd" event={"ID":"6b00e056-2cc0-4eb2-85f9-8fc7197dc67a","Type":"ContainerStarted","Data":"0a8449e4d841323a90eab510065ddc940cacb676fec41a69637ae7af5bc3041c"} Oct 08 14:37:31 crc kubenswrapper[4624]: E1008 14:37:31.561148 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:5cfb2ae1092445950b39dd59caa9a8c9367f42fb8353a8c3848d3bc729f24492\\\"\"" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-bhvnd" podUID="6b00e056-2cc0-4eb2-85f9-8fc7197dc67a" Oct 08 14:37:32 crc kubenswrapper[4624]: I1008 14:37:32.566122 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7984bdc97c-5nw99" event={"ID":"3b136221-5f6b-451f-807a-5b66f856daa4","Type":"ContainerStarted","Data":"1617bee9d52c21027dc1feb10f72545eb4811892324cddb0c6598adacc1d4df7"} Oct 08 14:37:32 crc kubenswrapper[4624]: I1008 14:37:32.566819 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7984bdc97c-5nw99" Oct 08 14:37:32 crc kubenswrapper[4624]: E1008 14:37:32.569129 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:582f7b1e411961b69f2e3c6b346aa25759b89f7720ed3fade1d363bf5d2dffc8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-59578bc799-q7qf9" podUID="c23e992e-ee8f-4d3a-9a3f-9f35a77f60fe" Oct 08 14:37:32 crc kubenswrapper[4624]: E1008 14:37:32.569533 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:79b43a69884631c635d2164b95a2d4ec68f5cb33f96da14764f1c710880f3997\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-zljbd" podUID="afba2503-8832-4a9f-8246-390f7ae79b71" Oct 08 14:37:32 crc kubenswrapper[4624]: E1008 14:37:32.569589 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-dc5gv" podUID="5b5f14f4-2722-46d2-9aa4-958caf004e89" Oct 08 14:37:32 crc kubenswrapper[4624]: E1008 14:37:32.569657 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:e4ae07e859166fc5e2cb4f8e0e2c3358b9d2e2d6721a3864d2e0c651d36698ca\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5dd4499c96-l4cbt" podUID="60b85252-6e34-43c5-a048-52fe105f2f93" Oct 08 14:37:32 crc kubenswrapper[4624]: E1008 14:37:32.569705 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:5cfb2ae1092445950b39dd59caa9a8c9367f42fb8353a8c3848d3bc729f24492\\\"\"" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-bhvnd" podUID="6b00e056-2cc0-4eb2-85f9-8fc7197dc67a" Oct 08 14:37:32 crc kubenswrapper[4624]: I1008 14:37:32.672781 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7984bdc97c-5nw99" podStartSLOduration=4.672755763 podStartE2EDuration="4.672755763s" podCreationTimestamp="2025-10-08 14:37:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:37:32.665813966 +0000 UTC m=+877.816749043" watchObservedRunningTime="2025-10-08 14:37:32.672755763 +0000 UTC m=+877.823690840" Oct 08 14:37:39 crc kubenswrapper[4624]: I1008 14:37:39.921023 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7984bdc97c-5nw99" Oct 08 14:37:43 crc kubenswrapper[4624]: E1008 14:37:43.616574 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:b2e9acf568a48c28cf2aed6012e432eeeb7d5f0eb11878fc91b62bc34cba10cd" Oct 08 14:37:43 crc kubenswrapper[4624]: E1008 14:37:43.617215 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:b2e9acf568a48c28cf2aed6012e432eeeb7d5f0eb11878fc91b62bc34cba10cd,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bvjtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-57bb74c7bf-8zzlr_openstack-operators(84583328-8cef-49aa-b812-6f550d1dd71f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 14:37:44 crc kubenswrapper[4624]: E1008 14:37:44.952412 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e" Oct 08 14:37:44 crc kubenswrapper[4624]: E1008 14:37:44.952840 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gvhjh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f4d5dfdc6-vzbcj_openstack-operators(4fed85c8-e5c1-40db-9799-64b1705f9d86): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 14:37:45 crc kubenswrapper[4624]: E1008 14:37:45.453449 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:47278ed28e02df00892f941763aa0d69547327318e8a983e07f4577acd288167" Oct 08 14:37:45 crc kubenswrapper[4624]: E1008 14:37:45.453611 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:47278ed28e02df00892f941763aa0d69547327318e8a983e07f4577acd288167,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cfwvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-5777b4f897-fr9v7_openstack-operators(5e88daf4-d403-4ba8-827f-a9972c5e40bf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 14:37:46 crc kubenswrapper[4624]: E1008 14:37:46.118842 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:33652e75a03a058769019fe8d8c51585a6eeefef5e1ecb96f9965434117954f2" Oct 08 14:37:46 crc kubenswrapper[4624]: E1008 14:37:46.118987 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:33652e75a03a058769019fe8d8c51585a6eeefef5e1ecb96f9965434117954f2,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7g7zp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-797d478b46-tg58g_openstack-operators(25e6b130-d820-475c-aae6-bed0dfbd0d0f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 14:37:46 crc kubenswrapper[4624]: E1008 14:37:46.564552 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:ec11cb8711bd1af22db3c84aa854349ee46191add3db45aecfabb1d8410c04d0" Oct 08 14:37:46 crc kubenswrapper[4624]: E1008 14:37:46.565066 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:ec11cb8711bd1af22db3c84aa854349ee46191add3db45aecfabb1d8410c04d0,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-85mrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-6d9967f8dd-zkxcm_openstack-operators(0a8ab8f3-13b9-4f95-b540-ea49d2c5a261): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 14:37:55 crc kubenswrapper[4624]: E1008 14:37:55.577137 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:efa8fb78cffb573d299ffcc7bab1099affd2dbbab222152092b313074306e0a9" Oct 08 14:37:55 crc kubenswrapper[4624]: E1008 14:37:55.577960 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:efa8fb78cffb573d299ffcc7bab1099affd2dbbab222152092b313074306e0a9,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rlft4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-74665f6cdc-wklj4_openstack-operators(de7ff9ef-39f5-4521-856d-28c2665e7893): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 08 14:37:55 crc kubenswrapper[4624]: E1008 14:37:55.611775 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:551b59e107c9812f7ad7aa06577376b0dcb58ff9498a41d5d5273e60e20ba7e4"
Oct 08 14:37:55 crc kubenswrapper[4624]: E1008 14:37:55.612056 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:551b59e107c9812f7ad7aa06577376b0dcb58ff9498a41d5d5273e60e20ba7e4,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tvdvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f96f8c84-s6lmj_openstack-operators(657a98dc-f421-4483-9354-28eeb59bb8a0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 08 14:37:56 crc kubenswrapper[4624]: E1008 14:37:56.204305 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:e4ae07e859166fc5e2cb4f8e0e2c3358b9d2e2d6721a3864d2e0c651d36698ca"
Oct 08 14:37:56 crc kubenswrapper[4624]: E1008 14:37:56.204788 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:e4ae07e859166fc5e2cb4f8e0e2c3358b9d2e2d6721a3864d2e0c651d36698ca,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pp6n7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5dd4499c96-l4cbt_openstack-operators(60b85252-6e34-43c5-a048-52fe105f2f93): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 08 14:37:56 crc kubenswrapper[4624]: E1008 14:37:56.206081 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-5dd4499c96-l4cbt" podUID="60b85252-6e34-43c5-a048-52fe105f2f93"
Oct 08 14:37:56 crc kubenswrapper[4624]: E1008 14:37:56.347031 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vzbcj" podUID="4fed85c8-e5c1-40db-9799-64b1705f9d86"
Oct 08 14:37:56 crc kubenswrapper[4624]: E1008 14:37:56.361219 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-fr9v7" podUID="5e88daf4-d403-4ba8-827f-a9972c5e40bf"
Oct 08 14:37:56 crc kubenswrapper[4624]: I1008 14:37:56.725401 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-fr9v7" event={"ID":"5e88daf4-d403-4ba8-827f-a9972c5e40bf","Type":"ContainerStarted","Data":"f187b2daeef90716b2456ce296461c70a3fb5dafb58cc87f97713e32c4ed51c5"}
Oct 08 14:37:56 crc kubenswrapper[4624]: I1008 14:37:56.726579 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vzbcj" event={"ID":"4fed85c8-e5c1-40db-9799-64b1705f9d86","Type":"ContainerStarted","Data":"5ae6438b6600970ba7cc08ed1661da35997b5fcf9a74134c56526a1e52f86f59"}
Oct 08 14:37:57 crc kubenswrapper[4624]: E1008 14:37:57.144948 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-8zzlr" podUID="84583328-8cef-49aa-b812-6f550d1dd71f"
Oct 08 14:37:57 crc kubenswrapper[4624]: I1008 14:37:57.735653 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-8zzlr" event={"ID":"84583328-8cef-49aa-b812-6f550d1dd71f","Type":"ContainerStarted","Data":"d70e68cafc53b80fd0ed84ed57f5c12cfbd8b712cdb6e0bcfcd3aa3a9a0ae096"}
event={"ID":"84583328-8cef-49aa-b812-6f550d1dd71f","Type":"ContainerStarted","Data":"d70e68cafc53b80fd0ed84ed57f5c12cfbd8b712cdb6e0bcfcd3aa3a9a0ae096"} Oct 08 14:37:58 crc kubenswrapper[4624]: E1008 14:37:58.985439 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-tg58g" podUID="25e6b130-d820-475c-aae6-bed0dfbd0d0f" Oct 08 14:37:59 crc kubenswrapper[4624]: E1008 14:37:59.197093 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-zkxcm" podUID="0a8ab8f3-13b9-4f95-b540-ea49d2c5a261" Oct 08 14:37:59 crc kubenswrapper[4624]: E1008 14:37:59.346700 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-74665f6cdc-wklj4" podUID="de7ff9ef-39f5-4521-856d-28c2665e7893" Oct 08 14:37:59 crc kubenswrapper[4624]: E1008 14:37:59.503846 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-6f96f8c84-s6lmj" podUID="657a98dc-f421-4483-9354-28eeb59bb8a0" Oct 08 14:37:59 crc kubenswrapper[4624]: I1008 14:37:59.770167 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-775776c574-m7brx" event={"ID":"2f4ed0ab-4cb6-4874-8c0d-cce4fdbfa7c9","Type":"ContainerStarted","Data":"c1016564b3364d45473b551a3ba9a11322780f9cf337009b3ecbb6602ffd5273"} Oct 08 14:37:59 crc kubenswrapper[4624]: I1008 14:37:59.771510 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-fnwtr" event={"ID":"d957903e-a551-41a6-8360-9af30306414f","Type":"ContainerStarted","Data":"de0baeee80d0e2782a9f0848408ad53a90ac2bb84a5f221b439bdd2814b8394d"} Oct 08 14:37:59 crc kubenswrapper[4624]: I1008 14:37:59.774076 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-74665f6cdc-wklj4" event={"ID":"de7ff9ef-39f5-4521-856d-28c2665e7893","Type":"ContainerStarted","Data":"45669aff174aedf5277b93987cb721de96daa74650f2974729c2c70666732c4b"} Oct 08 14:37:59 crc kubenswrapper[4624]: E1008 14:37:59.778134 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:efa8fb78cffb573d299ffcc7bab1099affd2dbbab222152092b313074306e0a9\\\"\"" pod="openstack-operators/test-operator-controller-manager-74665f6cdc-wklj4" podUID="de7ff9ef-39f5-4521-856d-28c2665e7893" Oct 08 14:37:59 crc kubenswrapper[4624]: I1008 14:37:59.780348 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-664664cb68-5bwth" event={"ID":"2e68c8fe-365a-4d66-bbbd-0cac98993f72","Type":"ContainerStarted","Data":"f4aa81760c9b52642c459e3848d6b393ee0998e6b266d4f776d900053da1f649"} Oct 08 
14:37:59 crc kubenswrapper[4624]: I1008 14:37:59.786315 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd" event={"ID":"f4fff91a-a1f8-4e66-9955-005bfa78dfe6","Type":"ContainerStarted","Data":"c024b70959e12a9a38d7558cebc55abd78793ea9dca44f72d831a230ef63701f"} Oct 08 14:37:59 crc kubenswrapper[4624]: I1008 14:37:59.790950 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-5nntt" event={"ID":"9dbf7483-c352-4fd8-b0e0-96acf41616b0","Type":"ContainerStarted","Data":"5334a35739d069320907f2f6806c18396f7fbc95a248b4df9c91d5f8906681e9"} Oct 08 14:37:59 crc kubenswrapper[4624]: I1008 14:37:59.791955 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f96f8c84-s6lmj" event={"ID":"657a98dc-f421-4483-9354-28eeb59bb8a0","Type":"ContainerStarted","Data":"319ebfba0889e53daf04e43d6815a7b235ac4b7e449aa81150bcdcf1e9f59123"} Oct 08 14:37:59 crc kubenswrapper[4624]: I1008 14:37:59.800002 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-tg58g" event={"ID":"25e6b130-d820-475c-aae6-bed0dfbd0d0f","Type":"ContainerStarted","Data":"b6746feacf5979a3886ce5baf8a17c0004378103fcc21b6034cfddb33240becb"} Oct 08 14:37:59 crc kubenswrapper[4624]: E1008 14:37:59.804380 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:551b59e107c9812f7ad7aa06577376b0dcb58ff9498a41d5d5273e60e20ba7e4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f96f8c84-s6lmj" podUID="657a98dc-f421-4483-9354-28eeb59bb8a0" Oct 08 14:37:59 crc kubenswrapper[4624]: I1008 14:37:59.807449 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-zkxcm" event={"ID":"0a8ab8f3-13b9-4f95-b540-ea49d2c5a261","Type":"ContainerStarted","Data":"55a7530798fdaa6c052829f098a12e2b80394302c933b58d0350203c6935d0ef"} Oct 08 14:37:59 crc kubenswrapper[4624]: I1008 14:37:59.849129 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-ntt7z" event={"ID":"c1f040ec-db8e-42de-8d6c-7758f2f45ecc","Type":"ContainerStarted","Data":"b4af70ef26f5b8db01f47c1af74e786d9e3020d764b52b7bc74be7250a851ec9"} Oct 08 14:38:00 crc kubenswrapper[4624]: I1008 14:38:00.857014 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-qncx7" event={"ID":"afff9e1e-6c7c-42b8-8099-6817f813ddb5","Type":"ContainerStarted","Data":"a8e54e10a81ad6232f4d4baa25cab1ee39f5670b02879a8a3dbcd391d6e91338"} Oct 08 14:38:00 crc kubenswrapper[4624]: I1008 14:38:00.858607 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-hm59s" event={"ID":"46bfb8aa-1ae5-43b4-88e9-5175655832aa","Type":"ContainerStarted","Data":"cd6b67c0a233d4c91034baf398131c6716ad5ff7807d3e56e1eeda3b1e31aed3"} Oct 08 14:38:00 crc kubenswrapper[4624]: I1008 14:38:00.859654 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-rdwmx" 
event={"ID":"7e4bdb15-7f2c-4a03-8882-00312974ef50","Type":"ContainerStarted","Data":"ad2dbd331cff2e2df5aa07bd433959ef3a272961bdbdc8b262ae74179b2dd60c"} Oct 08 14:38:00 crc kubenswrapper[4624]: I1008 14:38:00.861197 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vzbcj" event={"ID":"4fed85c8-e5c1-40db-9799-64b1705f9d86","Type":"ContainerStarted","Data":"afe25afde80ba4858b09ac2067d481c42b4984a1d2d8e340b1341141c3c51f53"} Oct 08 14:38:00 crc kubenswrapper[4624]: I1008 14:38:00.862414 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-bxlpf" event={"ID":"d14acd1f-d497-4c71-8e2a-24c991118c01","Type":"ContainerStarted","Data":"c8d9910bc9611097bbfe4e16bc0fc7869f93dec23a689782031a13105d79a7a3"} Oct 08 14:38:00 crc kubenswrapper[4624]: I1008 14:38:00.864177 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-ntt7z" event={"ID":"c1f040ec-db8e-42de-8d6c-7758f2f45ecc","Type":"ContainerStarted","Data":"81b578f835243465048d4d639a0f5e81d3b01a1d4b2152c0b85548230cb8888e"} Oct 08 14:38:00 crc kubenswrapper[4624]: I1008 14:38:00.865478 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-dc5gv" event={"ID":"5b5f14f4-2722-46d2-9aa4-958caf004e89","Type":"ContainerStarted","Data":"c3d2a0a8cde2cf57dcf8589a6dbe08283f199c898ce19a669e9677b688f6d493"} Oct 08 14:38:00 crc kubenswrapper[4624]: I1008 14:38:00.867036 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-59578bc799-q7qf9" event={"ID":"c23e992e-ee8f-4d3a-9a3f-9f35a77f60fe","Type":"ContainerStarted","Data":"9a46e5fbfb23c1e2dfa6d788eed2e02e0e4b48cc0ffeecd7f0d8eb26bb3c5e02"} Oct 08 14:38:00 crc kubenswrapper[4624]: I1008 14:38:00.867521 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-59578bc799-q7qf9" Oct 08 14:38:00 crc kubenswrapper[4624]: E1008 14:38:00.868290 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:efa8fb78cffb573d299ffcc7bab1099affd2dbbab222152092b313074306e0a9\\\"\"" pod="openstack-operators/test-operator-controller-manager-74665f6cdc-wklj4" podUID="de7ff9ef-39f5-4521-856d-28c2665e7893" Oct 08 14:38:00 crc kubenswrapper[4624]: E1008 14:38:00.868605 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:551b59e107c9812f7ad7aa06577376b0dcb58ff9498a41d5d5273e60e20ba7e4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f96f8c84-s6lmj" podUID="657a98dc-f421-4483-9354-28eeb59bb8a0" Oct 08 14:38:00 crc kubenswrapper[4624]: I1008 14:38:00.913272 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-59578bc799-q7qf9" podStartSLOduration=5.539581303 podStartE2EDuration="33.913253502s" podCreationTimestamp="2025-10-08 14:37:27 +0000 UTC" firstStartedPulling="2025-10-08 14:37:30.572383249 +0000 UTC m=+875.723318326" lastFinishedPulling="2025-10-08 14:37:58.946055448 +0000 UTC m=+904.096990525" 
observedRunningTime="2025-10-08 14:38:00.905842192 +0000 UTC m=+906.056777269" watchObservedRunningTime="2025-10-08 14:38:00.913253502 +0000 UTC m=+906.064188569" Oct 08 14:38:01 crc kubenswrapper[4624]: I1008 14:38:01.874696 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-fr9v7" event={"ID":"5e88daf4-d403-4ba8-827f-a9972c5e40bf","Type":"ContainerStarted","Data":"24ed135ab19244da37c98d781526a205d4ae6a3f2e752b766ed3798a3ef5ec7f"} Oct 08 14:38:03 crc kubenswrapper[4624]: I1008 14:38:03.890402 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-664664cb68-5bwth" event={"ID":"2e68c8fe-365a-4d66-bbbd-0cac98993f72","Type":"ContainerStarted","Data":"a47e763d4adf40b83a00812983536576c5c5884adf13dfb6f0b9dec5f9f13362"} Oct 08 14:38:03 crc kubenswrapper[4624]: I1008 14:38:03.892615 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd" event={"ID":"f4fff91a-a1f8-4e66-9955-005bfa78dfe6","Type":"ContainerStarted","Data":"a39fd91303b638eb52eef18734b031b69543defa5fe49c69701b752b2c497f6c"} Oct 08 14:38:03 crc kubenswrapper[4624]: I1008 14:38:03.895602 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-5nntt" event={"ID":"9dbf7483-c352-4fd8-b0e0-96acf41616b0","Type":"ContainerStarted","Data":"1ee842bb8d9bcb5e52248bf033cfb6d49de6cc62de4f50e42d09d8b27dbeec84"} Oct 08 14:38:03 crc kubenswrapper[4624]: I1008 14:38:03.897325 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-qncx7" event={"ID":"afff9e1e-6c7c-42b8-8099-6817f813ddb5","Type":"ContainerStarted","Data":"bb0591051f139c9ac616dfd4069dd7cbe43a348e0b20b3610daa906e377e7e41"} Oct 08 14:38:03 crc kubenswrapper[4624]: I1008 14:38:03.901315 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-8zzlr" event={"ID":"84583328-8cef-49aa-b812-6f550d1dd71f","Type":"ContainerStarted","Data":"70366e7f522ab617267cbeaff4bc51a7a74b7a44d07ff3e087f66e1927d9c69e"} Oct 08 14:38:03 crc kubenswrapper[4624]: I1008 14:38:03.903603 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-zljbd" event={"ID":"afba2503-8832-4a9f-8246-390f7ae79b71","Type":"ContainerStarted","Data":"0887be4a676b240a73b72f09131e916a6a9e4bbcfb80ac53c8f977ef51cb9345"} Oct 08 14:38:03 crc kubenswrapper[4624]: I1008 14:38:03.905437 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-bhvnd" event={"ID":"6b00e056-2cc0-4eb2-85f9-8fc7197dc67a","Type":"ContainerStarted","Data":"edeff6b997d4965d2555e36bfd4fe558485491a02a610a731ff1a63c03e60081"} Oct 08 14:38:03 crc kubenswrapper[4624]: I1008 14:38:03.907136 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-fnwtr" event={"ID":"d957903e-a551-41a6-8360-9af30306414f","Type":"ContainerStarted","Data":"ce90e6ca7a957c40620200da6b25b4724ff9ccf8a9a92863a511c72d9379b58b"} Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.915884 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-rdwmx" 
event={"ID":"7e4bdb15-7f2c-4a03-8882-00312974ef50","Type":"ContainerStarted","Data":"58b20ee285f322915b4aec104a7fe6fd9261934c3b53abe9a75e069d44032522"} Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.916351 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-rdwmx" Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.918054 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-bxlpf" event={"ID":"d14acd1f-d497-4c71-8e2a-24c991118c01","Type":"ContainerStarted","Data":"cba21b020ef1c6056168ca03695e31e4787a1df9430f2b2dfaaf446f8ef79118"} Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.918191 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-bxlpf" Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.920221 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-zkxcm" event={"ID":"0a8ab8f3-13b9-4f95-b540-ea49d2c5a261","Type":"ContainerStarted","Data":"48db9a5bf4fd32543f05aacca16c3ab6e7a057e11c59542dcf8cc4385ea1f043"} Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.920580 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-zkxcm" Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.924899 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-775776c574-m7brx" event={"ID":"2f4ed0ab-4cb6-4874-8c0d-cce4fdbfa7c9","Type":"ContainerStarted","Data":"5173e6c0b9a40898acebd27d6acd0cac1b64b8238a3c473576b90954ce101d1d"} Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.925778 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-775776c574-m7brx" Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.928693 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-775776c574-m7brx" Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.929555 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-tg58g" event={"ID":"25e6b130-d820-475c-aae6-bed0dfbd0d0f","Type":"ContainerStarted","Data":"76359daa1f50209a34ac9bd42dddde3620e51f8d5c220776a27442c99b1fd816"} Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.929671 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-tg58g" Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.931919 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-hm59s" event={"ID":"46bfb8aa-1ae5-43b4-88e9-5175655832aa","Type":"ContainerStarted","Data":"461c7e6a9bab5bf7bbac5193eeec3c6c3edf3ddc605bc6279d1d975bf3cafb94"} Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.932249 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-zljbd" Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.932511 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-8zzlr" Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.932617 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-bhvnd" Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.933952 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-5nntt" Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.933982 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-qncx7" Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.933998 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-664664cb68-5bwth" Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.934010 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-hm59s" Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.937294 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-664664cb68-5bwth" Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.938066 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-qncx7" Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.938424 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-5nntt" Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.942028 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-rdwmx" podStartSLOduration=9.6379477 podStartE2EDuration="37.942005441s" podCreationTimestamp="2025-10-08 14:37:27 +0000 UTC" firstStartedPulling="2025-10-08 14:37:29.237960804 +0000 UTC m=+874.388895881" lastFinishedPulling="2025-10-08 14:37:57.542018545 +0000 UTC m=+902.692953622" observedRunningTime="2025-10-08 14:38:04.936396269 +0000 UTC m=+910.087331346" watchObservedRunningTime="2025-10-08 14:38:04.942005441 +0000 UTC m=+910.092940518" Oct 08 14:38:04 crc kubenswrapper[4624]: I1008 14:38:04.958339 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-775776c574-m7brx" podStartSLOduration=10.451751046 podStartE2EDuration="36.958319954s" podCreationTimestamp="2025-10-08 14:37:28 +0000 UTC" firstStartedPulling="2025-10-08 14:37:30.528202952 +0000 UTC m=+875.679138029" lastFinishedPulling="2025-10-08 14:37:57.03477186 +0000 UTC m=+902.185706937" observedRunningTime="2025-10-08 14:38:04.956948502 +0000 UTC m=+910.107883589" watchObservedRunningTime="2025-10-08 14:38:04.958319954 +0000 UTC m=+910.109255031" Oct 08 14:38:05 crc kubenswrapper[4624]: I1008 14:38:05.030272 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-ntt7z" podStartSLOduration=14.853474935 podStartE2EDuration="38.030250054s" podCreationTimestamp="2025-10-08 14:37:27 +0000 UTC" firstStartedPulling="2025-10-08 14:37:29.344101223 +0000 UTC m=+874.495036290" lastFinishedPulling="2025-10-08 
Oct 08 14:38:05 crc kubenswrapper[4624]: I1008 14:38:05.030389 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vzbcj" podStartSLOduration=8.465183852 podStartE2EDuration="37.030381997s" podCreationTimestamp="2025-10-08 14:37:28 +0000 UTC" firstStartedPulling="2025-10-08 14:37:30.539419538 +0000 UTC m=+875.690354615" lastFinishedPulling="2025-10-08 14:37:59.104617683 +0000 UTC m=+904.255552760" observedRunningTime="2025-10-08 14:38:05.006773932 +0000 UTC m=+910.157709009" watchObservedRunningTime="2025-10-08 14:38:05.030381997 +0000 UTC m=+910.181317074"
Oct 08 14:38:05 crc kubenswrapper[4624]: I1008 14:38:05.059505 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-bxlpf" podStartSLOduration=10.834616322 podStartE2EDuration="38.059474281s" podCreationTimestamp="2025-10-08 14:37:27 +0000 UTC" firstStartedPulling="2025-10-08 14:37:29.809978983 +0000 UTC m=+874.960914060" lastFinishedPulling="2025-10-08 14:37:57.034836942 +0000 UTC m=+902.185772019" observedRunningTime="2025-10-08 14:38:05.057788781 +0000 UTC m=+910.208723878" watchObservedRunningTime="2025-10-08 14:38:05.059474281 +0000 UTC m=+910.210409358"
Oct 08 14:38:05 crc kubenswrapper[4624]: I1008 14:38:05.089057 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-fr9v7" podStartSLOduration=8.864656247 podStartE2EDuration="38.089027875s" podCreationTimestamp="2025-10-08 14:37:27 +0000 UTC" firstStartedPulling="2025-10-08 14:37:29.934687765 +0000 UTC m=+875.085622842" lastFinishedPulling="2025-10-08 14:37:59.159059393 +0000 UTC m=+904.309994470" observedRunningTime="2025-10-08 14:38:05.083804502 +0000 UTC m=+910.234739579" watchObservedRunningTime="2025-10-08 14:38:05.089027875 +0000 UTC m=+910.239962962"
Oct 08 14:38:05 crc kubenswrapper[4624]: I1008 14:38:05.103811 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-8zzlr" podStartSLOduration=8.928741523 podStartE2EDuration="38.103781062s" podCreationTimestamp="2025-10-08 14:37:27 +0000 UTC" firstStartedPulling="2025-10-08 14:37:29.972018818 +0000 UTC m=+875.122953895" lastFinishedPulling="2025-10-08 14:37:59.147058357 +0000 UTC m=+904.297993434" observedRunningTime="2025-10-08 14:38:05.10155951 +0000 UTC m=+910.252494587" watchObservedRunningTime="2025-10-08 14:38:05.103781062 +0000 UTC m=+910.254716139"
Oct 08 14:38:05 crc kubenswrapper[4624]: I1008 14:38:05.124546 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-zljbd" podStartSLOduration=9.58934371 podStartE2EDuration="38.124522019s" podCreationTimestamp="2025-10-08 14:37:27 +0000 UTC" firstStartedPulling="2025-10-08 14:37:30.591244291 +0000 UTC m=+875.742179368" lastFinishedPulling="2025-10-08 14:37:59.1264226 +0000 UTC m=+904.277357677" observedRunningTime="2025-10-08 14:38:05.114077944 +0000 UTC m=+910.265013021" watchObservedRunningTime="2025-10-08 14:38:05.124522019 +0000 UTC m=+910.275457096"
Oct 08 14:38:05 crc kubenswrapper[4624]: I1008 14:38:05.139386 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-hm59s" podStartSLOduration=12.492456289 podStartE2EDuration="38.139368628s" podCreationTimestamp="2025-10-08 14:37:27 +0000 UTC" firstStartedPulling="2025-10-08 14:37:30.539845849 +0000 UTC m=+875.690780926" lastFinishedPulling="2025-10-08 14:37:56.186758188 +0000 UTC m=+901.337693265" observedRunningTime="2025-10-08 14:38:05.132802504 +0000 UTC m=+910.283737581" watchObservedRunningTime="2025-10-08 14:38:05.139368628 +0000 UTC m=+910.290303705"
Oct 08 14:38:05 crc kubenswrapper[4624]: I1008 14:38:05.151072 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-dc5gv" podStartSLOduration=8.777291131 podStartE2EDuration="37.151051812s" podCreationTimestamp="2025-10-08 14:37:28 +0000 UTC" firstStartedPulling="2025-10-08 14:37:30.572650126 +0000 UTC m=+875.723585203" lastFinishedPulling="2025-10-08 14:37:58.946410807 +0000 UTC m=+904.097345884" observedRunningTime="2025-10-08 14:38:05.150687184 +0000 UTC m=+910.301622271" watchObservedRunningTime="2025-10-08 14:38:05.151051812 +0000 UTC m=+910.301986889"
Oct 08 14:38:05 crc kubenswrapper[4624]: I1008 14:38:05.198021 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-5nntt" podStartSLOduration=11.600783855 podStartE2EDuration="38.197997016s" podCreationTimestamp="2025-10-08 14:37:27 +0000 UTC" firstStartedPulling="2025-10-08 14:37:30.438893053 +0000 UTC m=+875.589828130" lastFinishedPulling="2025-10-08 14:37:57.036106214 +0000 UTC m=+902.187041291" observedRunningTime="2025-10-08 14:38:05.197073014 +0000 UTC m=+910.348008101" watchObservedRunningTime="2025-10-08 14:38:05.197997016 +0000 UTC m=+910.348932093"
Oct 08 14:38:05 crc kubenswrapper[4624]: I1008 14:38:05.200129 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-bhvnd" podStartSLOduration=9.850260485 podStartE2EDuration="38.200111765s" podCreationTimestamp="2025-10-08 14:37:27 +0000 UTC" firstStartedPulling="2025-10-08 14:37:30.758506609 +0000 UTC m=+875.909441686" lastFinishedPulling="2025-10-08 14:37:59.108357889 +0000 UTC m=+904.259292966" observedRunningTime="2025-10-08 14:38:05.181473987 +0000 UTC m=+910.332409074" watchObservedRunningTime="2025-10-08 14:38:05.200111765 +0000 UTC m=+910.351046852"
Oct 08 14:38:05 crc kubenswrapper[4624]: I1008 14:38:05.219069 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-zkxcm" podStartSLOduration=3.807831215 podStartE2EDuration="38.2190476s" podCreationTimestamp="2025-10-08 14:37:27 +0000 UTC" firstStartedPulling="2025-10-08 14:37:29.960873424 +0000 UTC m=+875.111808501" lastFinishedPulling="2025-10-08 14:38:04.372089809 +0000 UTC m=+909.523024886" observedRunningTime="2025-10-08 14:38:05.216265225 +0000 UTC m=+910.367200312" watchObservedRunningTime="2025-10-08 14:38:05.2190476 +0000 UTC m=+910.369982677"
Oct 08 14:38:05 crc kubenswrapper[4624]: I1008 14:38:05.242861 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-664664cb68-5bwth" podStartSLOduration=11.622267021999999 podStartE2EDuration="37.242836209s" podCreationTimestamp="2025-10-08 14:37:28 +0000 UTC" firstStartedPulling="2025-10-08 14:37:30.566048617 +0000 UTC m=+875.716983694" lastFinishedPulling="2025-10-08 14:37:56.186617814 +0000 UTC m=+901.337552881" observedRunningTime="2025-10-08 14:38:05.238468086 +0000 UTC m=+910.389403163" watchObservedRunningTime="2025-10-08 14:38:05.242836209 +0000 UTC m=+910.393771286"
Oct 08 14:38:05 crc kubenswrapper[4624]: I1008 14:38:05.256969 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-fnwtr" podStartSLOduration=12.042075617 podStartE2EDuration="38.25692146s" podCreationTimestamp="2025-10-08 14:37:27 +0000 UTC" firstStartedPulling="2025-10-08 14:37:29.971680189 +0000 UTC m=+875.122615266" lastFinishedPulling="2025-10-08 14:37:56.186526032 +0000 UTC m=+901.337461109" observedRunningTime="2025-10-08 14:38:05.256755366 +0000 UTC m=+910.407690443" watchObservedRunningTime="2025-10-08 14:38:05.25692146 +0000 UTC m=+910.407856537"
Oct 08 14:38:05 crc kubenswrapper[4624]: I1008 14:38:05.288119 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd" podStartSLOduration=11.140063815 podStartE2EDuration="37.288098603s" podCreationTimestamp="2025-10-08 14:37:28 +0000 UTC" firstStartedPulling="2025-10-08 14:37:30.886727422 +0000 UTC m=+876.037662499" lastFinishedPulling="2025-10-08 14:37:57.03476221 +0000 UTC m=+902.185697287" observedRunningTime="2025-10-08 14:38:05.287020717 +0000 UTC m=+910.437955794" watchObservedRunningTime="2025-10-08 14:38:05.288098603 +0000 UTC m=+910.439033680"
Oct 08 14:38:05 crc kubenswrapper[4624]: I1008 14:38:05.310115 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-qncx7" podStartSLOduration=12.74153526 podStartE2EDuration="38.310094539s" podCreationTimestamp="2025-10-08 14:37:27 +0000 UTC" firstStartedPulling="2025-10-08 14:37:30.619512312 +0000 UTC m=+875.770447389" lastFinishedPulling="2025-10-08 14:37:56.188071601 +0000 UTC m=+901.339006668" observedRunningTime="2025-10-08 14:38:05.306401623 +0000 UTC m=+910.457336720" watchObservedRunningTime="2025-10-08 14:38:05.310094539 +0000 UTC m=+910.461029616"
Oct 08 14:38:05 crc kubenswrapper[4624]: I1008 14:38:05.341534 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-tg58g" podStartSLOduration=4.398203747 podStartE2EDuration="38.341511678s" podCreationTimestamp="2025-10-08 14:37:27 +0000 UTC" firstStartedPulling="2025-10-08 14:37:30.430392086 +0000 UTC m=+875.581327163" lastFinishedPulling="2025-10-08 14:38:04.373700017 +0000 UTC m=+909.524635094" observedRunningTime="2025-10-08 14:38:05.324186641 +0000 UTC m=+910.475121718" watchObservedRunningTime="2025-10-08 14:38:05.341511678 +0000 UTC m=+910.492446755"
Oct 08 14:38:05 crc kubenswrapper[4624]: I1008 14:38:05.940699 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-bxlpf"
Oct 08 14:38:05 crc kubenswrapper[4624]: I1008 14:38:05.942195 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-hm59s"
Oct 08 14:38:05 crc kubenswrapper[4624]: I1008 14:38:05.942416 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-rdwmx"
pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-rdwmx" Oct 08 14:38:08 crc kubenswrapper[4624]: I1008 14:38:08.079182 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-ntt7z" Oct 08 14:38:08 crc kubenswrapper[4624]: I1008 14:38:08.081500 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-ntt7z" Oct 08 14:38:08 crc kubenswrapper[4624]: I1008 14:38:08.245726 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-fnwtr" Oct 08 14:38:08 crc kubenswrapper[4624]: I1008 14:38:08.247708 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-fnwtr" Oct 08 14:38:08 crc kubenswrapper[4624]: I1008 14:38:08.314666 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-zljbd" Oct 08 14:38:08 crc kubenswrapper[4624]: I1008 14:38:08.510546 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-fr9v7" Oct 08 14:38:08 crc kubenswrapper[4624]: I1008 14:38:08.512755 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-fr9v7" Oct 08 14:38:08 crc kubenswrapper[4624]: I1008 14:38:08.518771 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-59578bc799-q7qf9" Oct 08 14:38:08 crc kubenswrapper[4624]: I1008 14:38:08.588729 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-8zzlr" Oct 08 14:38:08 crc kubenswrapper[4624]: I1008 14:38:08.882565 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vzbcj" Oct 08 14:38:08 crc kubenswrapper[4624]: I1008 14:38:08.884268 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vzbcj" Oct 08 14:38:09 crc kubenswrapper[4624]: I1008 14:38:09.775873 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-bhvnd" Oct 08 14:38:10 crc kubenswrapper[4624]: I1008 14:38:10.182476 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd" Oct 08 14:38:10 crc kubenswrapper[4624]: I1008 14:38:10.188477 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd" Oct 08 14:38:11 crc kubenswrapper[4624]: E1008 14:38:11.468919 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:e4ae07e859166fc5e2cb4f8e0e2c3358b9d2e2d6721a3864d2e0c651d36698ca\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5dd4499c96-l4cbt" 
podUID="60b85252-6e34-43c5-a048-52fe105f2f93" Oct 08 14:38:12 crc kubenswrapper[4624]: I1008 14:38:12.984037 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f96f8c84-s6lmj" event={"ID":"657a98dc-f421-4483-9354-28eeb59bb8a0","Type":"ContainerStarted","Data":"caff74e301366dfff4a7abe2ecf43ad2d23be6f84b54997c521a123453bd2d3c"} Oct 08 14:38:12 crc kubenswrapper[4624]: I1008 14:38:12.985933 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f96f8c84-s6lmj" Oct 08 14:38:13 crc kubenswrapper[4624]: I1008 14:38:13.005819 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f96f8c84-s6lmj" podStartSLOduration=3.31591984 podStartE2EDuration="45.005801255s" podCreationTimestamp="2025-10-08 14:37:28 +0000 UTC" firstStartedPulling="2025-10-08 14:37:30.468270372 +0000 UTC m=+875.619205449" lastFinishedPulling="2025-10-08 14:38:12.158151797 +0000 UTC m=+917.309086864" observedRunningTime="2025-10-08 14:38:12.999786953 +0000 UTC m=+918.150722030" watchObservedRunningTime="2025-10-08 14:38:13.005801255 +0000 UTC m=+918.156736332" Oct 08 14:38:16 crc kubenswrapper[4624]: I1008 14:38:16.005845 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-74665f6cdc-wklj4" event={"ID":"de7ff9ef-39f5-4521-856d-28c2665e7893","Type":"ContainerStarted","Data":"8e70dab9345668d6cf0d58975538e528084a7b5ed93e857586565ae3cb1c29fb"} Oct 08 14:38:16 crc kubenswrapper[4624]: I1008 14:38:16.007490 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-74665f6cdc-wklj4" Oct 08 14:38:16 crc kubenswrapper[4624]: I1008 14:38:16.039194 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-74665f6cdc-wklj4" podStartSLOduration=3.265450273 podStartE2EDuration="48.039151839s" podCreationTimestamp="2025-10-08 14:37:28 +0000 UTC" firstStartedPulling="2025-10-08 14:37:30.571462246 +0000 UTC m=+875.722397323" lastFinishedPulling="2025-10-08 14:38:15.345163812 +0000 UTC m=+920.496098889" observedRunningTime="2025-10-08 14:38:16.023391609 +0000 UTC m=+921.174326706" watchObservedRunningTime="2025-10-08 14:38:16.039151839 +0000 UTC m=+921.190086926" Oct 08 14:38:18 crc kubenswrapper[4624]: I1008 14:38:18.212750 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-zkxcm" Oct 08 14:38:18 crc kubenswrapper[4624]: I1008 14:38:18.570334 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-tg58g" Oct 08 14:38:18 crc kubenswrapper[4624]: I1008 14:38:18.706712 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f96f8c84-s6lmj" Oct 08 14:38:23 crc kubenswrapper[4624]: I1008 14:38:23.471328 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 14:38:25 crc kubenswrapper[4624]: I1008 14:38:25.082788 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5dd4499c96-l4cbt" 
event={"ID":"60b85252-6e34-43c5-a048-52fe105f2f93","Type":"ContainerStarted","Data":"c916bd7ed1e2982e5ed7422aa8f4fd0ec0ed3ee56cdafb5ab282a658e37ec5fa"} Oct 08 14:38:25 crc kubenswrapper[4624]: I1008 14:38:25.083347 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5dd4499c96-l4cbt" Oct 08 14:38:25 crc kubenswrapper[4624]: I1008 14:38:25.100920 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5dd4499c96-l4cbt" podStartSLOduration=3.183021328 podStartE2EDuration="57.100897112s" podCreationTimestamp="2025-10-08 14:37:28 +0000 UTC" firstStartedPulling="2025-10-08 14:37:30.572366029 +0000 UTC m=+875.723301106" lastFinishedPulling="2025-10-08 14:38:24.490241823 +0000 UTC m=+929.641176890" observedRunningTime="2025-10-08 14:38:25.09912309 +0000 UTC m=+930.250058167" watchObservedRunningTime="2025-10-08 14:38:25.100897112 +0000 UTC m=+930.251832189" Oct 08 14:38:29 crc kubenswrapper[4624]: I1008 14:38:29.182117 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5dd4499c96-l4cbt" Oct 08 14:38:29 crc kubenswrapper[4624]: I1008 14:38:29.293772 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-74665f6cdc-wklj4" Oct 08 14:38:30 crc kubenswrapper[4624]: I1008 14:38:30.077156 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:38:30 crc kubenswrapper[4624]: I1008 14:38:30.077272 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:38:46 crc kubenswrapper[4624]: I1008 14:38:46.668929 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67f8579c9-2x5cl"] Oct 08 14:38:46 crc kubenswrapper[4624]: I1008 14:38:46.670754 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67f8579c9-2x5cl" Oct 08 14:38:46 crc kubenswrapper[4624]: I1008 14:38:46.677541 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Oct 08 14:38:46 crc kubenswrapper[4624]: I1008 14:38:46.677544 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 08 14:38:46 crc kubenswrapper[4624]: I1008 14:38:46.677855 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Oct 08 14:38:46 crc kubenswrapper[4624]: I1008 14:38:46.677699 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-ztf67" Oct 08 14:38:46 crc kubenswrapper[4624]: I1008 14:38:46.713764 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67f8579c9-2x5cl"] Oct 08 14:38:46 crc kubenswrapper[4624]: I1008 14:38:46.765574 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5468b776f7-mwk97"] Oct 08 14:38:46 crc kubenswrapper[4624]: I1008 14:38:46.766952 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5468b776f7-mwk97" Oct 08 14:38:46 crc kubenswrapper[4624]: I1008 14:38:46.771089 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 08 14:38:46 crc kubenswrapper[4624]: I1008 14:38:46.786052 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5468b776f7-mwk97"] Oct 08 14:38:46 crc kubenswrapper[4624]: I1008 14:38:46.815267 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba6b18e7-9e41-4ee6-9978-e54e6f0ca879-config\") pod \"dnsmasq-dns-67f8579c9-2x5cl\" (UID: \"ba6b18e7-9e41-4ee6-9978-e54e6f0ca879\") " pod="openstack/dnsmasq-dns-67f8579c9-2x5cl" Oct 08 14:38:46 crc kubenswrapper[4624]: I1008 14:38:46.815443 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-596j9\" (UniqueName: \"kubernetes.io/projected/ba6b18e7-9e41-4ee6-9978-e54e6f0ca879-kube-api-access-596j9\") pod \"dnsmasq-dns-67f8579c9-2x5cl\" (UID: \"ba6b18e7-9e41-4ee6-9978-e54e6f0ca879\") " pod="openstack/dnsmasq-dns-67f8579c9-2x5cl" Oct 08 14:38:46 crc kubenswrapper[4624]: I1008 14:38:46.916824 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba6b18e7-9e41-4ee6-9978-e54e6f0ca879-config\") pod \"dnsmasq-dns-67f8579c9-2x5cl\" (UID: \"ba6b18e7-9e41-4ee6-9978-e54e6f0ca879\") " pod="openstack/dnsmasq-dns-67f8579c9-2x5cl" Oct 08 14:38:46 crc kubenswrapper[4624]: I1008 14:38:46.917275 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a37fbac5-8a85-4a8b-a211-59097007ce15-config\") pod \"dnsmasq-dns-5468b776f7-mwk97\" (UID: \"a37fbac5-8a85-4a8b-a211-59097007ce15\") " pod="openstack/dnsmasq-dns-5468b776f7-mwk97" Oct 08 14:38:46 crc kubenswrapper[4624]: I1008 14:38:46.917339 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-596j9\" (UniqueName: \"kubernetes.io/projected/ba6b18e7-9e41-4ee6-9978-e54e6f0ca879-kube-api-access-596j9\") pod \"dnsmasq-dns-67f8579c9-2x5cl\" (UID: \"ba6b18e7-9e41-4ee6-9978-e54e6f0ca879\") " pod="openstack/dnsmasq-dns-67f8579c9-2x5cl" Oct 08 14:38:46 crc 
kubenswrapper[4624]: I1008 14:38:46.917360 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a37fbac5-8a85-4a8b-a211-59097007ce15-dns-svc\") pod \"dnsmasq-dns-5468b776f7-mwk97\" (UID: \"a37fbac5-8a85-4a8b-a211-59097007ce15\") " pod="openstack/dnsmasq-dns-5468b776f7-mwk97" Oct 08 14:38:46 crc kubenswrapper[4624]: I1008 14:38:46.917401 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmjfr\" (UniqueName: \"kubernetes.io/projected/a37fbac5-8a85-4a8b-a211-59097007ce15-kube-api-access-tmjfr\") pod \"dnsmasq-dns-5468b776f7-mwk97\" (UID: \"a37fbac5-8a85-4a8b-a211-59097007ce15\") " pod="openstack/dnsmasq-dns-5468b776f7-mwk97" Oct 08 14:38:46 crc kubenswrapper[4624]: I1008 14:38:46.917818 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba6b18e7-9e41-4ee6-9978-e54e6f0ca879-config\") pod \"dnsmasq-dns-67f8579c9-2x5cl\" (UID: \"ba6b18e7-9e41-4ee6-9978-e54e6f0ca879\") " pod="openstack/dnsmasq-dns-67f8579c9-2x5cl" Oct 08 14:38:46 crc kubenswrapper[4624]: I1008 14:38:46.938948 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-596j9\" (UniqueName: \"kubernetes.io/projected/ba6b18e7-9e41-4ee6-9978-e54e6f0ca879-kube-api-access-596j9\") pod \"dnsmasq-dns-67f8579c9-2x5cl\" (UID: \"ba6b18e7-9e41-4ee6-9978-e54e6f0ca879\") " pod="openstack/dnsmasq-dns-67f8579c9-2x5cl" Oct 08 14:38:47 crc kubenswrapper[4624]: I1008 14:38:47.005165 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67f8579c9-2x5cl" Oct 08 14:38:47 crc kubenswrapper[4624]: I1008 14:38:47.019176 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a37fbac5-8a85-4a8b-a211-59097007ce15-config\") pod \"dnsmasq-dns-5468b776f7-mwk97\" (UID: \"a37fbac5-8a85-4a8b-a211-59097007ce15\") " pod="openstack/dnsmasq-dns-5468b776f7-mwk97" Oct 08 14:38:47 crc kubenswrapper[4624]: I1008 14:38:47.019260 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a37fbac5-8a85-4a8b-a211-59097007ce15-dns-svc\") pod \"dnsmasq-dns-5468b776f7-mwk97\" (UID: \"a37fbac5-8a85-4a8b-a211-59097007ce15\") " pod="openstack/dnsmasq-dns-5468b776f7-mwk97" Oct 08 14:38:47 crc kubenswrapper[4624]: I1008 14:38:47.019317 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmjfr\" (UniqueName: \"kubernetes.io/projected/a37fbac5-8a85-4a8b-a211-59097007ce15-kube-api-access-tmjfr\") pod \"dnsmasq-dns-5468b776f7-mwk97\" (UID: \"a37fbac5-8a85-4a8b-a211-59097007ce15\") " pod="openstack/dnsmasq-dns-5468b776f7-mwk97" Oct 08 14:38:47 crc kubenswrapper[4624]: I1008 14:38:47.020612 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a37fbac5-8a85-4a8b-a211-59097007ce15-config\") pod \"dnsmasq-dns-5468b776f7-mwk97\" (UID: \"a37fbac5-8a85-4a8b-a211-59097007ce15\") " pod="openstack/dnsmasq-dns-5468b776f7-mwk97" Oct 08 14:38:47 crc kubenswrapper[4624]: I1008 14:38:47.020712 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a37fbac5-8a85-4a8b-a211-59097007ce15-dns-svc\") pod \"dnsmasq-dns-5468b776f7-mwk97\" (UID: 
\"a37fbac5-8a85-4a8b-a211-59097007ce15\") " pod="openstack/dnsmasq-dns-5468b776f7-mwk97" Oct 08 14:38:47 crc kubenswrapper[4624]: I1008 14:38:47.040492 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmjfr\" (UniqueName: \"kubernetes.io/projected/a37fbac5-8a85-4a8b-a211-59097007ce15-kube-api-access-tmjfr\") pod \"dnsmasq-dns-5468b776f7-mwk97\" (UID: \"a37fbac5-8a85-4a8b-a211-59097007ce15\") " pod="openstack/dnsmasq-dns-5468b776f7-mwk97" Oct 08 14:38:47 crc kubenswrapper[4624]: I1008 14:38:47.085425 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5468b776f7-mwk97" Oct 08 14:38:47 crc kubenswrapper[4624]: I1008 14:38:47.309393 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67f8579c9-2x5cl"] Oct 08 14:38:47 crc kubenswrapper[4624]: I1008 14:38:47.382144 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5468b776f7-mwk97"] Oct 08 14:38:47 crc kubenswrapper[4624]: W1008 14:38:47.383305 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda37fbac5_8a85_4a8b_a211_59097007ce15.slice/crio-54b7471dcca6fef1a4cedfce5a5c1ebb0d223028563add46bfdbf985efa7e948 WatchSource:0}: Error finding container 54b7471dcca6fef1a4cedfce5a5c1ebb0d223028563add46bfdbf985efa7e948: Status 404 returned error can't find the container with id 54b7471dcca6fef1a4cedfce5a5c1ebb0d223028563add46bfdbf985efa7e948 Oct 08 14:38:48 crc kubenswrapper[4624]: I1008 14:38:48.287967 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67f8579c9-2x5cl" event={"ID":"ba6b18e7-9e41-4ee6-9978-e54e6f0ca879","Type":"ContainerStarted","Data":"703e9b6f4da39ebcba4686ab24f8f7a0b94192aecdae66538e7c04dbd2fff197"} Oct 08 14:38:48 crc kubenswrapper[4624]: I1008 14:38:48.289954 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5468b776f7-mwk97" event={"ID":"a37fbac5-8a85-4a8b-a211-59097007ce15","Type":"ContainerStarted","Data":"54b7471dcca6fef1a4cedfce5a5c1ebb0d223028563add46bfdbf985efa7e948"} Oct 08 14:38:49 crc kubenswrapper[4624]: I1008 14:38:49.431009 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5468b776f7-mwk97"] Oct 08 14:38:49 crc kubenswrapper[4624]: I1008 14:38:49.460885 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8644f8f897-4b6st"] Oct 08 14:38:49 crc kubenswrapper[4624]: I1008 14:38:49.513754 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8644f8f897-4b6st" Oct 08 14:38:49 crc kubenswrapper[4624]: I1008 14:38:49.575253 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8644f8f897-4b6st"] Oct 08 14:38:49 crc kubenswrapper[4624]: I1008 14:38:49.590226 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8cn5\" (UniqueName: \"kubernetes.io/projected/48748e1e-577a-4e2d-bd61-b80181dd1dfb-kube-api-access-c8cn5\") pod \"dnsmasq-dns-8644f8f897-4b6st\" (UID: \"48748e1e-577a-4e2d-bd61-b80181dd1dfb\") " pod="openstack/dnsmasq-dns-8644f8f897-4b6st" Oct 08 14:38:49 crc kubenswrapper[4624]: I1008 14:38:49.590353 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48748e1e-577a-4e2d-bd61-b80181dd1dfb-config\") pod \"dnsmasq-dns-8644f8f897-4b6st\" (UID: \"48748e1e-577a-4e2d-bd61-b80181dd1dfb\") " pod="openstack/dnsmasq-dns-8644f8f897-4b6st" Oct 08 14:38:49 crc kubenswrapper[4624]: I1008 14:38:49.590382 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48748e1e-577a-4e2d-bd61-b80181dd1dfb-dns-svc\") pod \"dnsmasq-dns-8644f8f897-4b6st\" (UID: \"48748e1e-577a-4e2d-bd61-b80181dd1dfb\") " pod="openstack/dnsmasq-dns-8644f8f897-4b6st" Oct 08 14:38:49 crc kubenswrapper[4624]: I1008 14:38:49.691409 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48748e1e-577a-4e2d-bd61-b80181dd1dfb-config\") pod \"dnsmasq-dns-8644f8f897-4b6st\" (UID: \"48748e1e-577a-4e2d-bd61-b80181dd1dfb\") " pod="openstack/dnsmasq-dns-8644f8f897-4b6st" Oct 08 14:38:49 crc kubenswrapper[4624]: I1008 14:38:49.691452 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48748e1e-577a-4e2d-bd61-b80181dd1dfb-dns-svc\") pod \"dnsmasq-dns-8644f8f897-4b6st\" (UID: \"48748e1e-577a-4e2d-bd61-b80181dd1dfb\") " pod="openstack/dnsmasq-dns-8644f8f897-4b6st" Oct 08 14:38:49 crc kubenswrapper[4624]: I1008 14:38:49.691527 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8cn5\" (UniqueName: \"kubernetes.io/projected/48748e1e-577a-4e2d-bd61-b80181dd1dfb-kube-api-access-c8cn5\") pod \"dnsmasq-dns-8644f8f897-4b6st\" (UID: \"48748e1e-577a-4e2d-bd61-b80181dd1dfb\") " pod="openstack/dnsmasq-dns-8644f8f897-4b6st" Oct 08 14:38:49 crc kubenswrapper[4624]: I1008 14:38:49.693753 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48748e1e-577a-4e2d-bd61-b80181dd1dfb-config\") pod \"dnsmasq-dns-8644f8f897-4b6st\" (UID: \"48748e1e-577a-4e2d-bd61-b80181dd1dfb\") " pod="openstack/dnsmasq-dns-8644f8f897-4b6st" Oct 08 14:38:49 crc kubenswrapper[4624]: I1008 14:38:49.694295 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48748e1e-577a-4e2d-bd61-b80181dd1dfb-dns-svc\") pod \"dnsmasq-dns-8644f8f897-4b6st\" (UID: \"48748e1e-577a-4e2d-bd61-b80181dd1dfb\") " pod="openstack/dnsmasq-dns-8644f8f897-4b6st" Oct 08 14:38:49 crc kubenswrapper[4624]: I1008 14:38:49.719532 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8cn5\" (UniqueName: 
\"kubernetes.io/projected/48748e1e-577a-4e2d-bd61-b80181dd1dfb-kube-api-access-c8cn5\") pod \"dnsmasq-dns-8644f8f897-4b6st\" (UID: \"48748e1e-577a-4e2d-bd61-b80181dd1dfb\") " pod="openstack/dnsmasq-dns-8644f8f897-4b6st" Oct 08 14:38:49 crc kubenswrapper[4624]: I1008 14:38:49.803168 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67f8579c9-2x5cl"] Oct 08 14:38:49 crc kubenswrapper[4624]: I1008 14:38:49.839673 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b857bcbc9-2f55q"] Oct 08 14:38:49 crc kubenswrapper[4624]: I1008 14:38:49.841057 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b857bcbc9-2f55q" Oct 08 14:38:49 crc kubenswrapper[4624]: I1008 14:38:49.860661 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8644f8f897-4b6st" Oct 08 14:38:49 crc kubenswrapper[4624]: I1008 14:38:49.870246 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b857bcbc9-2f55q"] Oct 08 14:38:49 crc kubenswrapper[4624]: I1008 14:38:49.912305 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ed2842b-0391-43ac-b25d-2de3a1a994a2-config\") pod \"dnsmasq-dns-5b857bcbc9-2f55q\" (UID: \"8ed2842b-0391-43ac-b25d-2de3a1a994a2\") " pod="openstack/dnsmasq-dns-5b857bcbc9-2f55q" Oct 08 14:38:49 crc kubenswrapper[4624]: I1008 14:38:49.912355 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ed2842b-0391-43ac-b25d-2de3a1a994a2-dns-svc\") pod \"dnsmasq-dns-5b857bcbc9-2f55q\" (UID: \"8ed2842b-0391-43ac-b25d-2de3a1a994a2\") " pod="openstack/dnsmasq-dns-5b857bcbc9-2f55q" Oct 08 14:38:49 crc kubenswrapper[4624]: I1008 14:38:49.912372 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xwgj\" (UniqueName: \"kubernetes.io/projected/8ed2842b-0391-43ac-b25d-2de3a1a994a2-kube-api-access-8xwgj\") pod \"dnsmasq-dns-5b857bcbc9-2f55q\" (UID: \"8ed2842b-0391-43ac-b25d-2de3a1a994a2\") " pod="openstack/dnsmasq-dns-5b857bcbc9-2f55q" Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.013481 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ed2842b-0391-43ac-b25d-2de3a1a994a2-config\") pod \"dnsmasq-dns-5b857bcbc9-2f55q\" (UID: \"8ed2842b-0391-43ac-b25d-2de3a1a994a2\") " pod="openstack/dnsmasq-dns-5b857bcbc9-2f55q" Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.013537 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ed2842b-0391-43ac-b25d-2de3a1a994a2-dns-svc\") pod \"dnsmasq-dns-5b857bcbc9-2f55q\" (UID: \"8ed2842b-0391-43ac-b25d-2de3a1a994a2\") " pod="openstack/dnsmasq-dns-5b857bcbc9-2f55q" Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.013563 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xwgj\" (UniqueName: \"kubernetes.io/projected/8ed2842b-0391-43ac-b25d-2de3a1a994a2-kube-api-access-8xwgj\") pod \"dnsmasq-dns-5b857bcbc9-2f55q\" (UID: \"8ed2842b-0391-43ac-b25d-2de3a1a994a2\") " pod="openstack/dnsmasq-dns-5b857bcbc9-2f55q" Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.014587 4624 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ed2842b-0391-43ac-b25d-2de3a1a994a2-config\") pod \"dnsmasq-dns-5b857bcbc9-2f55q\" (UID: \"8ed2842b-0391-43ac-b25d-2de3a1a994a2\") " pod="openstack/dnsmasq-dns-5b857bcbc9-2f55q" Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.015163 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ed2842b-0391-43ac-b25d-2de3a1a994a2-dns-svc\") pod \"dnsmasq-dns-5b857bcbc9-2f55q\" (UID: \"8ed2842b-0391-43ac-b25d-2de3a1a994a2\") " pod="openstack/dnsmasq-dns-5b857bcbc9-2f55q" Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.067543 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xwgj\" (UniqueName: \"kubernetes.io/projected/8ed2842b-0391-43ac-b25d-2de3a1a994a2-kube-api-access-8xwgj\") pod \"dnsmasq-dns-5b857bcbc9-2f55q\" (UID: \"8ed2842b-0391-43ac-b25d-2de3a1a994a2\") " pod="openstack/dnsmasq-dns-5b857bcbc9-2f55q" Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.188129 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b857bcbc9-2f55q" Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.620119 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.622381 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.625340 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.625649 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.625848 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7sc5j" Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.625960 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.626141 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.626304 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.626410 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.639247 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.686162 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8644f8f897-4b6st"] Oct 08 14:38:50 crc kubenswrapper[4624]: W1008 14:38:50.700188 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48748e1e_577a_4e2d_bd61_b80181dd1dfb.slice/crio-3a921cfb2813b9aebf0337e5946506156b5eca3ad2a76f23e22ab85ee260ae00 WatchSource:0}: Error finding container 3a921cfb2813b9aebf0337e5946506156b5eca3ad2a76f23e22ab85ee260ae00: Status 404 returned error can't find the container with id 3a921cfb2813b9aebf0337e5946506156b5eca3ad2a76f23e22ab85ee260ae00 
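The kubenswrapper entries in this journal are klog structured records: a quoted message followed by key=value pairs, with pod lifecycle transitions surfacing as "SyncLoop (PLEG): event for pod" lines like the two ContainerStarted events above. A minimal, hypothetical Go sketch for pulling those PLEG events out of an excerpt of this journal saved to a file; the file name kubelet.log and the regular expression are assumptions for illustration, not anything taken from the log itself:

    // plegevents.go: scan a saved kubelet journal excerpt and print the
    // "SyncLoop (PLEG)" pod events (a sketch, assuming the klog line layout
    // visible in the entries above).
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        f, err := os.Open("kubelet.log") // assumed: journal excerpt saved locally
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // Capture the pod name and the event payload from lines shaped like:
        //   ... "SyncLoop (PLEG): event for pod" pod="ns/name" event={"ID":...}
        re := regexp.MustCompile(`"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=(.+)`)

        sc := bufio.NewScanner(f)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                fmt.Printf("pod=%s event=%s\n", m[1], m[2])
            }
        }
        if err := sc.Err(); err != nil {
            panic(err)
        }
    }

Run over the excerpt above, this would print the ContainerStarted events for openstack/dnsmasq-dns-67f8579c9-2x5cl and openstack/dnsmasq-dns-5468b776f7-mwk97 together with their sandbox IDs.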
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.738664 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0a30b9e8-eac9-4cc7-9197-190a5fea5638-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.738730 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0a30b9e8-eac9-4cc7-9197-190a5fea5638-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.738809 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.738837 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0a30b9e8-eac9-4cc7-9197-190a5fea5638-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.738859 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.738883 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.738924 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsdbr\" (UniqueName: \"kubernetes.io/projected/0a30b9e8-eac9-4cc7-9197-190a5fea5638-kube-api-access-gsdbr\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.738964 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.739001 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.739383 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a30b9e8-eac9-4cc7-9197-190a5fea5638-config-data\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.740827 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0a30b9e8-eac9-4cc7-9197-190a5fea5638-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.798865 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b857bcbc9-2f55q"]
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.842273 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0a30b9e8-eac9-4cc7-9197-190a5fea5638-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.842334 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.842359 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0a30b9e8-eac9-4cc7-9197-190a5fea5638-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.842380 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.842406 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.842446 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsdbr\" (UniqueName: \"kubernetes.io/projected/0a30b9e8-eac9-4cc7-9197-190a5fea5638-kube-api-access-gsdbr\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.842491 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.842528 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.842564 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a30b9e8-eac9-4cc7-9197-190a5fea5638-config-data\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.842615 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0a30b9e8-eac9-4cc7-9197-190a5fea5638-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.842679 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0a30b9e8-eac9-4cc7-9197-190a5fea5638-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.843267 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.844405 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.845092 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a30b9e8-eac9-4cc7-9197-190a5fea5638-config-data\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.845418 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0a30b9e8-eac9-4cc7-9197-190a5fea5638-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.846192 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.846746 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0a30b9e8-eac9-4cc7-9197-190a5fea5638-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.848361 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0a30b9e8-eac9-4cc7-9197-190a5fea5638-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.850052 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.857256 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.858494 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0a30b9e8-eac9-4cc7-9197-190a5fea5638-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.862231 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsdbr\" (UniqueName: \"kubernetes.io/projected/0a30b9e8-eac9-4cc7-9197-190a5fea5638-kube-api-access-gsdbr\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.876751 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " pod="openstack/rabbitmq-server-0"
Oct 08 14:38:50 crc kubenswrapper[4624]: I1008 14:38:50.963956 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.062239 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.064723 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.070400 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.070680 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.070400 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.070948 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.070957 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.070996 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.071373 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fqz6q"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.093405 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.150524 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.150570 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5b83d38-1c23-4c71-8629-1ce512ce32f3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.150618 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.150740 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5b83d38-1c23-4c71-8629-1ce512ce32f3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.150771 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pz2f\" (UniqueName: \"kubernetes.io/projected/a5b83d38-1c23-4c71-8629-1ce512ce32f3-kube-api-access-9pz2f\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.150801 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.150827 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5b83d38-1c23-4c71-8629-1ce512ce32f3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.150940 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.151007 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.151076 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5b83d38-1c23-4c71-8629-1ce512ce32f3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.151199 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5b83d38-1c23-4c71-8629-1ce512ce32f3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.252391 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5b83d38-1c23-4c71-8629-1ce512ce32f3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.252472 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5b83d38-1c23-4c71-8629-1ce512ce32f3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.252515 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.252540 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5b83d38-1c23-4c71-8629-1ce512ce32f3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.252574 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.252600 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5b83d38-1c23-4c71-8629-1ce512ce32f3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.252624 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pz2f\" (UniqueName: \"kubernetes.io/projected/a5b83d38-1c23-4c71-8629-1ce512ce32f3-kube-api-access-9pz2f\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.252676 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.252702 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5b83d38-1c23-4c71-8629-1ce512ce32f3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.252740 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.252769 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.254129 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.255231 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.255777 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.255870 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5b83d38-1c23-4c71-8629-1ce512ce32f3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.256131 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5b83d38-1c23-4c71-8629-1ce512ce32f3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.256566 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5b83d38-1c23-4c71-8629-1ce512ce32f3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.263067 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5b83d38-1c23-4c71-8629-1ce512ce32f3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.263091 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.263438 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5b83d38-1c23-4c71-8629-1ce512ce32f3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.263885 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.275200 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.279518 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pz2f\" (UniqueName: \"kubernetes.io/projected/a5b83d38-1c23-4c71-8629-1ce512ce32f3-kube-api-access-9pz2f\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.332820 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b857bcbc9-2f55q" event={"ID":"8ed2842b-0391-43ac-b25d-2de3a1a994a2","Type":"ContainerStarted","Data":"0b1027ccd8b40f7e15d67058b1d1378ceb8aa45ecaec0b6dba9ee06e2ad81483"}
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.334167 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8644f8f897-4b6st" event={"ID":"48748e1e-577a-4e2d-bd61-b80181dd1dfb","Type":"ContainerStarted","Data":"3a921cfb2813b9aebf0337e5946506156b5eca3ad2a76f23e22ab85ee260ae00"}
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.399823 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Oct 08 14:38:51 crc kubenswrapper[4624]: I1008 14:38:51.500687 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.748068 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.750177 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.760323 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-db2dj"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.760516 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.760653 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.761550 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.762789 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.763090 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.764771 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.863726 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.865036 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.868017 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.868248 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.868369 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.868493 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-2bwhs"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.883040 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.890701 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/91c69013-9ea8-41d8-a439-c85e7ab45e06-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.890804 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/91c69013-9ea8-41d8-a439-c85e7ab45e06-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.890842 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/91c69013-9ea8-41d8-a439-c85e7ab45e06-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.890898 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91c69013-9ea8-41d8-a439-c85e7ab45e06-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.890923 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/91c69013-9ea8-41d8-a439-c85e7ab45e06-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.891416 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.891460 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/91c69013-9ea8-41d8-a439-c85e7ab45e06-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.891490 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tcfn\" (UniqueName: \"kubernetes.io/projected/91c69013-9ea8-41d8-a439-c85e7ab45e06-kube-api-access-6tcfn\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.891584 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c69013-9ea8-41d8-a439-c85e7ab45e06-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.992694 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c69013-9ea8-41d8-a439-c85e7ab45e06-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.992750 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/91c69013-9ea8-41d8-a439-c85e7ab45e06-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.992795 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.992816 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/91c69013-9ea8-41d8-a439-c85e7ab45e06-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.992839 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/91c69013-9ea8-41d8-a439-c85e7ab45e06-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.992856 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91c69013-9ea8-41d8-a439-c85e7ab45e06-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.992873 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/91c69013-9ea8-41d8-a439-c85e7ab45e06-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.992894 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.992923 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-secrets\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.992968 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5wks\" (UniqueName: \"kubernetes.io/projected/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-kube-api-access-f5wks\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.992989 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.993004 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/91c69013-9ea8-41d8-a439-c85e7ab45e06-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.993021 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tcfn\" (UniqueName: \"kubernetes.io/projected/91c69013-9ea8-41d8-a439-c85e7ab45e06-kube-api-access-6tcfn\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.993039 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.993057 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-kolla-config\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.993074 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.993094 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.993120 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-config-data-default\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.993562 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.993938 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/91c69013-9ea8-41d8-a439-c85e7ab45e06-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.994318 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c69013-9ea8-41d8-a439-c85e7ab45e06-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:53 crc kubenswrapper[4624]: I1008 14:38:53.994914 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/91c69013-9ea8-41d8-a439-c85e7ab45e06-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:53.998510 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/91c69013-9ea8-41d8-a439-c85e7ab45e06-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.003425 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/91c69013-9ea8-41d8-a439-c85e7ab45e06-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.004478 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91c69013-9ea8-41d8-a439-c85e7ab45e06-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.011924 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/91c69013-9ea8-41d8-a439-c85e7ab45e06-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.018295 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tcfn\" (UniqueName: \"kubernetes.io/projected/91c69013-9ea8-41d8-a439-c85e7ab45e06-kube-api-access-6tcfn\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.038914 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"91c69013-9ea8-41d8-a439-c85e7ab45e06\") " pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.077805 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.094981 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.095043 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.095074 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-secrets\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.095106 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5wks\" (UniqueName: \"kubernetes.io/projected/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-kube-api-access-f5wks\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.095134 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.095148 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-kolla-config\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.095162 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.095183 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.095209 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-config-data-default\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.096090 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.096912 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-kolla-config\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.097010 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.098444 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.100546 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-config-data-default\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.106785 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-secrets\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.109831 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.128350 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.134463 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5wks\" (UniqueName: \"kubernetes.io/projected/aeedef5f-f3c5-41a3-9a36-bc3830eb12c7-kube-api-access-f5wks\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.167136 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7\") " pod="openstack/openstack-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.186880 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.212303 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.213770 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.219577 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.221532 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-s6xjp"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.221997 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.232955 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.400156 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c8ff67a-4da2-47d4-9f73-d7842cdf2712-config-data\") pod \"memcached-0\" (UID: \"5c8ff67a-4da2-47d4-9f73-d7842cdf2712\") " pod="openstack/memcached-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.400231 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vrbc\" (UniqueName: \"kubernetes.io/projected/5c8ff67a-4da2-47d4-9f73-d7842cdf2712-kube-api-access-9vrbc\") pod \"memcached-0\" (UID: \"5c8ff67a-4da2-47d4-9f73-d7842cdf2712\") " pod="openstack/memcached-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.400255 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c8ff67a-4da2-47d4-9f73-d7842cdf2712-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5c8ff67a-4da2-47d4-9f73-d7842cdf2712\") " pod="openstack/memcached-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.400318 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5c8ff67a-4da2-47d4-9f73-d7842cdf2712-kolla-config\") pod \"memcached-0\" (UID: \"5c8ff67a-4da2-47d4-9f73-d7842cdf2712\") " pod="openstack/memcached-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.400342 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c8ff67a-4da2-47d4-9f73-d7842cdf2712-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5c8ff67a-4da2-47d4-9f73-d7842cdf2712\") " pod="openstack/memcached-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.501876 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vrbc\" (UniqueName: \"kubernetes.io/projected/5c8ff67a-4da2-47d4-9f73-d7842cdf2712-kube-api-access-9vrbc\") pod \"memcached-0\" (UID: \"5c8ff67a-4da2-47d4-9f73-d7842cdf2712\") " pod="openstack/memcached-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.501928 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c8ff67a-4da2-47d4-9f73-d7842cdf2712-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5c8ff67a-4da2-47d4-9f73-d7842cdf2712\") " pod="openstack/memcached-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.502001 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5c8ff67a-4da2-47d4-9f73-d7842cdf2712-kolla-config\") pod \"memcached-0\" (UID: \"5c8ff67a-4da2-47d4-9f73-d7842cdf2712\") " pod="openstack/memcached-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.502028 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c8ff67a-4da2-47d4-9f73-d7842cdf2712-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5c8ff67a-4da2-47d4-9f73-d7842cdf2712\") " pod="openstack/memcached-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.502092 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c8ff67a-4da2-47d4-9f73-d7842cdf2712-config-data\") pod \"memcached-0\" (UID: \"5c8ff67a-4da2-47d4-9f73-d7842cdf2712\") " pod="openstack/memcached-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.503123 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c8ff67a-4da2-47d4-9f73-d7842cdf2712-config-data\") pod \"memcached-0\" (UID: \"5c8ff67a-4da2-47d4-9f73-d7842cdf2712\") " pod="openstack/memcached-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.503732 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5c8ff67a-4da2-47d4-9f73-d7842cdf2712-kolla-config\") pod \"memcached-0\" (UID: \"5c8ff67a-4da2-47d4-9f73-d7842cdf2712\") " pod="openstack/memcached-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.505882 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c8ff67a-4da2-47d4-9f73-d7842cdf2712-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5c8ff67a-4da2-47d4-9f73-d7842cdf2712\") " pod="openstack/memcached-0"
Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.506272 4624
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c8ff67a-4da2-47d4-9f73-d7842cdf2712-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5c8ff67a-4da2-47d4-9f73-d7842cdf2712\") " pod="openstack/memcached-0" Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.525315 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vrbc\" (UniqueName: \"kubernetes.io/projected/5c8ff67a-4da2-47d4-9f73-d7842cdf2712-kube-api-access-9vrbc\") pod \"memcached-0\" (UID: \"5c8ff67a-4da2-47d4-9f73-d7842cdf2712\") " pod="openstack/memcached-0" Oct 08 14:38:54 crc kubenswrapper[4624]: I1008 14:38:54.536147 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 08 14:38:54 crc kubenswrapper[4624]: W1008 14:38:54.917202 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a30b9e8_eac9_4cc7_9197_190a5fea5638.slice/crio-4f778da0b42c3f06308ee181849c35da55e1e528d349c274d394e81e308083bf WatchSource:0}: Error finding container 4f778da0b42c3f06308ee181849c35da55e1e528d349c274d394e81e308083bf: Status 404 returned error can't find the container with id 4f778da0b42c3f06308ee181849c35da55e1e528d349c274d394e81e308083bf Oct 08 14:38:55 crc kubenswrapper[4624]: I1008 14:38:55.374672 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0a30b9e8-eac9-4cc7-9197-190a5fea5638","Type":"ContainerStarted","Data":"4f778da0b42c3f06308ee181849c35da55e1e528d349c274d394e81e308083bf"} Oct 08 14:38:56 crc kubenswrapper[4624]: I1008 14:38:56.551429 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 14:38:56 crc kubenswrapper[4624]: I1008 14:38:56.552573 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 14:38:56 crc kubenswrapper[4624]: I1008 14:38:56.560000 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-7nm6l" Oct 08 14:38:56 crc kubenswrapper[4624]: I1008 14:38:56.587249 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 14:38:56 crc kubenswrapper[4624]: I1008 14:38:56.739929 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2lv5\" (UniqueName: \"kubernetes.io/projected/65ff7a00-cdff-4601-b7c5-31e1d271cfbb-kube-api-access-t2lv5\") pod \"kube-state-metrics-0\" (UID: \"65ff7a00-cdff-4601-b7c5-31e1d271cfbb\") " pod="openstack/kube-state-metrics-0" Oct 08 14:38:56 crc kubenswrapper[4624]: I1008 14:38:56.844742 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2lv5\" (UniqueName: \"kubernetes.io/projected/65ff7a00-cdff-4601-b7c5-31e1d271cfbb-kube-api-access-t2lv5\") pod \"kube-state-metrics-0\" (UID: \"65ff7a00-cdff-4601-b7c5-31e1d271cfbb\") " pod="openstack/kube-state-metrics-0" Oct 08 14:38:56 crc kubenswrapper[4624]: I1008 14:38:56.887929 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2lv5\" (UniqueName: \"kubernetes.io/projected/65ff7a00-cdff-4601-b7c5-31e1d271cfbb-kube-api-access-t2lv5\") pod \"kube-state-metrics-0\" (UID: \"65ff7a00-cdff-4601-b7c5-31e1d271cfbb\") " pod="openstack/kube-state-metrics-0" Oct 08 14:38:57 crc kubenswrapper[4624]: I1008 14:38:57.174202 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.757779 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-c4zfm"] Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.760004 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-c4zfm" Oct 08 14:38:58 crc kubenswrapper[4624]: W1008 14:38:58.761984 4624 reflector.go:561] object-"openstack"/"ovncontroller-ovncontroller-dockercfg-8jgv2": failed to list *v1.Secret: secrets "ovncontroller-ovncontroller-dockercfg-8jgv2" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Oct 08 14:38:58 crc kubenswrapper[4624]: E1008 14:38:58.762024 4624 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncontroller-ovncontroller-dockercfg-8jgv2\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovncontroller-ovncontroller-dockercfg-8jgv2\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 14:38:58 crc kubenswrapper[4624]: W1008 14:38:58.762255 4624 reflector.go:561] object-"openstack"/"ovncontroller-scripts": failed to list *v1.ConfigMap: configmaps "ovncontroller-scripts" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Oct 08 14:38:58 crc kubenswrapper[4624]: E1008 14:38:58.762277 4624 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncontroller-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovncontroller-scripts\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 14:38:58 crc kubenswrapper[4624]: W1008 14:38:58.762310 4624 reflector.go:561] object-"openstack"/"cert-ovncontroller-ovndbs": failed to list *v1.Secret: secrets "cert-ovncontroller-ovndbs" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Oct 08 14:38:58 crc kubenswrapper[4624]: E1008 14:38:58.762322 4624 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-ovncontroller-ovndbs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-ovncontroller-ovndbs\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.775481 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c4zfm"] Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.848325 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-jhpjx"] Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.850430 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.864158 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jhpjx"] Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.883025 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c5312bac-042b-48c5-bf82-1f565e25f11e-var-run-ovn\") pod \"ovn-controller-c4zfm\" (UID: \"c5312bac-042b-48c5-bf82-1f565e25f11e\") " pod="openstack/ovn-controller-c4zfm" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.883099 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5312bac-042b-48c5-bf82-1f565e25f11e-combined-ca-bundle\") pod \"ovn-controller-c4zfm\" (UID: \"c5312bac-042b-48c5-bf82-1f565e25f11e\") " pod="openstack/ovn-controller-c4zfm" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.883137 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c5312bac-042b-48c5-bf82-1f565e25f11e-var-log-ovn\") pod \"ovn-controller-c4zfm\" (UID: \"c5312bac-042b-48c5-bf82-1f565e25f11e\") " pod="openstack/ovn-controller-c4zfm" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.883158 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5312bac-042b-48c5-bf82-1f565e25f11e-ovn-controller-tls-certs\") pod \"ovn-controller-c4zfm\" (UID: \"c5312bac-042b-48c5-bf82-1f565e25f11e\") " pod="openstack/ovn-controller-c4zfm" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.883206 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5312bac-042b-48c5-bf82-1f565e25f11e-scripts\") pod \"ovn-controller-c4zfm\" (UID: \"c5312bac-042b-48c5-bf82-1f565e25f11e\") " pod="openstack/ovn-controller-c4zfm" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.883275 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c5312bac-042b-48c5-bf82-1f565e25f11e-var-run\") pod \"ovn-controller-c4zfm\" (UID: \"c5312bac-042b-48c5-bf82-1f565e25f11e\") " pod="openstack/ovn-controller-c4zfm" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.883307 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgcc7\" (UniqueName: \"kubernetes.io/projected/c5312bac-042b-48c5-bf82-1f565e25f11e-kube-api-access-xgcc7\") pod \"ovn-controller-c4zfm\" (UID: \"c5312bac-042b-48c5-bf82-1f565e25f11e\") " pod="openstack/ovn-controller-c4zfm" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.985081 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c5312bac-042b-48c5-bf82-1f565e25f11e-var-run-ovn\") pod \"ovn-controller-c4zfm\" (UID: \"c5312bac-042b-48c5-bf82-1f565e25f11e\") " pod="openstack/ovn-controller-c4zfm" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.985127 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/72a8ef8e-f40d-4206-af95-2f636093ed51-var-log\") pod \"ovn-controller-ovs-jhpjx\" (UID: \"72a8ef8e-f40d-4206-af95-2f636093ed51\") " pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.985171 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5312bac-042b-48c5-bf82-1f565e25f11e-combined-ca-bundle\") pod \"ovn-controller-c4zfm\" (UID: \"c5312bac-042b-48c5-bf82-1f565e25f11e\") " pod="openstack/ovn-controller-c4zfm" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.985211 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/72a8ef8e-f40d-4206-af95-2f636093ed51-scripts\") pod \"ovn-controller-ovs-jhpjx\" (UID: \"72a8ef8e-f40d-4206-af95-2f636093ed51\") " pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.985241 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c5312bac-042b-48c5-bf82-1f565e25f11e-var-log-ovn\") pod \"ovn-controller-c4zfm\" (UID: \"c5312bac-042b-48c5-bf82-1f565e25f11e\") " pod="openstack/ovn-controller-c4zfm" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.985264 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5312bac-042b-48c5-bf82-1f565e25f11e-ovn-controller-tls-certs\") pod \"ovn-controller-c4zfm\" (UID: \"c5312bac-042b-48c5-bf82-1f565e25f11e\") " pod="openstack/ovn-controller-c4zfm" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.985301 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5312bac-042b-48c5-bf82-1f565e25f11e-scripts\") pod \"ovn-controller-c4zfm\" (UID: \"c5312bac-042b-48c5-bf82-1f565e25f11e\") " pod="openstack/ovn-controller-c4zfm" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.985357 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c5312bac-042b-48c5-bf82-1f565e25f11e-var-run\") pod \"ovn-controller-c4zfm\" (UID: \"c5312bac-042b-48c5-bf82-1f565e25f11e\") " pod="openstack/ovn-controller-c4zfm" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.985380 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgcc7\" (UniqueName: \"kubernetes.io/projected/c5312bac-042b-48c5-bf82-1f565e25f11e-kube-api-access-xgcc7\") pod \"ovn-controller-c4zfm\" (UID: \"c5312bac-042b-48c5-bf82-1f565e25f11e\") " pod="openstack/ovn-controller-c4zfm" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.985410 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/72a8ef8e-f40d-4206-af95-2f636093ed51-var-lib\") pod \"ovn-controller-ovs-jhpjx\" (UID: \"72a8ef8e-f40d-4206-af95-2f636093ed51\") " pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.985440 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/72a8ef8e-f40d-4206-af95-2f636093ed51-var-run\") pod \"ovn-controller-ovs-jhpjx\" (UID: \"72a8ef8e-f40d-4206-af95-2f636093ed51\") " 
pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.985486 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hn9k\" (UniqueName: \"kubernetes.io/projected/72a8ef8e-f40d-4206-af95-2f636093ed51-kube-api-access-5hn9k\") pod \"ovn-controller-ovs-jhpjx\" (UID: \"72a8ef8e-f40d-4206-af95-2f636093ed51\") " pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.985530 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/72a8ef8e-f40d-4206-af95-2f636093ed51-etc-ovs\") pod \"ovn-controller-ovs-jhpjx\" (UID: \"72a8ef8e-f40d-4206-af95-2f636093ed51\") " pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.985557 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c5312bac-042b-48c5-bf82-1f565e25f11e-var-run-ovn\") pod \"ovn-controller-c4zfm\" (UID: \"c5312bac-042b-48c5-bf82-1f565e25f11e\") " pod="openstack/ovn-controller-c4zfm" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.985693 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c5312bac-042b-48c5-bf82-1f565e25f11e-var-run\") pod \"ovn-controller-c4zfm\" (UID: \"c5312bac-042b-48c5-bf82-1f565e25f11e\") " pod="openstack/ovn-controller-c4zfm" Oct 08 14:38:58 crc kubenswrapper[4624]: I1008 14:38:58.985855 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c5312bac-042b-48c5-bf82-1f565e25f11e-var-log-ovn\") pod \"ovn-controller-c4zfm\" (UID: \"c5312bac-042b-48c5-bf82-1f565e25f11e\") " pod="openstack/ovn-controller-c4zfm" Oct 08 14:38:59 crc kubenswrapper[4624]: I1008 14:38:59.005386 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5312bac-042b-48c5-bf82-1f565e25f11e-combined-ca-bundle\") pod \"ovn-controller-c4zfm\" (UID: \"c5312bac-042b-48c5-bf82-1f565e25f11e\") " pod="openstack/ovn-controller-c4zfm" Oct 08 14:38:59 crc kubenswrapper[4624]: I1008 14:38:59.014131 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgcc7\" (UniqueName: \"kubernetes.io/projected/c5312bac-042b-48c5-bf82-1f565e25f11e-kube-api-access-xgcc7\") pod \"ovn-controller-c4zfm\" (UID: \"c5312bac-042b-48c5-bf82-1f565e25f11e\") " pod="openstack/ovn-controller-c4zfm" Oct 08 14:38:59 crc kubenswrapper[4624]: I1008 14:38:59.086425 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/72a8ef8e-f40d-4206-af95-2f636093ed51-var-lib\") pod \"ovn-controller-ovs-jhpjx\" (UID: \"72a8ef8e-f40d-4206-af95-2f636093ed51\") " pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:38:59 crc kubenswrapper[4624]: I1008 14:38:59.086494 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/72a8ef8e-f40d-4206-af95-2f636093ed51-var-run\") pod \"ovn-controller-ovs-jhpjx\" (UID: \"72a8ef8e-f40d-4206-af95-2f636093ed51\") " pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:38:59 crc kubenswrapper[4624]: I1008 14:38:59.086559 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5hn9k\" (UniqueName: \"kubernetes.io/projected/72a8ef8e-f40d-4206-af95-2f636093ed51-kube-api-access-5hn9k\") pod \"ovn-controller-ovs-jhpjx\" (UID: \"72a8ef8e-f40d-4206-af95-2f636093ed51\") " pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:38:59 crc kubenswrapper[4624]: I1008 14:38:59.086611 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/72a8ef8e-f40d-4206-af95-2f636093ed51-etc-ovs\") pod \"ovn-controller-ovs-jhpjx\" (UID: \"72a8ef8e-f40d-4206-af95-2f636093ed51\") " pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:38:59 crc kubenswrapper[4624]: I1008 14:38:59.086713 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/72a8ef8e-f40d-4206-af95-2f636093ed51-var-log\") pod \"ovn-controller-ovs-jhpjx\" (UID: \"72a8ef8e-f40d-4206-af95-2f636093ed51\") " pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:38:59 crc kubenswrapper[4624]: I1008 14:38:59.086726 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/72a8ef8e-f40d-4206-af95-2f636093ed51-var-run\") pod \"ovn-controller-ovs-jhpjx\" (UID: \"72a8ef8e-f40d-4206-af95-2f636093ed51\") " pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:38:59 crc kubenswrapper[4624]: I1008 14:38:59.086740 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/72a8ef8e-f40d-4206-af95-2f636093ed51-scripts\") pod \"ovn-controller-ovs-jhpjx\" (UID: \"72a8ef8e-f40d-4206-af95-2f636093ed51\") " pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:38:59 crc kubenswrapper[4624]: I1008 14:38:59.087065 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/72a8ef8e-f40d-4206-af95-2f636093ed51-var-lib\") pod \"ovn-controller-ovs-jhpjx\" (UID: \"72a8ef8e-f40d-4206-af95-2f636093ed51\") " pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:38:59 crc kubenswrapper[4624]: I1008 14:38:59.087167 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/72a8ef8e-f40d-4206-af95-2f636093ed51-var-log\") pod \"ovn-controller-ovs-jhpjx\" (UID: \"72a8ef8e-f40d-4206-af95-2f636093ed51\") " pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:38:59 crc kubenswrapper[4624]: I1008 14:38:59.087360 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/72a8ef8e-f40d-4206-af95-2f636093ed51-etc-ovs\") pod \"ovn-controller-ovs-jhpjx\" (UID: \"72a8ef8e-f40d-4206-af95-2f636093ed51\") " pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:38:59 crc kubenswrapper[4624]: I1008 14:38:59.118875 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hn9k\" (UniqueName: \"kubernetes.io/projected/72a8ef8e-f40d-4206-af95-2f636093ed51-kube-api-access-5hn9k\") pod \"ovn-controller-ovs-jhpjx\" (UID: \"72a8ef8e-f40d-4206-af95-2f636093ed51\") " pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:38:59 crc kubenswrapper[4624]: I1008 14:38:59.695322 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-8jgv2" Oct 08 14:38:59 crc kubenswrapper[4624]: I1008 14:38:59.975616 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Oct 08 14:38:59 crc 
kubenswrapper[4624]: I1008 14:38:59.981300 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5312bac-042b-48c5-bf82-1f565e25f11e-ovn-controller-tls-certs\") pod \"ovn-controller-c4zfm\" (UID: \"c5312bac-042b-48c5-bf82-1f565e25f11e\") " pod="openstack/ovn-controller-c4zfm" Oct 08 14:38:59 crc kubenswrapper[4624]: E1008 14:38:59.986249 4624 configmap.go:193] Couldn't get configMap openstack/ovncontroller-scripts: failed to sync configmap cache: timed out waiting for the condition Oct 08 14:38:59 crc kubenswrapper[4624]: E1008 14:38:59.986325 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5312bac-042b-48c5-bf82-1f565e25f11e-scripts podName:c5312bac-042b-48c5-bf82-1f565e25f11e nodeName:}" failed. No retries permitted until 2025-10-08 14:39:00.4863045 +0000 UTC m=+965.637239577 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/c5312bac-042b-48c5-bf82-1f565e25f11e-scripts") pod "ovn-controller-c4zfm" (UID: "c5312bac-042b-48c5-bf82-1f565e25f11e") : failed to sync configmap cache: timed out waiting for the condition Oct 08 14:39:00 crc kubenswrapper[4624]: I1008 14:39:00.076230 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:39:00 crc kubenswrapper[4624]: I1008 14:39:00.076294 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:39:00 crc kubenswrapper[4624]: E1008 14:39:00.087732 4624 configmap.go:193] Couldn't get configMap openstack/ovncontroller-scripts: failed to sync configmap cache: timed out waiting for the condition Oct 08 14:39:00 crc kubenswrapper[4624]: E1008 14:39:00.089089 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/72a8ef8e-f40d-4206-af95-2f636093ed51-scripts podName:72a8ef8e-f40d-4206-af95-2f636093ed51 nodeName:}" failed. No retries permitted until 2025-10-08 14:39:00.589069175 +0000 UTC m=+965.740004252 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/72a8ef8e-f40d-4206-af95-2f636093ed51-scripts") pod "ovn-controller-ovs-jhpjx" (UID: "72a8ef8e-f40d-4206-af95-2f636093ed51") : failed to sync configmap cache: timed out waiting for the condition Oct 08 14:39:00 crc kubenswrapper[4624]: I1008 14:39:00.327680 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Oct 08 14:39:00 crc kubenswrapper[4624]: I1008 14:39:00.506299 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5312bac-042b-48c5-bf82-1f565e25f11e-scripts\") pod \"ovn-controller-c4zfm\" (UID: \"c5312bac-042b-48c5-bf82-1f565e25f11e\") " pod="openstack/ovn-controller-c4zfm" Oct 08 14:39:00 crc kubenswrapper[4624]: I1008 14:39:00.508516 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5312bac-042b-48c5-bf82-1f565e25f11e-scripts\") pod \"ovn-controller-c4zfm\" (UID: \"c5312bac-042b-48c5-bf82-1f565e25f11e\") " pod="openstack/ovn-controller-c4zfm" Oct 08 14:39:00 crc kubenswrapper[4624]: I1008 14:39:00.603095 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c4zfm" Oct 08 14:39:00 crc kubenswrapper[4624]: I1008 14:39:00.608284 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/72a8ef8e-f40d-4206-af95-2f636093ed51-scripts\") pod \"ovn-controller-ovs-jhpjx\" (UID: \"72a8ef8e-f40d-4206-af95-2f636093ed51\") " pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:39:00 crc kubenswrapper[4624]: I1008 14:39:00.610848 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/72a8ef8e-f40d-4206-af95-2f636093ed51-scripts\") pod \"ovn-controller-ovs-jhpjx\" (UID: \"72a8ef8e-f40d-4206-af95-2f636093ed51\") " pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:39:00 crc kubenswrapper[4624]: I1008 14:39:00.667572 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.133059 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.134417 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.136684 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.137055 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.137097 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.137962 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-6hlzh" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.139010 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.151154 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.216998 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/85e0be3f-32a4-42c9-9fe5-f3bfca740477-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.217088 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/85e0be3f-32a4-42c9-9fe5-f3bfca740477-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.217106 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/85e0be3f-32a4-42c9-9fe5-f3bfca740477-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.217128 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85e0be3f-32a4-42c9-9fe5-f3bfca740477-config\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.217182 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.217203 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/85e0be3f-32a4-42c9-9fe5-f3bfca740477-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.217244 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/85e0be3f-32a4-42c9-9fe5-f3bfca740477-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.217305 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ss6j\" (UniqueName: \"kubernetes.io/projected/85e0be3f-32a4-42c9-9fe5-f3bfca740477-kube-api-access-4ss6j\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.318843 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e0be3f-32a4-42c9-9fe5-f3bfca740477-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.319194 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ss6j\" (UniqueName: \"kubernetes.io/projected/85e0be3f-32a4-42c9-9fe5-f3bfca740477-kube-api-access-4ss6j\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.319370 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/85e0be3f-32a4-42c9-9fe5-f3bfca740477-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.319480 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/85e0be3f-32a4-42c9-9fe5-f3bfca740477-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.319579 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/85e0be3f-32a4-42c9-9fe5-f3bfca740477-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.319710 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85e0be3f-32a4-42c9-9fe5-f3bfca740477-config\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.319826 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.319910 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/85e0be3f-32a4-42c9-9fe5-f3bfca740477-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.319996 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/85e0be3f-32a4-42c9-9fe5-f3bfca740477-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.320240 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.320504 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85e0be3f-32a4-42c9-9fe5-f3bfca740477-config\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.320845 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/85e0be3f-32a4-42c9-9fe5-f3bfca740477-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.327616 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/85e0be3f-32a4-42c9-9fe5-f3bfca740477-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.328364 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e0be3f-32a4-42c9-9fe5-f3bfca740477-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.329900 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/85e0be3f-32a4-42c9-9fe5-f3bfca740477-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.339504 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ss6j\" (UniqueName: \"kubernetes.io/projected/85e0be3f-32a4-42c9-9fe5-f3bfca740477-kube-api-access-4ss6j\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.339872 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"85e0be3f-32a4-42c9-9fe5-f3bfca740477\") " pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:01 crc kubenswrapper[4624]: I1008 14:39:01.473427 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.774547 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.776604 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.780321 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.780609 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.780706 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-jndhb" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.780886 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.784947 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.866495 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2799503e-5a0b-4631-9946-f335b8446b53-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.866572 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2799503e-5a0b-4631-9946-f335b8446b53-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.866602 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2799503e-5a0b-4631-9946-f335b8446b53-config\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.866733 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2799503e-5a0b-4631-9946-f335b8446b53-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.868461 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2799503e-5a0b-4631-9946-f335b8446b53-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.868487 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwzvp\" (UniqueName: \"kubernetes.io/projected/2799503e-5a0b-4631-9946-f335b8446b53-kube-api-access-zwzvp\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 
14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.868574 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.868606 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2799503e-5a0b-4631-9946-f335b8446b53-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.970340 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2799503e-5a0b-4631-9946-f335b8446b53-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.970396 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwzvp\" (UniqueName: \"kubernetes.io/projected/2799503e-5a0b-4631-9946-f335b8446b53-kube-api-access-zwzvp\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.970523 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.970559 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2799503e-5a0b-4631-9946-f335b8446b53-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.970600 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2799503e-5a0b-4631-9946-f335b8446b53-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.970655 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2799503e-5a0b-4631-9946-f335b8446b53-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.970683 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2799503e-5a0b-4631-9946-f335b8446b53-config\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.970735 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2799503e-5a0b-4631-9946-f335b8446b53-metrics-certs-tls-certs\") pod 
\"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.971052 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.971871 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2799503e-5a0b-4631-9946-f335b8446b53-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.972540 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2799503e-5a0b-4631-9946-f335b8446b53-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.972546 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2799503e-5a0b-4631-9946-f335b8446b53-config\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.983054 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2799503e-5a0b-4631-9946-f335b8446b53-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.989520 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2799503e-5a0b-4631-9946-f335b8446b53-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:03 crc kubenswrapper[4624]: I1008 14:39:03.999881 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2799503e-5a0b-4631-9946-f335b8446b53-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:04 crc kubenswrapper[4624]: I1008 14:39:04.010512 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwzvp\" (UniqueName: \"kubernetes.io/projected/2799503e-5a0b-4631-9946-f335b8446b53-kube-api-access-zwzvp\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:04 crc kubenswrapper[4624]: I1008 14:39:04.035119 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2799503e-5a0b-4631-9946-f335b8446b53\") " pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:04 crc kubenswrapper[4624]: I1008 14:39:04.146495 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:08 crc kubenswrapper[4624]: E1008 14:39:08.618451 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-neutron-server:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:39:08 crc kubenswrapper[4624]: E1008 14:39:08.618874 4624 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-neutron-server:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:39:08 crc kubenswrapper[4624]: E1008 14:39:08.619032 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.103:5001/podified-antelope-centos9/openstack-neutron-server:b78cfc68a577b1553523c8a70a34e297,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmjfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5468b776f7-mwk97_openstack(a37fbac5-8a85-4a8b-a211-59097007ce15): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 14:39:08 crc kubenswrapper[4624]: E1008 14:39:08.620906 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5468b776f7-mwk97" podUID="a37fbac5-8a85-4a8b-a211-59097007ce15" Oct 08 14:39:08 crc kubenswrapper[4624]: E1008 14:39:08.705654 4624 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-neutron-server:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:39:08 crc kubenswrapper[4624]: E1008 14:39:08.705763 4624 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-neutron-server:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:39:08 crc kubenswrapper[4624]: E1008 14:39:08.705884 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.103:5001/podified-antelope-centos9/openstack-neutron-server:b78cfc68a577b1553523c8a70a34e297,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-596j9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-67f8579c9-2x5cl_openstack(ba6b18e7-9e41-4ee6-9978-e54e6f0ca879): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 14:39:08 crc kubenswrapper[4624]: E1008 14:39:08.707219 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-67f8579c9-2x5cl" podUID="ba6b18e7-9e41-4ee6-9978-e54e6f0ca879" Oct 08 14:39:08 crc kubenswrapper[4624]: I1008 14:39:08.908471 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 08 14:39:08 crc kubenswrapper[4624]: I1008 14:39:08.919006 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 08 14:39:09 crc kubenswrapper[4624]: I1008 14:39:09.059977 4624 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 14:39:09 crc kubenswrapper[4624]: I1008 14:39:09.064868 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 08 14:39:10 crc kubenswrapper[4624]: W1008 14:39:10.120263 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaeedef5f_f3c5_41a3_9a36_bc3830eb12c7.slice/crio-1efd8e642632b4729946ba4017b509d4ae1f52ed290dd68b8d144038506f45f6 WatchSource:0}: Error finding container 1efd8e642632b4729946ba4017b509d4ae1f52ed290dd68b8d144038506f45f6: Status 404 returned error can't find the container with id 1efd8e642632b4729946ba4017b509d4ae1f52ed290dd68b8d144038506f45f6 Oct 08 14:39:10 crc kubenswrapper[4624]: W1008 14:39:10.124421 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91c69013_9ea8_41d8_a439_c85e7ab45e06.slice/crio-b28a2ace741741363f54f013f560225232045208ef62ee63dec2d47cc7c7bfb2 WatchSource:0}: Error finding container b28a2ace741741363f54f013f560225232045208ef62ee63dec2d47cc7c7bfb2: Status 404 returned error can't find the container with id b28a2ace741741363f54f013f560225232045208ef62ee63dec2d47cc7c7bfb2 Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.209146 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5468b776f7-mwk97" Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.223250 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67f8579c9-2x5cl" Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.395043 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmjfr\" (UniqueName: \"kubernetes.io/projected/a37fbac5-8a85-4a8b-a211-59097007ce15-kube-api-access-tmjfr\") pod \"a37fbac5-8a85-4a8b-a211-59097007ce15\" (UID: \"a37fbac5-8a85-4a8b-a211-59097007ce15\") " Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.395124 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba6b18e7-9e41-4ee6-9978-e54e6f0ca879-config\") pod \"ba6b18e7-9e41-4ee6-9978-e54e6f0ca879\" (UID: \"ba6b18e7-9e41-4ee6-9978-e54e6f0ca879\") " Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.395175 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a37fbac5-8a85-4a8b-a211-59097007ce15-dns-svc\") pod \"a37fbac5-8a85-4a8b-a211-59097007ce15\" (UID: \"a37fbac5-8a85-4a8b-a211-59097007ce15\") " Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.395255 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-596j9\" (UniqueName: \"kubernetes.io/projected/ba6b18e7-9e41-4ee6-9978-e54e6f0ca879-kube-api-access-596j9\") pod \"ba6b18e7-9e41-4ee6-9978-e54e6f0ca879\" (UID: \"ba6b18e7-9e41-4ee6-9978-e54e6f0ca879\") " Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.395345 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a37fbac5-8a85-4a8b-a211-59097007ce15-config\") pod \"a37fbac5-8a85-4a8b-a211-59097007ce15\" (UID: \"a37fbac5-8a85-4a8b-a211-59097007ce15\") " Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.396588 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/ba6b18e7-9e41-4ee6-9978-e54e6f0ca879-config" (OuterVolumeSpecName: "config") pod "ba6b18e7-9e41-4ee6-9978-e54e6f0ca879" (UID: "ba6b18e7-9e41-4ee6-9978-e54e6f0ca879"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.397363 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a37fbac5-8a85-4a8b-a211-59097007ce15-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a37fbac5-8a85-4a8b-a211-59097007ce15" (UID: "a37fbac5-8a85-4a8b-a211-59097007ce15"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.397378 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a37fbac5-8a85-4a8b-a211-59097007ce15-config" (OuterVolumeSpecName: "config") pod "a37fbac5-8a85-4a8b-a211-59097007ce15" (UID: "a37fbac5-8a85-4a8b-a211-59097007ce15"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.401912 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba6b18e7-9e41-4ee6-9978-e54e6f0ca879-kube-api-access-596j9" (OuterVolumeSpecName: "kube-api-access-596j9") pod "ba6b18e7-9e41-4ee6-9978-e54e6f0ca879" (UID: "ba6b18e7-9e41-4ee6-9978-e54e6f0ca879"). InnerVolumeSpecName "kube-api-access-596j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.403350 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a37fbac5-8a85-4a8b-a211-59097007ce15-kube-api-access-tmjfr" (OuterVolumeSpecName: "kube-api-access-tmjfr") pod "a37fbac5-8a85-4a8b-a211-59097007ce15" (UID: "a37fbac5-8a85-4a8b-a211-59097007ce15"). InnerVolumeSpecName "kube-api-access-tmjfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.497879 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmjfr\" (UniqueName: \"kubernetes.io/projected/a37fbac5-8a85-4a8b-a211-59097007ce15-kube-api-access-tmjfr\") on node \"crc\" DevicePath \"\"" Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.498191 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba6b18e7-9e41-4ee6-9978-e54e6f0ca879-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.498202 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a37fbac5-8a85-4a8b-a211-59097007ce15-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.498212 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-596j9\" (UniqueName: \"kubernetes.io/projected/ba6b18e7-9e41-4ee6-9978-e54e6f0ca879-kube-api-access-596j9\") on node \"crc\" DevicePath \"\"" Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.498222 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a37fbac5-8a85-4a8b-a211-59097007ce15-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.507575 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"5c8ff67a-4da2-47d4-9f73-d7842cdf2712","Type":"ContainerStarted","Data":"121fe45472985cfd33de744d07b821c6a579e9c365e050f3a1ffee8073f0e3ae"} Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.509788 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67f8579c9-2x5cl" event={"ID":"ba6b18e7-9e41-4ee6-9978-e54e6f0ca879","Type":"ContainerDied","Data":"703e9b6f4da39ebcba4686ab24f8f7a0b94192aecdae66538e7c04dbd2fff197"} Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.509812 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67f8579c9-2x5cl" Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.514788 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5468b776f7-mwk97" Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.514812 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5468b776f7-mwk97" event={"ID":"a37fbac5-8a85-4a8b-a211-59097007ce15","Type":"ContainerDied","Data":"54b7471dcca6fef1a4cedfce5a5c1ebb0d223028563add46bfdbf985efa7e948"} Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.531968 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"91c69013-9ea8-41d8-a439-c85e7ab45e06","Type":"ContainerStarted","Data":"b28a2ace741741363f54f013f560225232045208ef62ee63dec2d47cc7c7bfb2"} Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.535735 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a5b83d38-1c23-4c71-8629-1ce512ce32f3","Type":"ContainerStarted","Data":"ff0f60d0d061628a028c39ad7ddf493e6b191a9050f3b5bb9999ab53dcfd3a3b"} Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.537659 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7","Type":"ContainerStarted","Data":"1efd8e642632b4729946ba4017b509d4ae1f52ed290dd68b8d144038506f45f6"} Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.580728 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67f8579c9-2x5cl"] Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.587286 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67f8579c9-2x5cl"] Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.624843 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5468b776f7-mwk97"] Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.650145 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5468b776f7-mwk97"] Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.713264 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 08 14:39:10 crc kubenswrapper[4624]: W1008 14:39:10.738082 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85e0be3f_32a4_42c9_9fe5_f3bfca740477.slice/crio-8c34249ce9d000f6fc152fa941892079f34cafc3b0814339169fd95ccae81f32 WatchSource:0}: Error finding container 8c34249ce9d000f6fc152fa941892079f34cafc3b0814339169fd95ccae81f32: Status 404 returned error can't find the container with id 8c34249ce9d000f6fc152fa941892079f34cafc3b0814339169fd95ccae81f32 Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.769495 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c4zfm"] Oct 08 14:39:10 crc kubenswrapper[4624]: I1008 14:39:10.827009 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 14:39:11 crc kubenswrapper[4624]: I1008 14:39:11.012165 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jhpjx"] Oct 08 14:39:11 crc kubenswrapper[4624]: W1008 14:39:11.028797 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72a8ef8e_f40d_4206_af95_2f636093ed51.slice/crio-947cee84bd488d3ccb5150ebd7da3b721d6c032267b55838de1867d0d7279689 WatchSource:0}: Error finding container 947cee84bd488d3ccb5150ebd7da3b721d6c032267b55838de1867d0d7279689: Status 404 
returned error can't find the container with id 947cee84bd488d3ccb5150ebd7da3b721d6c032267b55838de1867d0d7279689 Oct 08 14:39:11 crc kubenswrapper[4624]: I1008 14:39:11.098695 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 08 14:39:11 crc kubenswrapper[4624]: W1008 14:39:11.117924 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2799503e_5a0b_4631_9946_f335b8446b53.slice/crio-269c8f28604099ace5cd0a8b5e88b62dc87acfc14c07337e4b68bb8762d93100 WatchSource:0}: Error finding container 269c8f28604099ace5cd0a8b5e88b62dc87acfc14c07337e4b68bb8762d93100: Status 404 returned error can't find the container with id 269c8f28604099ace5cd0a8b5e88b62dc87acfc14c07337e4b68bb8762d93100 Oct 08 14:39:11 crc kubenswrapper[4624]: I1008 14:39:11.481629 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a37fbac5-8a85-4a8b-a211-59097007ce15" path="/var/lib/kubelet/pods/a37fbac5-8a85-4a8b-a211-59097007ce15/volumes" Oct 08 14:39:11 crc kubenswrapper[4624]: I1008 14:39:11.482188 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba6b18e7-9e41-4ee6-9978-e54e6f0ca879" path="/var/lib/kubelet/pods/ba6b18e7-9e41-4ee6-9978-e54e6f0ca879/volumes" Oct 08 14:39:11 crc kubenswrapper[4624]: I1008 14:39:11.546065 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c4zfm" event={"ID":"c5312bac-042b-48c5-bf82-1f565e25f11e","Type":"ContainerStarted","Data":"21335686fb66e75c1a3f1b7bc52c13b836cf967ee6af5fba04012bd21cb6a876"} Oct 08 14:39:11 crc kubenswrapper[4624]: I1008 14:39:11.548082 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jhpjx" event={"ID":"72a8ef8e-f40d-4206-af95-2f636093ed51","Type":"ContainerStarted","Data":"947cee84bd488d3ccb5150ebd7da3b721d6c032267b55838de1867d0d7279689"} Oct 08 14:39:11 crc kubenswrapper[4624]: I1008 14:39:11.553709 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"85e0be3f-32a4-42c9-9fe5-f3bfca740477","Type":"ContainerStarted","Data":"8c34249ce9d000f6fc152fa941892079f34cafc3b0814339169fd95ccae81f32"} Oct 08 14:39:11 crc kubenswrapper[4624]: I1008 14:39:11.557692 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2799503e-5a0b-4631-9946-f335b8446b53","Type":"ContainerStarted","Data":"269c8f28604099ace5cd0a8b5e88b62dc87acfc14c07337e4b68bb8762d93100"} Oct 08 14:39:11 crc kubenswrapper[4624]: I1008 14:39:11.560151 4624 generic.go:334] "Generic (PLEG): container finished" podID="8ed2842b-0391-43ac-b25d-2de3a1a994a2" containerID="0cafc6988dadbb737a180954c705ebacc22308cd36235a6bc2bde73b18cda2ff" exitCode=0 Oct 08 14:39:11 crc kubenswrapper[4624]: I1008 14:39:11.560194 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b857bcbc9-2f55q" event={"ID":"8ed2842b-0391-43ac-b25d-2de3a1a994a2","Type":"ContainerDied","Data":"0cafc6988dadbb737a180954c705ebacc22308cd36235a6bc2bde73b18cda2ff"} Oct 08 14:39:11 crc kubenswrapper[4624]: I1008 14:39:11.562627 4624 generic.go:334] "Generic (PLEG): container finished" podID="48748e1e-577a-4e2d-bd61-b80181dd1dfb" containerID="9ba6e03912bf8350b5d27b3874235fe891688f2dcd981daf57d40c372fc0d0a4" exitCode=0 Oct 08 14:39:11 crc kubenswrapper[4624]: I1008 14:39:11.562696 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8644f8f897-4b6st" 
event={"ID":"48748e1e-577a-4e2d-bd61-b80181dd1dfb","Type":"ContainerDied","Data":"9ba6e03912bf8350b5d27b3874235fe891688f2dcd981daf57d40c372fc0d0a4"} Oct 08 14:39:11 crc kubenswrapper[4624]: I1008 14:39:11.564030 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"65ff7a00-cdff-4601-b7c5-31e1d271cfbb","Type":"ContainerStarted","Data":"076d4507cbb6caca93f6566f159feb56920de45d11a546f82ebc11a8e04743ce"} Oct 08 14:39:12 crc kubenswrapper[4624]: E1008 14:39:12.129758 4624 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Oct 08 14:39:12 crc kubenswrapper[4624]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/48748e1e-577a-4e2d-bd61-b80181dd1dfb/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Oct 08 14:39:12 crc kubenswrapper[4624]: > podSandboxID="3a921cfb2813b9aebf0337e5946506156b5eca3ad2a76f23e22ab85ee260ae00" Oct 08 14:39:12 crc kubenswrapper[4624]: E1008 14:39:12.130233 4624 kuberuntime_manager.go:1274] "Unhandled Error" err=< Oct 08 14:39:12 crc kubenswrapper[4624]: container &Container{Name:dnsmasq-dns,Image:38.102.83.103:5001/podified-antelope-centos9/openstack-neutron-server:b78cfc68a577b1553523c8a70a34e297,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8cn5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-8644f8f897-4b6st_openstack(48748e1e-577a-4e2d-bd61-b80181dd1dfb): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/48748e1e-577a-4e2d-bd61-b80181dd1dfb/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Oct 08 14:39:12 crc kubenswrapper[4624]: > logger="UnhandledError" Oct 08 14:39:12 crc kubenswrapper[4624]: E1008 14:39:12.131414 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/48748e1e-577a-4e2d-bd61-b80181dd1dfb/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-8644f8f897-4b6st" podUID="48748e1e-577a-4e2d-bd61-b80181dd1dfb" Oct 08 14:39:12 crc kubenswrapper[4624]: I1008 14:39:12.574793 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0a30b9e8-eac9-4cc7-9197-190a5fea5638","Type":"ContainerStarted","Data":"79675596171e4a7b726ded143df7062a2e6e50f40c24cb3569e705f14551b601"} Oct 08 14:39:12 crc kubenswrapper[4624]: I1008 14:39:12.577491 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a5b83d38-1c23-4c71-8629-1ce512ce32f3","Type":"ContainerStarted","Data":"0eb3fe154cf0d81a34eeae0dfabc8556d75b0ab1fc4c823d61450e6eb03828f1"} Oct 08 14:39:12 crc kubenswrapper[4624]: I1008 14:39:12.580945 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b857bcbc9-2f55q" event={"ID":"8ed2842b-0391-43ac-b25d-2de3a1a994a2","Type":"ContainerStarted","Data":"52f10e3fcc5b045c2c2c546cbc2e1f091a3e2a6c7ab9978f49aeb45daad867c7"} Oct 08 14:39:12 crc kubenswrapper[4624]: I1008 14:39:12.581022 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b857bcbc9-2f55q" Oct 08 14:39:12 crc kubenswrapper[4624]: I1008 14:39:12.669003 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b857bcbc9-2f55q" podStartSLOduration=4.099775668 podStartE2EDuration="23.668977508s" podCreationTimestamp="2025-10-08 14:38:49 +0000 UTC" firstStartedPulling="2025-10-08 14:38:50.807779314 +0000 UTC m=+955.958714391" lastFinishedPulling="2025-10-08 14:39:10.376981154 +0000 UTC m=+975.527916231" observedRunningTime="2025-10-08 14:39:12.656407101 +0000 UTC m=+977.807342168" watchObservedRunningTime="2025-10-08 14:39:12.668977508 +0000 UTC m=+977.819912585" Oct 08 14:39:20 crc kubenswrapper[4624]: I1008 14:39:20.189832 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-5b857bcbc9-2f55q" Oct 08 14:39:20 crc kubenswrapper[4624]: I1008 14:39:20.281052 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8644f8f897-4b6st"] Oct 08 14:39:30 crc kubenswrapper[4624]: E1008 14:39:30.018335 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-ovn-base:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:39:30 crc kubenswrapper[4624]: E1008 14:39:30.018915 4624 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-ovn-base:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:39:30 crc kubenswrapper[4624]: E1008 14:39:30.019027 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:38.102.83.103:5001/podified-antelope-centos9/openstack-ovn-base:b78cfc68a577b1553523c8a70a34e297,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8h674h647hdch69h597hdh659h597hc8h564hc7h68dh577hddhcbh64ch544h77h575h5cdh5c9h694h64bh5c9h9fh64h564h4h679h58bh8dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5hn9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-jhpjx_openstack(72a8ef8e-f40d-4206-af95-2f636093ed51): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 14:39:30 crc kubenswrapper[4624]: E1008 14:39:30.020195 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-jhpjx" 
podUID="72a8ef8e-f40d-4206-af95-2f636093ed51" Oct 08 14:39:30 crc kubenswrapper[4624]: I1008 14:39:30.075911 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:39:30 crc kubenswrapper[4624]: I1008 14:39:30.076184 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:39:30 crc kubenswrapper[4624]: I1008 14:39:30.076283 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:39:30 crc kubenswrapper[4624]: I1008 14:39:30.076971 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f4f136ef4518c5e931bf27ca2c72cf8717267538a0000a0bb2738bfc1c0e8cda"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 14:39:30 crc kubenswrapper[4624]: I1008 14:39:30.077099 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://f4f136ef4518c5e931bf27ca2c72cf8717267538a0000a0bb2738bfc1c0e8cda" gracePeriod=600 Oct 08 14:39:30 crc kubenswrapper[4624]: E1008 14:39:30.200060 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-mariadb:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:39:30 crc kubenswrapper[4624]: E1008 14:39:30.200124 4624 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-mariadb:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:39:30 crc kubenswrapper[4624]: E1008 14:39:30.200332 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.102.83.103:5001/podified-antelope-centos9/openstack-mariadb:b78cfc68a577b1553523c8a70a34e297,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:DB_ROOT_PASSWORD,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:DbRootPassword,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secrets,ReadOnly:true,MountPath:/var/lib/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f5wks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(aeedef5f-f3c5-41a3-9a36-bc3830eb12c7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 14:39:30 crc kubenswrapper[4624]: E1008 14:39:30.201538 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="aeedef5f-f3c5-41a3-9a36-bc3830eb12c7" Oct 08 14:39:30 crc kubenswrapper[4624]: I1008 14:39:30.714020 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="f4f136ef4518c5e931bf27ca2c72cf8717267538a0000a0bb2738bfc1c0e8cda" exitCode=0 Oct 08 14:39:30 crc kubenswrapper[4624]: I1008 14:39:30.714474 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"f4f136ef4518c5e931bf27ca2c72cf8717267538a0000a0bb2738bfc1c0e8cda"} Oct 08 14:39:30 crc kubenswrapper[4624]: I1008 14:39:30.714528 4624 scope.go:117] "RemoveContainer" 
containerID="690587e93a403cfefc9b225df6e738e9c67a31458ed49cde23474722023af49a" Oct 08 14:39:30 crc kubenswrapper[4624]: E1008 14:39:30.718257 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.103:5001/podified-antelope-centos9/openstack-mariadb:b78cfc68a577b1553523c8a70a34e297\\\"\"" pod="openstack/openstack-galera-0" podUID="aeedef5f-f3c5-41a3-9a36-bc3830eb12c7" Oct 08 14:39:30 crc kubenswrapper[4624]: E1008 14:39:30.744044 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.103:5001/podified-antelope-centos9/openstack-ovn-base:b78cfc68a577b1553523c8a70a34e297\\\"\"" pod="openstack/ovn-controller-ovs-jhpjx" podUID="72a8ef8e-f40d-4206-af95-2f636093ed51" Oct 08 14:39:30 crc kubenswrapper[4624]: E1008 14:39:30.764970 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-ovn-nb-db-server:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:39:30 crc kubenswrapper[4624]: E1008 14:39:30.765019 4624 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-ovn-nb-db-server:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:39:30 crc kubenswrapper[4624]: E1008 14:39:30.765165 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:38.102.83.103:5001/podified-antelope-centos9/openstack-ovn-nb-db-server:b78cfc68a577b1553523c8a70a34e297,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5fch7bh5fhf7h649h5b8h55fhb5hbbhbdh5dbh696h695h67bh5d4hddhd8hdch5f5h87h5d7h96h698h558h575h68dh699h586hcch688h665h5bdq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4ss6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,
MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(85e0be3f-32a4-42c9-9fe5-f3bfca740477): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 14:39:31 crc kubenswrapper[4624]: E1008 14:39:31.030290 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-ovn-sb-db-server:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:39:31 crc kubenswrapper[4624]: E1008 14:39:31.031304 4624 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-ovn-sb-db-server:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:39:31 crc kubenswrapper[4624]: E1008 14:39:31.031466 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-sb,Image:38.102.83.103:5001/podified-antelope-centos9/openstack-ovn-sb-db-server:b78cfc68a577b1553523c8a70a34e297,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd5h6h54h79h668h5dbh75h7fh546h74hbfh5cbhbh666h54fh7fh658h59ch5d9h648h68bhbfh5c5h699h76h679h5d4h54bhcdh559h599h54fq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwzvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovsdbserver-sb-0_openstack(2799503e-5a0b-4631-9946-f335b8446b53): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 14:39:31 crc kubenswrapper[4624]: E1008 14:39:31.250519 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-ovn-controller:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:39:31 crc kubenswrapper[4624]: E1008 14:39:31.250606 4624 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-ovn-controller:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:39:31 crc kubenswrapper[4624]: E1008 14:39:31.250845 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:38.102.83.103:5001/podified-antelope-centos9/openstack-ovn-controller:b78cfc68a577b1553523c8a70a34e297,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8h674h647hdch69h597hdh659h597hc8h564hc7h68dh577hddhcbh64ch544h77h575h5cdh5c9h694h64bh5c9h9fh64h564h4h679h58bh8dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgcc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds
:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-c4zfm_openstack(c5312bac-042b-48c5-bf82-1f565e25f11e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 14:39:31 crc kubenswrapper[4624]: E1008 14:39:31.252182 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-c4zfm" podUID="c5312bac-042b-48c5-bf82-1f565e25f11e" Oct 08 14:39:31 crc kubenswrapper[4624]: E1008 14:39:31.264340 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-mariadb:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:39:31 crc kubenswrapper[4624]: E1008 14:39:31.264421 4624 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-mariadb:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:39:31 crc kubenswrapper[4624]: E1008 14:39:31.264643 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.102.83.103:5001/podified-antelope-centos9/openstack-mariadb:b78cfc68a577b1553523c8a70a34e297,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:DB_ROOT_PASSWORD,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:DbRootPassword,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secrets,ReadOnly:true,MountPath:/var/lib/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6tcfn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(91c69013-9ea8-41d8-a439-c85e7ab45e06): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 14:39:31 crc kubenswrapper[4624]: E1008 14:39:31.266770 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="91c69013-9ea8-41d8-a439-c85e7ab45e06" Oct 08 14:39:31 crc kubenswrapper[4624]: E1008 14:39:31.729470 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.103:5001/podified-antelope-centos9/openstack-mariadb:b78cfc68a577b1553523c8a70a34e297\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="91c69013-9ea8-41d8-a439-c85e7ab45e06" Oct 08 14:39:31 crc kubenswrapper[4624]: E1008 14:39:31.729761 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image 
\\\"38.102.83.103:5001/podified-antelope-centos9/openstack-ovn-controller:b78cfc68a577b1553523c8a70a34e297\\\"\"" pod="openstack/ovn-controller-c4zfm" podUID="c5312bac-042b-48c5-bf82-1f565e25f11e" Oct 08 14:39:32 crc kubenswrapper[4624]: E1008 14:39:32.240771 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:9aee425378d2c16cd44177dc54a274b312897f5860a8e78fdfda555a0d79dd71: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Oct 08 14:39:32 crc kubenswrapper[4624]: E1008 14:39:32.241059 4624 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:9aee425378d2c16cd44177dc54a274b312897f5860a8e78fdfda555a0d79dd71: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Oct 08 14:39:32 crc kubenswrapper[4624]: E1008 14:39:32.241246 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t2lv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(65ff7a00-cdff-4601-b7c5-31e1d271cfbb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:9aee425378d2c16cd44177dc54a274b312897f5860a8e78fdfda555a0d79dd71: context canceled" logger="UnhandledError" Oct 08 14:39:32 crc kubenswrapper[4624]: E1008 14:39:32.242463 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled 
desc = copying system image from manifest list: reading blob sha256:9aee425378d2c16cd44177dc54a274b312897f5860a8e78fdfda555a0d79dd71: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="65ff7a00-cdff-4601-b7c5-31e1d271cfbb" Oct 08 14:39:32 crc kubenswrapper[4624]: I1008 14:39:32.735080 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8644f8f897-4b6st" event={"ID":"48748e1e-577a-4e2d-bd61-b80181dd1dfb","Type":"ContainerStarted","Data":"72cc6de08686bae7d8a11b76d32de1881fd66da709b815368e8b4abd4841e5f9"} Oct 08 14:39:32 crc kubenswrapper[4624]: I1008 14:39:32.737084 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8644f8f897-4b6st" Oct 08 14:39:32 crc kubenswrapper[4624]: I1008 14:39:32.735220 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8644f8f897-4b6st" podUID="48748e1e-577a-4e2d-bd61-b80181dd1dfb" containerName="dnsmasq-dns" containerID="cri-o://72cc6de08686bae7d8a11b76d32de1881fd66da709b815368e8b4abd4841e5f9" gracePeriod=10 Oct 08 14:39:32 crc kubenswrapper[4624]: I1008 14:39:32.741423 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"5c8ff67a-4da2-47d4-9f73-d7842cdf2712","Type":"ContainerStarted","Data":"6b6fd388e14a6ba9ccad442834fb9bbef4a15fab7bc9d93231791e420dd71bdb"} Oct 08 14:39:32 crc kubenswrapper[4624]: I1008 14:39:32.741615 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Oct 08 14:39:32 crc kubenswrapper[4624]: I1008 14:39:32.744037 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"480254b7c9a360984d529299bb00cc5c3bed986a85e8add5205713881a71a8d6"} Oct 08 14:39:32 crc kubenswrapper[4624]: E1008 14:39:32.745562 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="65ff7a00-cdff-4601-b7c5-31e1d271cfbb" Oct 08 14:39:32 crc kubenswrapper[4624]: I1008 14:39:32.762410 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8644f8f897-4b6st" podStartSLOduration=24.129812975 podStartE2EDuration="43.762352222s" podCreationTimestamp="2025-10-08 14:38:49 +0000 UTC" firstStartedPulling="2025-10-08 14:38:50.702967851 +0000 UTC m=+955.853902928" lastFinishedPulling="2025-10-08 14:39:10.335507098 +0000 UTC m=+975.486442175" observedRunningTime="2025-10-08 14:39:32.756064943 +0000 UTC m=+997.907000020" watchObservedRunningTime="2025-10-08 14:39:32.762352222 +0000 UTC m=+997.913287299" Oct 08 14:39:32 crc kubenswrapper[4624]: I1008 14:39:32.823297 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=17.510783922 podStartE2EDuration="38.823279108s" podCreationTimestamp="2025-10-08 14:38:54 +0000 UTC" firstStartedPulling="2025-10-08 14:39:10.190983564 +0000 UTC m=+975.341918641" lastFinishedPulling="2025-10-08 14:39:31.50347875 +0000 UTC m=+996.654413827" observedRunningTime="2025-10-08 14:39:32.817870112 +0000 UTC m=+997.968805189" watchObservedRunningTime="2025-10-08 14:39:32.823279108 +0000 UTC m=+997.974214185" Oct 08 14:39:33 crc kubenswrapper[4624]: I1008 
14:39:33.754096 4624 generic.go:334] "Generic (PLEG): container finished" podID="48748e1e-577a-4e2d-bd61-b80181dd1dfb" containerID="72cc6de08686bae7d8a11b76d32de1881fd66da709b815368e8b4abd4841e5f9" exitCode=0 Oct 08 14:39:33 crc kubenswrapper[4624]: I1008 14:39:33.754166 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8644f8f897-4b6st" event={"ID":"48748e1e-577a-4e2d-bd61-b80181dd1dfb","Type":"ContainerDied","Data":"72cc6de08686bae7d8a11b76d32de1881fd66da709b815368e8b4abd4841e5f9"} Oct 08 14:39:34 crc kubenswrapper[4624]: I1008 14:39:34.638113 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8644f8f897-4b6st" Oct 08 14:39:34 crc kubenswrapper[4624]: I1008 14:39:34.763566 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8644f8f897-4b6st" event={"ID":"48748e1e-577a-4e2d-bd61-b80181dd1dfb","Type":"ContainerDied","Data":"3a921cfb2813b9aebf0337e5946506156b5eca3ad2a76f23e22ab85ee260ae00"} Oct 08 14:39:34 crc kubenswrapper[4624]: I1008 14:39:34.763880 4624 scope.go:117] "RemoveContainer" containerID="72cc6de08686bae7d8a11b76d32de1881fd66da709b815368e8b4abd4841e5f9" Oct 08 14:39:34 crc kubenswrapper[4624]: I1008 14:39:34.763994 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8644f8f897-4b6st" Oct 08 14:39:34 crc kubenswrapper[4624]: I1008 14:39:34.764703 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8cn5\" (UniqueName: \"kubernetes.io/projected/48748e1e-577a-4e2d-bd61-b80181dd1dfb-kube-api-access-c8cn5\") pod \"48748e1e-577a-4e2d-bd61-b80181dd1dfb\" (UID: \"48748e1e-577a-4e2d-bd61-b80181dd1dfb\") " Oct 08 14:39:34 crc kubenswrapper[4624]: I1008 14:39:34.764861 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48748e1e-577a-4e2d-bd61-b80181dd1dfb-config\") pod \"48748e1e-577a-4e2d-bd61-b80181dd1dfb\" (UID: \"48748e1e-577a-4e2d-bd61-b80181dd1dfb\") " Oct 08 14:39:34 crc kubenswrapper[4624]: I1008 14:39:34.764920 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48748e1e-577a-4e2d-bd61-b80181dd1dfb-dns-svc\") pod \"48748e1e-577a-4e2d-bd61-b80181dd1dfb\" (UID: \"48748e1e-577a-4e2d-bd61-b80181dd1dfb\") " Oct 08 14:39:34 crc kubenswrapper[4624]: I1008 14:39:34.774766 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48748e1e-577a-4e2d-bd61-b80181dd1dfb-kube-api-access-c8cn5" (OuterVolumeSpecName: "kube-api-access-c8cn5") pod "48748e1e-577a-4e2d-bd61-b80181dd1dfb" (UID: "48748e1e-577a-4e2d-bd61-b80181dd1dfb"). InnerVolumeSpecName "kube-api-access-c8cn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:39:34 crc kubenswrapper[4624]: I1008 14:39:34.792179 4624 scope.go:117] "RemoveContainer" containerID="9ba6e03912bf8350b5d27b3874235fe891688f2dcd981daf57d40c372fc0d0a4" Oct 08 14:39:34 crc kubenswrapper[4624]: I1008 14:39:34.811636 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48748e1e-577a-4e2d-bd61-b80181dd1dfb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "48748e1e-577a-4e2d-bd61-b80181dd1dfb" (UID: "48748e1e-577a-4e2d-bd61-b80181dd1dfb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:39:34 crc kubenswrapper[4624]: I1008 14:39:34.819484 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48748e1e-577a-4e2d-bd61-b80181dd1dfb-config" (OuterVolumeSpecName: "config") pod "48748e1e-577a-4e2d-bd61-b80181dd1dfb" (UID: "48748e1e-577a-4e2d-bd61-b80181dd1dfb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:39:34 crc kubenswrapper[4624]: E1008 14:39:34.853313 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="2799503e-5a0b-4631-9946-f335b8446b53" Oct 08 14:39:34 crc kubenswrapper[4624]: E1008 14:39:34.861382 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="85e0be3f-32a4-42c9-9fe5-f3bfca740477" Oct 08 14:39:34 crc kubenswrapper[4624]: I1008 14:39:34.866601 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48748e1e-577a-4e2d-bd61-b80181dd1dfb-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:39:34 crc kubenswrapper[4624]: I1008 14:39:34.866639 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48748e1e-577a-4e2d-bd61-b80181dd1dfb-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 14:39:34 crc kubenswrapper[4624]: I1008 14:39:34.866653 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8cn5\" (UniqueName: \"kubernetes.io/projected/48748e1e-577a-4e2d-bd61-b80181dd1dfb-kube-api-access-c8cn5\") on node \"crc\" DevicePath \"\"" Oct 08 14:39:35 crc kubenswrapper[4624]: I1008 14:39:35.105859 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8644f8f897-4b6st"] Oct 08 14:39:35 crc kubenswrapper[4624]: I1008 14:39:35.111938 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8644f8f897-4b6st"] Oct 08 14:39:35 crc kubenswrapper[4624]: I1008 14:39:35.477543 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48748e1e-577a-4e2d-bd61-b80181dd1dfb" path="/var/lib/kubelet/pods/48748e1e-577a-4e2d-bd61-b80181dd1dfb/volumes" Oct 08 14:39:35 crc kubenswrapper[4624]: I1008 14:39:35.777552 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"85e0be3f-32a4-42c9-9fe5-f3bfca740477","Type":"ContainerStarted","Data":"89fc838e5c990f137498910f913a7becb30d6f13e57975901ffc82d47ea1ec88"} Oct 08 14:39:35 crc kubenswrapper[4624]: E1008 14:39:35.781567 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.103:5001/podified-antelope-centos9/openstack-ovn-nb-db-server:b78cfc68a577b1553523c8a70a34e297\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="85e0be3f-32a4-42c9-9fe5-f3bfca740477" Oct 08 14:39:35 crc kubenswrapper[4624]: I1008 14:39:35.782484 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"2799503e-5a0b-4631-9946-f335b8446b53","Type":"ContainerStarted","Data":"86bf64ef46649600ed2a254e10d1bc6fd92b13c4b63b51c9c824c2a1e4436d2b"} Oct 08 14:39:35 crc kubenswrapper[4624]: E1008 14:39:35.786845 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.103:5001/podified-antelope-centos9/openstack-ovn-sb-db-server:b78cfc68a577b1553523c8a70a34e297\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="2799503e-5a0b-4631-9946-f335b8446b53" Oct 08 14:39:36 crc kubenswrapper[4624]: E1008 14:39:36.791600 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.103:5001/podified-antelope-centos9/openstack-ovn-sb-db-server:b78cfc68a577b1553523c8a70a34e297\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="2799503e-5a0b-4631-9946-f335b8446b53" Oct 08 14:39:36 crc kubenswrapper[4624]: E1008 14:39:36.791766 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.103:5001/podified-antelope-centos9/openstack-ovn-nb-db-server:b78cfc68a577b1553523c8a70a34e297\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="85e0be3f-32a4-42c9-9fe5-f3bfca740477" Oct 08 14:39:39 crc kubenswrapper[4624]: I1008 14:39:39.537940 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Oct 08 14:39:43 crc kubenswrapper[4624]: I1008 14:39:43.842218 4624 generic.go:334] "Generic (PLEG): container finished" podID="0a30b9e8-eac9-4cc7-9197-190a5fea5638" containerID="79675596171e4a7b726ded143df7062a2e6e50f40c24cb3569e705f14551b601" exitCode=0 Oct 08 14:39:43 crc kubenswrapper[4624]: I1008 14:39:43.842340 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0a30b9e8-eac9-4cc7-9197-190a5fea5638","Type":"ContainerDied","Data":"79675596171e4a7b726ded143df7062a2e6e50f40c24cb3569e705f14551b601"} Oct 08 14:39:43 crc kubenswrapper[4624]: I1008 14:39:43.848998 4624 generic.go:334] "Generic (PLEG): container finished" podID="a5b83d38-1c23-4c71-8629-1ce512ce32f3" containerID="0eb3fe154cf0d81a34eeae0dfabc8556d75b0ab1fc4c823d61450e6eb03828f1" exitCode=0 Oct 08 14:39:43 crc kubenswrapper[4624]: I1008 14:39:43.849048 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a5b83d38-1c23-4c71-8629-1ce512ce32f3","Type":"ContainerDied","Data":"0eb3fe154cf0d81a34eeae0dfabc8556d75b0ab1fc4c823d61450e6eb03828f1"} Oct 08 14:39:44 crc kubenswrapper[4624]: I1008 14:39:44.865570 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7","Type":"ContainerStarted","Data":"b5c676493960ebde3986c12a90d1a8b669c5df9ebeafdbfd0a6f3d13725173d9"} Oct 08 14:39:44 crc kubenswrapper[4624]: I1008 14:39:44.870678 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0a30b9e8-eac9-4cc7-9197-190a5fea5638","Type":"ContainerStarted","Data":"60bf635cbd5fe7b60863dd55b8e2af9566a17fb9ecb6f5709c68f522870b3e6c"} Oct 08 14:39:44 crc kubenswrapper[4624]: I1008 14:39:44.871399 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 08 14:39:44 crc kubenswrapper[4624]: I1008 14:39:44.880007 4624 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"91c69013-9ea8-41d8-a439-c85e7ab45e06","Type":"ContainerStarted","Data":"baecfff2316ec57d4e3c275cfaaa7acd962d24019d9c34f9c71cfbebcacc6a83"} Oct 08 14:39:44 crc kubenswrapper[4624]: I1008 14:39:44.884941 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a5b83d38-1c23-4c71-8629-1ce512ce32f3","Type":"ContainerStarted","Data":"9cb2f1e16867aebef2367bd37fdc9ad8faada672dc1208e483441dc47f6a72fc"} Oct 08 14:39:44 crc kubenswrapper[4624]: I1008 14:39:44.885555 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:39:44 crc kubenswrapper[4624]: I1008 14:39:44.928374 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=40.427538603 podStartE2EDuration="55.928354244s" podCreationTimestamp="2025-10-08 14:38:49 +0000 UTC" firstStartedPulling="2025-10-08 14:38:54.92004874 +0000 UTC m=+960.070983807" lastFinishedPulling="2025-10-08 14:39:10.420864361 +0000 UTC m=+975.571799448" observedRunningTime="2025-10-08 14:39:44.916909116 +0000 UTC m=+1010.067844193" watchObservedRunningTime="2025-10-08 14:39:44.928354244 +0000 UTC m=+1010.079289311" Oct 08 14:39:44 crc kubenswrapper[4624]: I1008 14:39:44.946819 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=54.464481988 podStartE2EDuration="54.94680046s" podCreationTimestamp="2025-10-08 14:38:50 +0000 UTC" firstStartedPulling="2025-10-08 14:39:10.191109957 +0000 UTC m=+975.342045034" lastFinishedPulling="2025-10-08 14:39:10.673428429 +0000 UTC m=+975.824363506" observedRunningTime="2025-10-08 14:39:44.941342122 +0000 UTC m=+1010.092277199" watchObservedRunningTime="2025-10-08 14:39:44.94680046 +0000 UTC m=+1010.097735537" Oct 08 14:39:45 crc kubenswrapper[4624]: I1008 14:39:45.893944 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c4zfm" event={"ID":"c5312bac-042b-48c5-bf82-1f565e25f11e","Type":"ContainerStarted","Data":"2653a9c4baa871ac7f8eeeda89c11894c24cc4698c5fc32a2134bf39b44a896d"} Oct 08 14:39:45 crc kubenswrapper[4624]: I1008 14:39:45.894388 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-c4zfm" Oct 08 14:39:45 crc kubenswrapper[4624]: I1008 14:39:45.896860 4624 generic.go:334] "Generic (PLEG): container finished" podID="72a8ef8e-f40d-4206-af95-2f636093ed51" containerID="049ed757435a9550189f472b86698078677015f7f1f7359d73b39949ed3baaf8" exitCode=0 Oct 08 14:39:45 crc kubenswrapper[4624]: I1008 14:39:45.897183 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jhpjx" event={"ID":"72a8ef8e-f40d-4206-af95-2f636093ed51","Type":"ContainerDied","Data":"049ed757435a9550189f472b86698078677015f7f1f7359d73b39949ed3baaf8"} Oct 08 14:39:45 crc kubenswrapper[4624]: I1008 14:39:45.930276 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-c4zfm" podStartSLOduration=14.115581943 podStartE2EDuration="47.930255188s" podCreationTimestamp="2025-10-08 14:38:58 +0000 UTC" firstStartedPulling="2025-10-08 14:39:10.844229435 +0000 UTC m=+975.995164512" lastFinishedPulling="2025-10-08 14:39:44.65890268 +0000 UTC m=+1009.809837757" observedRunningTime="2025-10-08 14:39:45.91683659 +0000 UTC m=+1011.067771677" watchObservedRunningTime="2025-10-08 
14:39:45.930255188 +0000 UTC m=+1011.081190265" Oct 08 14:39:46 crc kubenswrapper[4624]: I1008 14:39:46.906139 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"65ff7a00-cdff-4601-b7c5-31e1d271cfbb","Type":"ContainerStarted","Data":"1b1a577f37721e32b723cf4bf7f834ac15a4fee7bd9748caff127e4bf3c219d6"} Oct 08 14:39:46 crc kubenswrapper[4624]: I1008 14:39:46.906758 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 08 14:39:46 crc kubenswrapper[4624]: I1008 14:39:46.909497 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jhpjx" event={"ID":"72a8ef8e-f40d-4206-af95-2f636093ed51","Type":"ContainerStarted","Data":"62aef8ed50c0b25d4c646b92a64c1e041c55727917c24f2d6e0ecb667ba72607"} Oct 08 14:39:46 crc kubenswrapper[4624]: I1008 14:39:46.909529 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jhpjx" event={"ID":"72a8ef8e-f40d-4206-af95-2f636093ed51","Type":"ContainerStarted","Data":"348d02ff003172482097a076243110b1952562a0b9c7894561081f77bb57528d"} Oct 08 14:39:46 crc kubenswrapper[4624]: I1008 14:39:46.909846 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:39:46 crc kubenswrapper[4624]: I1008 14:39:46.909869 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:39:46 crc kubenswrapper[4624]: I1008 14:39:46.929883 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=15.574627981999999 podStartE2EDuration="50.929866463s" podCreationTimestamp="2025-10-08 14:38:56 +0000 UTC" firstStartedPulling="2025-10-08 14:39:10.877062183 +0000 UTC m=+976.027997260" lastFinishedPulling="2025-10-08 14:39:46.232300664 +0000 UTC m=+1011.383235741" observedRunningTime="2025-10-08 14:39:46.926018916 +0000 UTC m=+1012.076953993" watchObservedRunningTime="2025-10-08 14:39:46.929866463 +0000 UTC m=+1012.080801540" Oct 08 14:39:46 crc kubenswrapper[4624]: I1008 14:39:46.976598 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-jhpjx" podStartSLOduration=15.349322322 podStartE2EDuration="48.976582291s" podCreationTimestamp="2025-10-08 14:38:58 +0000 UTC" firstStartedPulling="2025-10-08 14:39:11.032687927 +0000 UTC m=+976.183623004" lastFinishedPulling="2025-10-08 14:39:44.659947896 +0000 UTC m=+1009.810882973" observedRunningTime="2025-10-08 14:39:46.975521254 +0000 UTC m=+1012.126456331" watchObservedRunningTime="2025-10-08 14:39:46.976582291 +0000 UTC m=+1012.127517368" Oct 08 14:39:47 crc kubenswrapper[4624]: I1008 14:39:47.185128 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58fb6fdf7-djbqf"] Oct 08 14:39:47 crc kubenswrapper[4624]: E1008 14:39:47.185761 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48748e1e-577a-4e2d-bd61-b80181dd1dfb" containerName="init" Oct 08 14:39:47 crc kubenswrapper[4624]: I1008 14:39:47.185774 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="48748e1e-577a-4e2d-bd61-b80181dd1dfb" containerName="init" Oct 08 14:39:47 crc kubenswrapper[4624]: E1008 14:39:47.185796 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48748e1e-577a-4e2d-bd61-b80181dd1dfb" containerName="dnsmasq-dns" Oct 08 14:39:47 crc kubenswrapper[4624]: I1008 14:39:47.185804 4624 
state_mem.go:107] "Deleted CPUSet assignment" podUID="48748e1e-577a-4e2d-bd61-b80181dd1dfb" containerName="dnsmasq-dns" Oct 08 14:39:47 crc kubenswrapper[4624]: I1008 14:39:47.185950 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="48748e1e-577a-4e2d-bd61-b80181dd1dfb" containerName="dnsmasq-dns" Oct 08 14:39:47 crc kubenswrapper[4624]: I1008 14:39:47.186816 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58fb6fdf7-djbqf" Oct 08 14:39:47 crc kubenswrapper[4624]: I1008 14:39:47.208745 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58fb6fdf7-djbqf"] Oct 08 14:39:47 crc kubenswrapper[4624]: I1008 14:39:47.257938 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f8f2b5a-58c2-40be-9b42-6a474e13955a-dns-svc\") pod \"dnsmasq-dns-58fb6fdf7-djbqf\" (UID: \"3f8f2b5a-58c2-40be-9b42-6a474e13955a\") " pod="openstack/dnsmasq-dns-58fb6fdf7-djbqf" Oct 08 14:39:47 crc kubenswrapper[4624]: I1008 14:39:47.258020 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f8f2b5a-58c2-40be-9b42-6a474e13955a-config\") pod \"dnsmasq-dns-58fb6fdf7-djbqf\" (UID: \"3f8f2b5a-58c2-40be-9b42-6a474e13955a\") " pod="openstack/dnsmasq-dns-58fb6fdf7-djbqf" Oct 08 14:39:47 crc kubenswrapper[4624]: I1008 14:39:47.258116 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr7j2\" (UniqueName: \"kubernetes.io/projected/3f8f2b5a-58c2-40be-9b42-6a474e13955a-kube-api-access-xr7j2\") pod \"dnsmasq-dns-58fb6fdf7-djbqf\" (UID: \"3f8f2b5a-58c2-40be-9b42-6a474e13955a\") " pod="openstack/dnsmasq-dns-58fb6fdf7-djbqf" Oct 08 14:39:47 crc kubenswrapper[4624]: I1008 14:39:47.359331 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr7j2\" (UniqueName: \"kubernetes.io/projected/3f8f2b5a-58c2-40be-9b42-6a474e13955a-kube-api-access-xr7j2\") pod \"dnsmasq-dns-58fb6fdf7-djbqf\" (UID: \"3f8f2b5a-58c2-40be-9b42-6a474e13955a\") " pod="openstack/dnsmasq-dns-58fb6fdf7-djbqf" Oct 08 14:39:47 crc kubenswrapper[4624]: I1008 14:39:47.359444 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f8f2b5a-58c2-40be-9b42-6a474e13955a-dns-svc\") pod \"dnsmasq-dns-58fb6fdf7-djbqf\" (UID: \"3f8f2b5a-58c2-40be-9b42-6a474e13955a\") " pod="openstack/dnsmasq-dns-58fb6fdf7-djbqf" Oct 08 14:39:47 crc kubenswrapper[4624]: I1008 14:39:47.359475 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f8f2b5a-58c2-40be-9b42-6a474e13955a-config\") pod \"dnsmasq-dns-58fb6fdf7-djbqf\" (UID: \"3f8f2b5a-58c2-40be-9b42-6a474e13955a\") " pod="openstack/dnsmasq-dns-58fb6fdf7-djbqf" Oct 08 14:39:47 crc kubenswrapper[4624]: I1008 14:39:47.360329 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f8f2b5a-58c2-40be-9b42-6a474e13955a-config\") pod \"dnsmasq-dns-58fb6fdf7-djbqf\" (UID: \"3f8f2b5a-58c2-40be-9b42-6a474e13955a\") " pod="openstack/dnsmasq-dns-58fb6fdf7-djbqf" Oct 08 14:39:47 crc kubenswrapper[4624]: I1008 14:39:47.360827 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/3f8f2b5a-58c2-40be-9b42-6a474e13955a-dns-svc\") pod \"dnsmasq-dns-58fb6fdf7-djbqf\" (UID: \"3f8f2b5a-58c2-40be-9b42-6a474e13955a\") " pod="openstack/dnsmasq-dns-58fb6fdf7-djbqf" Oct 08 14:39:47 crc kubenswrapper[4624]: I1008 14:39:47.385436 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr7j2\" (UniqueName: \"kubernetes.io/projected/3f8f2b5a-58c2-40be-9b42-6a474e13955a-kube-api-access-xr7j2\") pod \"dnsmasq-dns-58fb6fdf7-djbqf\" (UID: \"3f8f2b5a-58c2-40be-9b42-6a474e13955a\") " pod="openstack/dnsmasq-dns-58fb6fdf7-djbqf" Oct 08 14:39:47 crc kubenswrapper[4624]: I1008 14:39:47.502825 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58fb6fdf7-djbqf" Oct 08 14:39:47 crc kubenswrapper[4624]: I1008 14:39:47.830942 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58fb6fdf7-djbqf"] Oct 08 14:39:47 crc kubenswrapper[4624]: W1008 14:39:47.845275 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f8f2b5a_58c2_40be_9b42_6a474e13955a.slice/crio-acd8ea0af80da75221e0aaec8433e9fe3a6a6d29ec2970e4d3b63013416a83ec WatchSource:0}: Error finding container acd8ea0af80da75221e0aaec8433e9fe3a6a6d29ec2970e4d3b63013416a83ec: Status 404 returned error can't find the container with id acd8ea0af80da75221e0aaec8433e9fe3a6a6d29ec2970e4d3b63013416a83ec Oct 08 14:39:47 crc kubenswrapper[4624]: I1008 14:39:47.938512 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58fb6fdf7-djbqf" event={"ID":"3f8f2b5a-58c2-40be-9b42-6a474e13955a","Type":"ContainerStarted","Data":"acd8ea0af80da75221e0aaec8433e9fe3a6a6d29ec2970e4d3b63013416a83ec"} Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.371505 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.377417 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.382682 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.382715 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.383742 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.384093 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-sntdm" Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.421581 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.477970 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a6d0a5f4-de63-4141-addf-72f5d787cb24-cache\") pod \"swift-storage-0\" (UID: \"a6d0a5f4-de63-4141-addf-72f5d787cb24\") " pod="openstack/swift-storage-0" Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.478032 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thk5t\" (UniqueName: \"kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-kube-api-access-thk5t\") pod \"swift-storage-0\" (UID: \"a6d0a5f4-de63-4141-addf-72f5d787cb24\") " pod="openstack/swift-storage-0" Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.478057 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a6d0a5f4-de63-4141-addf-72f5d787cb24-lock\") pod \"swift-storage-0\" (UID: \"a6d0a5f4-de63-4141-addf-72f5d787cb24\") " pod="openstack/swift-storage-0" Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.478079 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"a6d0a5f4-de63-4141-addf-72f5d787cb24\") " pod="openstack/swift-storage-0" Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.478110 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-etc-swift\") pod \"swift-storage-0\" (UID: \"a6d0a5f4-de63-4141-addf-72f5d787cb24\") " pod="openstack/swift-storage-0" Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.580113 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a6d0a5f4-de63-4141-addf-72f5d787cb24-cache\") pod \"swift-storage-0\" (UID: \"a6d0a5f4-de63-4141-addf-72f5d787cb24\") " pod="openstack/swift-storage-0" Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.580174 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thk5t\" (UniqueName: \"kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-kube-api-access-thk5t\") pod \"swift-storage-0\" (UID: \"a6d0a5f4-de63-4141-addf-72f5d787cb24\") " pod="openstack/swift-storage-0" Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.580197 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a6d0a5f4-de63-4141-addf-72f5d787cb24-lock\") pod \"swift-storage-0\" (UID: \"a6d0a5f4-de63-4141-addf-72f5d787cb24\") " pod="openstack/swift-storage-0" Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.580220 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"a6d0a5f4-de63-4141-addf-72f5d787cb24\") " pod="openstack/swift-storage-0" Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.580252 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-etc-swift\") pod \"swift-storage-0\" (UID: \"a6d0a5f4-de63-4141-addf-72f5d787cb24\") " pod="openstack/swift-storage-0" Oct 08 14:39:48 crc kubenswrapper[4624]: E1008 14:39:48.580420 4624 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 08 14:39:48 crc kubenswrapper[4624]: E1008 14:39:48.580442 4624 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 08 14:39:48 crc kubenswrapper[4624]: E1008 14:39:48.580497 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-etc-swift podName:a6d0a5f4-de63-4141-addf-72f5d787cb24 nodeName:}" failed. No retries permitted until 2025-10-08 14:39:49.080474454 +0000 UTC m=+1014.231409531 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-etc-swift") pod "swift-storage-0" (UID: "a6d0a5f4-de63-4141-addf-72f5d787cb24") : configmap "swift-ring-files" not found Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.580606 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"a6d0a5f4-de63-4141-addf-72f5d787cb24\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0" Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.580887 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a6d0a5f4-de63-4141-addf-72f5d787cb24-cache\") pod \"swift-storage-0\" (UID: \"a6d0a5f4-de63-4141-addf-72f5d787cb24\") " pod="openstack/swift-storage-0" Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.580889 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a6d0a5f4-de63-4141-addf-72f5d787cb24-lock\") pod \"swift-storage-0\" (UID: \"a6d0a5f4-de63-4141-addf-72f5d787cb24\") " pod="openstack/swift-storage-0" Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.604615 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thk5t\" (UniqueName: \"kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-kube-api-access-thk5t\") pod \"swift-storage-0\" (UID: \"a6d0a5f4-de63-4141-addf-72f5d787cb24\") " pod="openstack/swift-storage-0" Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.605458 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"a6d0a5f4-de63-4141-addf-72f5d787cb24\") " pod="openstack/swift-storage-0" Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.946772 4624 generic.go:334] "Generic (PLEG): container finished" podID="3f8f2b5a-58c2-40be-9b42-6a474e13955a" containerID="3b110e4e2907080145bb24fce0ac0bd09ee84cbff3c27dde0436929ceb9d6fae" exitCode=0 Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.946848 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58fb6fdf7-djbqf" event={"ID":"3f8f2b5a-58c2-40be-9b42-6a474e13955a","Type":"ContainerDied","Data":"3b110e4e2907080145bb24fce0ac0bd09ee84cbff3c27dde0436929ceb9d6fae"} Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.948898 4624 generic.go:334] "Generic (PLEG): container finished" podID="91c69013-9ea8-41d8-a439-c85e7ab45e06" containerID="baecfff2316ec57d4e3c275cfaaa7acd962d24019d9c34f9c71cfbebcacc6a83" exitCode=0 Oct 08 14:39:48 crc kubenswrapper[4624]: I1008 14:39:48.948946 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"91c69013-9ea8-41d8-a439-c85e7ab45e06","Type":"ContainerDied","Data":"baecfff2316ec57d4e3c275cfaaa7acd962d24019d9c34f9c71cfbebcacc6a83"} Oct 08 14:39:49 crc kubenswrapper[4624]: I1008 14:39:49.088425 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-etc-swift\") pod \"swift-storage-0\" (UID: \"a6d0a5f4-de63-4141-addf-72f5d787cb24\") " pod="openstack/swift-storage-0" Oct 08 14:39:49 crc kubenswrapper[4624]: E1008 14:39:49.088541 4624 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 08 14:39:49 crc kubenswrapper[4624]: E1008 14:39:49.088556 4624 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 08 14:39:49 crc kubenswrapper[4624]: E1008 14:39:49.088600 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-etc-swift podName:a6d0a5f4-de63-4141-addf-72f5d787cb24 nodeName:}" failed. No retries permitted until 2025-10-08 14:39:50.088584197 +0000 UTC m=+1015.239519284 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-etc-swift") pod "swift-storage-0" (UID: "a6d0a5f4-de63-4141-addf-72f5d787cb24") : configmap "swift-ring-files" not found Oct 08 14:39:49 crc kubenswrapper[4624]: I1008 14:39:49.960956 4624 generic.go:334] "Generic (PLEG): container finished" podID="aeedef5f-f3c5-41a3-9a36-bc3830eb12c7" containerID="b5c676493960ebde3986c12a90d1a8b669c5df9ebeafdbfd0a6f3d13725173d9" exitCode=0 Oct 08 14:39:49 crc kubenswrapper[4624]: I1008 14:39:49.961245 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7","Type":"ContainerDied","Data":"b5c676493960ebde3986c12a90d1a8b669c5df9ebeafdbfd0a6f3d13725173d9"} Oct 08 14:39:49 crc kubenswrapper[4624]: I1008 14:39:49.965727 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58fb6fdf7-djbqf" event={"ID":"3f8f2b5a-58c2-40be-9b42-6a474e13955a","Type":"ContainerStarted","Data":"8ed0d7f93b0a926f9e01991a120d136e88008b2d9de432532d89189af032ef54"} Oct 08 14:39:49 crc kubenswrapper[4624]: I1008 14:39:49.965996 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58fb6fdf7-djbqf" Oct 08 14:39:49 crc kubenswrapper[4624]: I1008 14:39:49.968703 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"91c69013-9ea8-41d8-a439-c85e7ab45e06","Type":"ContainerStarted","Data":"8ee5645af8cfcbb4b86e954c7691b83cb60afce797fd94f410d926744096cad0"} Oct 08 14:39:50 crc kubenswrapper[4624]: I1008 14:39:50.061789 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58fb6fdf7-djbqf" podStartSLOduration=3.061744434 podStartE2EDuration="3.061744434s" podCreationTimestamp="2025-10-08 14:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:39:50.061211621 +0000 UTC m=+1015.212146698" watchObservedRunningTime="2025-10-08 14:39:50.061744434 +0000 UTC m=+1015.212679511" Oct 08 14:39:50 crc kubenswrapper[4624]: I1008 14:39:50.062236 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=24.561764246 podStartE2EDuration="58.062229607s" podCreationTimestamp="2025-10-08 14:38:52 +0000 UTC" firstStartedPulling="2025-10-08 14:39:10.134383237 +0000 UTC m=+975.285318324" lastFinishedPulling="2025-10-08 14:39:43.634848608 +0000 UTC m=+1008.785783685" observedRunningTime="2025-10-08 14:39:50.040931301 +0000 UTC m=+1015.191866378" watchObservedRunningTime="2025-10-08 14:39:50.062229607 +0000 UTC m=+1015.213164684" Oct 08 14:39:50 crc kubenswrapper[4624]: I1008 14:39:50.106488 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-etc-swift\") pod \"swift-storage-0\" (UID: \"a6d0a5f4-de63-4141-addf-72f5d787cb24\") " pod="openstack/swift-storage-0" Oct 08 14:39:50 crc kubenswrapper[4624]: E1008 14:39:50.108430 4624 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 08 14:39:50 crc kubenswrapper[4624]: E1008 14:39:50.108460 4624 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
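
The etc-swift mount keeps failing because it is a projected volume that references the swift-ring-files ConfigMap, which does not exist yet; the swift-ring-rebalance job that shows up at 14:39:52 below is presumably what populates it. Until then the kubelet re-queues the mount with a doubling delay, visible in these records as durationBeforeRetry 500ms, then 1s, and in the entries that follow, 2s and then 4s. A minimal Go sketch of that doubling-backoff retry loop (illustrative only, not kubelet source; mountEtcSwift and maxDelay are hypothetical stand-ins):

package main

import (
	"errors"
	"fmt"
	"time"
)

// mountEtcSwift stands in for MountVolume.SetUp: it fails while the
// swift-ring-files ConfigMap is still missing.
func mountEtcSwift(configMapExists bool) error {
	if !configMapExists {
		return errors.New(`configmap "swift-ring-files" not found`)
	}
	return nil
}

func main() {
	delay := 500 * time.Millisecond // initial durationBeforeRetry
	maxDelay := 2 * time.Minute     // assumed cap for this sketch

	for attempt := 1; ; attempt++ {
		// Pretend the ConfigMap appears once the rebalance job has run.
		err := mountEtcSwift(attempt > 4)
		if err == nil {
			fmt.Printf("attempt %d: mount succeeded\n", attempt)
			return
		}
		fmt.Printf("attempt %d: %v; no retries permitted for %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // 500ms -> 1s -> 2s -> 4s, as in the log
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

Re-queuing with a capped exponential backoff keeps a missing dependency from hot-looping the reconciler while still picking up the ConfigMap within seconds of its creation, which is why no operator intervention is needed here.
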
Oct 08 14:39:50 crc kubenswrapper[4624]: E1008 14:39:50.108506 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-etc-swift podName:a6d0a5f4-de63-4141-addf-72f5d787cb24 nodeName:}" failed. No retries permitted until 2025-10-08 14:39:52.108489863 +0000 UTC m=+1017.259424940 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-etc-swift") pod "swift-storage-0" (UID: "a6d0a5f4-de63-4141-addf-72f5d787cb24") : configmap "swift-ring-files" not found Oct 08 14:39:50 crc kubenswrapper[4624]: I1008 14:39:50.978303 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"aeedef5f-f3c5-41a3-9a36-bc3830eb12c7","Type":"ContainerStarted","Data":"b8d93f1c30767c58c96fa887725bbb0ed30df9a5e28981685fb67bcfc5046948"} Oct 08 14:39:51 crc kubenswrapper[4624]: I1008 14:39:51.047032 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371977.807762 podStartE2EDuration="59.047014269s" podCreationTimestamp="2025-10-08 14:38:52 +0000 UTC" firstStartedPulling="2025-10-08 14:39:10.124870247 +0000 UTC m=+975.275805324" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:39:51.040895594 +0000 UTC m=+1016.191830671" watchObservedRunningTime="2025-10-08 14:39:51.047014269 +0000 UTC m=+1016.197949346" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.135939 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-etc-swift\") pod \"swift-storage-0\" (UID: \"a6d0a5f4-de63-4141-addf-72f5d787cb24\") " pod="openstack/swift-storage-0" Oct 08 14:39:52 crc kubenswrapper[4624]: E1008 14:39:52.136139 4624 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 08 14:39:52 crc kubenswrapper[4624]: E1008 14:39:52.136178 4624 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 08 14:39:52 crc kubenswrapper[4624]: E1008 14:39:52.136244 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-etc-swift podName:a6d0a5f4-de63-4141-addf-72f5d787cb24 nodeName:}" failed. No retries permitted until 2025-10-08 14:39:56.136221124 +0000 UTC m=+1021.287156201 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-etc-swift") pod "swift-storage-0" (UID: "a6d0a5f4-de63-4141-addf-72f5d787cb24") : configmap "swift-ring-files" not found Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.226400 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-mf55t"] Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.227603 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-mf55t" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.232293 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.232844 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.232851 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.237135 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e062236-9287-44c5-bc5e-4c0524767c86-combined-ca-bundle\") pod \"swift-ring-rebalance-mf55t\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") " pod="openstack/swift-ring-rebalance-mf55t" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.237179 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfnzz\" (UniqueName: \"kubernetes.io/projected/0e062236-9287-44c5-bc5e-4c0524767c86-kube-api-access-dfnzz\") pod \"swift-ring-rebalance-mf55t\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") " pod="openstack/swift-ring-rebalance-mf55t" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.237210 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0e062236-9287-44c5-bc5e-4c0524767c86-etc-swift\") pod \"swift-ring-rebalance-mf55t\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") " pod="openstack/swift-ring-rebalance-mf55t" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.237235 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0e062236-9287-44c5-bc5e-4c0524767c86-ring-data-devices\") pod \"swift-ring-rebalance-mf55t\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") " pod="openstack/swift-ring-rebalance-mf55t" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.237269 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0e062236-9287-44c5-bc5e-4c0524767c86-dispersionconf\") pod \"swift-ring-rebalance-mf55t\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") " pod="openstack/swift-ring-rebalance-mf55t" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.237350 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e062236-9287-44c5-bc5e-4c0524767c86-scripts\") pod \"swift-ring-rebalance-mf55t\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") " pod="openstack/swift-ring-rebalance-mf55t" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.237413 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0e062236-9287-44c5-bc5e-4c0524767c86-swiftconf\") pod \"swift-ring-rebalance-mf55t\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") " pod="openstack/swift-ring-rebalance-mf55t" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.250158 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-mf55t"] Oct 08 
14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.276083 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-mf55t"] Oct 08 14:39:52 crc kubenswrapper[4624]: E1008 14:39:52.276907 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-dfnzz ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-mf55t" podUID="0e062236-9287-44c5-bc5e-4c0524767c86" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.287208 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-v4bjq"] Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.288454 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.310843 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-v4bjq"] Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.338205 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e062236-9287-44c5-bc5e-4c0524767c86-combined-ca-bundle\") pod \"swift-ring-rebalance-mf55t\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") " pod="openstack/swift-ring-rebalance-mf55t" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.338326 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8f4p\" (UniqueName: \"kubernetes.io/projected/d112c8ce-f2c4-43a1-9ae8-e155473d5831-kube-api-access-b8f4p\") pod \"swift-ring-rebalance-v4bjq\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.338415 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfnzz\" (UniqueName: \"kubernetes.io/projected/0e062236-9287-44c5-bc5e-4c0524767c86-kube-api-access-dfnzz\") pod \"swift-ring-rebalance-mf55t\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") " pod="openstack/swift-ring-rebalance-mf55t" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.338492 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0e062236-9287-44c5-bc5e-4c0524767c86-etc-swift\") pod \"swift-ring-rebalance-mf55t\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") " pod="openstack/swift-ring-rebalance-mf55t" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.338561 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d112c8ce-f2c4-43a1-9ae8-e155473d5831-etc-swift\") pod \"swift-ring-rebalance-v4bjq\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.338655 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0e062236-9287-44c5-bc5e-4c0524767c86-ring-data-devices\") pod \"swift-ring-rebalance-mf55t\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") " pod="openstack/swift-ring-rebalance-mf55t" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.338737 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d112c8ce-f2c4-43a1-9ae8-e155473d5831-ring-data-devices\") pod \"swift-ring-rebalance-v4bjq\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.338841 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d112c8ce-f2c4-43a1-9ae8-e155473d5831-swiftconf\") pod \"swift-ring-rebalance-v4bjq\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.338930 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0e062236-9287-44c5-bc5e-4c0524767c86-dispersionconf\") pod \"swift-ring-rebalance-mf55t\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") " pod="openstack/swift-ring-rebalance-mf55t" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.339024 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d112c8ce-f2c4-43a1-9ae8-e155473d5831-scripts\") pod \"swift-ring-rebalance-v4bjq\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.339121 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d112c8ce-f2c4-43a1-9ae8-e155473d5831-dispersionconf\") pod \"swift-ring-rebalance-v4bjq\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.339201 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e062236-9287-44c5-bc5e-4c0524767c86-scripts\") pod \"swift-ring-rebalance-mf55t\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") " pod="openstack/swift-ring-rebalance-mf55t" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.339326 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0e062236-9287-44c5-bc5e-4c0524767c86-swiftconf\") pod \"swift-ring-rebalance-mf55t\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") " pod="openstack/swift-ring-rebalance-mf55t" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.339440 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d112c8ce-f2c4-43a1-9ae8-e155473d5831-combined-ca-bundle\") pod \"swift-ring-rebalance-v4bjq\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.339841 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0e062236-9287-44c5-bc5e-4c0524767c86-etc-swift\") pod \"swift-ring-rebalance-mf55t\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") " pod="openstack/swift-ring-rebalance-mf55t" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.340481 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/0e062236-9287-44c5-bc5e-4c0524767c86-scripts\") pod \"swift-ring-rebalance-mf55t\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") " pod="openstack/swift-ring-rebalance-mf55t" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.341027 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0e062236-9287-44c5-bc5e-4c0524767c86-ring-data-devices\") pod \"swift-ring-rebalance-mf55t\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") " pod="openstack/swift-ring-rebalance-mf55t" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.351134 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0e062236-9287-44c5-bc5e-4c0524767c86-swiftconf\") pod \"swift-ring-rebalance-mf55t\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") " pod="openstack/swift-ring-rebalance-mf55t" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.351506 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0e062236-9287-44c5-bc5e-4c0524767c86-dispersionconf\") pod \"swift-ring-rebalance-mf55t\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") " pod="openstack/swift-ring-rebalance-mf55t" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.374331 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e062236-9287-44c5-bc5e-4c0524767c86-combined-ca-bundle\") pod \"swift-ring-rebalance-mf55t\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") " pod="openstack/swift-ring-rebalance-mf55t" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.396309 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfnzz\" (UniqueName: \"kubernetes.io/projected/0e062236-9287-44c5-bc5e-4c0524767c86-kube-api-access-dfnzz\") pod \"swift-ring-rebalance-mf55t\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") " pod="openstack/swift-ring-rebalance-mf55t" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.440893 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d112c8ce-f2c4-43a1-9ae8-e155473d5831-combined-ca-bundle\") pod \"swift-ring-rebalance-v4bjq\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.440968 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8f4p\" (UniqueName: \"kubernetes.io/projected/d112c8ce-f2c4-43a1-9ae8-e155473d5831-kube-api-access-b8f4p\") pod \"swift-ring-rebalance-v4bjq\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.441006 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d112c8ce-f2c4-43a1-9ae8-e155473d5831-etc-swift\") pod \"swift-ring-rebalance-v4bjq\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.441038 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d112c8ce-f2c4-43a1-9ae8-e155473d5831-ring-data-devices\") pod 
\"swift-ring-rebalance-v4bjq\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.441064 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d112c8ce-f2c4-43a1-9ae8-e155473d5831-swiftconf\") pod \"swift-ring-rebalance-v4bjq\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.441103 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d112c8ce-f2c4-43a1-9ae8-e155473d5831-scripts\") pod \"swift-ring-rebalance-v4bjq\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.441153 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d112c8ce-f2c4-43a1-9ae8-e155473d5831-dispersionconf\") pod \"swift-ring-rebalance-v4bjq\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.442088 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d112c8ce-f2c4-43a1-9ae8-e155473d5831-ring-data-devices\") pod \"swift-ring-rebalance-v4bjq\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.442141 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d112c8ce-f2c4-43a1-9ae8-e155473d5831-etc-swift\") pod \"swift-ring-rebalance-v4bjq\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.442904 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d112c8ce-f2c4-43a1-9ae8-e155473d5831-scripts\") pod \"swift-ring-rebalance-v4bjq\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.444495 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d112c8ce-f2c4-43a1-9ae8-e155473d5831-combined-ca-bundle\") pod \"swift-ring-rebalance-v4bjq\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.444575 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d112c8ce-f2c4-43a1-9ae8-e155473d5831-dispersionconf\") pod \"swift-ring-rebalance-v4bjq\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.444790 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d112c8ce-f2c4-43a1-9ae8-e155473d5831-swiftconf\") pod \"swift-ring-rebalance-v4bjq\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.461549 4624 
Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.461549 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8f4p\" (UniqueName: \"kubernetes.io/projected/d112c8ce-f2c4-43a1-9ae8-e155473d5831-kube-api-access-b8f4p\") pod \"swift-ring-rebalance-v4bjq\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " pod="openstack/swift-ring-rebalance-v4bjq"
Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.604593 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v4bjq"
Oct 08 14:39:52 crc kubenswrapper[4624]: I1008 14:39:52.992310 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mf55t"
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.004459 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mf55t"
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.050566 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0e062236-9287-44c5-bc5e-4c0524767c86-ring-data-devices\") pod \"0e062236-9287-44c5-bc5e-4c0524767c86\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") "
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.050649 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e062236-9287-44c5-bc5e-4c0524767c86-scripts\") pod \"0e062236-9287-44c5-bc5e-4c0524767c86\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") "
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.050694 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfnzz\" (UniqueName: \"kubernetes.io/projected/0e062236-9287-44c5-bc5e-4c0524767c86-kube-api-access-dfnzz\") pod \"0e062236-9287-44c5-bc5e-4c0524767c86\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") "
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.050732 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0e062236-9287-44c5-bc5e-4c0524767c86-dispersionconf\") pod \"0e062236-9287-44c5-bc5e-4c0524767c86\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") "
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.050767 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e062236-9287-44c5-bc5e-4c0524767c86-combined-ca-bundle\") pod \"0e062236-9287-44c5-bc5e-4c0524767c86\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") "
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.050837 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0e062236-9287-44c5-bc5e-4c0524767c86-swiftconf\") pod \"0e062236-9287-44c5-bc5e-4c0524767c86\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") "
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.050895 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0e062236-9287-44c5-bc5e-4c0524767c86-etc-swift\") pod \"0e062236-9287-44c5-bc5e-4c0524767c86\" (UID: \"0e062236-9287-44c5-bc5e-4c0524767c86\") "
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.051597 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e062236-9287-44c5-bc5e-4c0524767c86-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "0e062236-9287-44c5-bc5e-4c0524767c86" (UID: "0e062236-9287-44c5-bc5e-4c0524767c86"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.051662 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e062236-9287-44c5-bc5e-4c0524767c86-scripts" (OuterVolumeSpecName: "scripts") pod "0e062236-9287-44c5-bc5e-4c0524767c86" (UID: "0e062236-9287-44c5-bc5e-4c0524767c86"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.051777 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e062236-9287-44c5-bc5e-4c0524767c86-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0e062236-9287-44c5-bc5e-4c0524767c86" (UID: "0e062236-9287-44c5-bc5e-4c0524767c86"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.052326 4624 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0e062236-9287-44c5-bc5e-4c0524767c86-etc-swift\") on node \"crc\" DevicePath \"\""
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.052351 4624 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0e062236-9287-44c5-bc5e-4c0524767c86-ring-data-devices\") on node \"crc\" DevicePath \"\""
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.052363 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e062236-9287-44c5-bc5e-4c0524767c86-scripts\") on node \"crc\" DevicePath \"\""
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.055779 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e062236-9287-44c5-bc5e-4c0524767c86-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "0e062236-9287-44c5-bc5e-4c0524767c86" (UID: "0e062236-9287-44c5-bc5e-4c0524767c86"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.056425 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e062236-9287-44c5-bc5e-4c0524767c86-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "0e062236-9287-44c5-bc5e-4c0524767c86" (UID: "0e062236-9287-44c5-bc5e-4c0524767c86"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.056440 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e062236-9287-44c5-bc5e-4c0524767c86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e062236-9287-44c5-bc5e-4c0524767c86" (UID: "0e062236-9287-44c5-bc5e-4c0524767c86"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.056432 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e062236-9287-44c5-bc5e-4c0524767c86-kube-api-access-dfnzz" (OuterVolumeSpecName: "kube-api-access-dfnzz") pod "0e062236-9287-44c5-bc5e-4c0524767c86" (UID: "0e062236-9287-44c5-bc5e-4c0524767c86"). InnerVolumeSpecName "kube-api-access-dfnzz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.154181 4624 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0e062236-9287-44c5-bc5e-4c0524767c86-swiftconf\") on node \"crc\" DevicePath \"\""
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.154219 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfnzz\" (UniqueName: \"kubernetes.io/projected/0e062236-9287-44c5-bc5e-4c0524767c86-kube-api-access-dfnzz\") on node \"crc\" DevicePath \"\""
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.154229 4624 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0e062236-9287-44c5-bc5e-4c0524767c86-dispersionconf\") on node \"crc\" DevicePath \"\""
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.154238 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e062236-9287-44c5-bc5e-4c0524767c86-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 14:39:53 crc kubenswrapper[4624]: I1008 14:39:53.998797 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mf55t"
Oct 08 14:39:54 crc kubenswrapper[4624]: I1008 14:39:54.048925 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-mf55t"]
Oct 08 14:39:54 crc kubenswrapper[4624]: I1008 14:39:54.056573 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-mf55t"]
Oct 08 14:39:54 crc kubenswrapper[4624]: I1008 14:39:54.078486 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Oct 08 14:39:54 crc kubenswrapper[4624]: I1008 14:39:54.079424 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Oct 08 14:39:54 crc kubenswrapper[4624]: I1008 14:39:54.188854 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Oct 08 14:39:54 crc kubenswrapper[4624]: I1008 14:39:54.188901 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Oct 08 14:39:54 crc kubenswrapper[4624]: I1008 14:39:54.868535 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-v4bjq"]
Oct 08 14:39:55 crc kubenswrapper[4624]: I1008 14:39:55.025343 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"85e0be3f-32a4-42c9-9fe5-f3bfca740477","Type":"ContainerStarted","Data":"b27bd4af2a7765025c0afa3a8399070c228e6a703e2632365b92266b88b36282"}
Oct 08 14:39:55 crc kubenswrapper[4624]: I1008 14:39:55.040185 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2799503e-5a0b-4631-9946-f335b8446b53","Type":"ContainerStarted","Data":"6fb2cfc7597d43ebceb7e754c3b5cb0a91885f7c1ead2b3a9314cdab941b4e30"}
Oct 08 14:39:55 crc kubenswrapper[4624]: I1008 14:39:55.048729 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v4bjq" event={"ID":"d112c8ce-f2c4-43a1-9ae8-e155473d5831","Type":"ContainerStarted","Data":"ca7a7e6fc0aea0baac69dc0c31c358db3e2a3bbd8fccd6bc85c1d3bd79b68c36"}
Oct 08 14:39:55 crc kubenswrapper[4624]: I1008 14:39:55.077484 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.514419325 podStartE2EDuration="55.077461018s" podCreationTimestamp="2025-10-08 14:39:00 +0000 UTC" firstStartedPulling="2025-10-08 14:39:10.741893955 +0000 UTC m=+975.892829032" lastFinishedPulling="2025-10-08 14:39:54.304935648 +0000 UTC m=+1019.455870725" observedRunningTime="2025-10-08 14:39:55.052229812 +0000 UTC m=+1020.203164889" watchObservedRunningTime="2025-10-08 14:39:55.077461018 +0000 UTC m=+1020.228396095"
Oct 08 14:39:55 crc kubenswrapper[4624]: I1008 14:39:55.098566 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=9.914937214 podStartE2EDuration="53.09854354s" podCreationTimestamp="2025-10-08 14:39:02 +0000 UTC" firstStartedPulling="2025-10-08 14:39:11.120467121 +0000 UTC m=+976.271402198" lastFinishedPulling="2025-10-08 14:39:54.304073447 +0000 UTC m=+1019.455008524" observedRunningTime="2025-10-08 14:39:55.09061022 +0000 UTC m=+1020.241545297" watchObservedRunningTime="2025-10-08 14:39:55.09854354 +0000 UTC m=+1020.249478617"
Oct 08 14:39:55 crc kubenswrapper[4624]: I1008 14:39:55.147858 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Oct 08 14:39:55 crc kubenswrapper[4624]: I1008 14:39:55.481420 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e062236-9287-44c5-bc5e-4c0524767c86" path="/var/lib/kubelet/pods/0e062236-9287-44c5-bc5e-4c0524767c86/volumes"
Oct 08 14:39:55 crc kubenswrapper[4624]: I1008 14:39:55.484400 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Oct 08 14:39:56 crc kubenswrapper[4624]: I1008 14:39:56.141473 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Oct 08 14:39:56 crc kubenswrapper[4624]: I1008 14:39:56.193150 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Oct 08 14:39:56 crc kubenswrapper[4624]: I1008 14:39:56.203813 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-etc-swift\") pod \"swift-storage-0\" (UID: \"a6d0a5f4-de63-4141-addf-72f5d787cb24\") " pod="openstack/swift-storage-0"
Oct 08 14:39:56 crc kubenswrapper[4624]: E1008 14:39:56.205922 4624 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Oct 08 14:39:56 crc kubenswrapper[4624]: E1008 14:39:56.205950 4624 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Oct 08 14:39:56 crc kubenswrapper[4624]: E1008 14:39:56.206079 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-etc-swift podName:a6d0a5f4-de63-4141-addf-72f5d787cb24 nodeName:}" failed. No retries permitted until 2025-10-08 14:40:04.205980845 +0000 UTC m=+1029.356915922 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-etc-swift") pod "swift-storage-0" (UID: "a6d0a5f4-de63-4141-addf-72f5d787cb24") : configmap "swift-ring-files" not found
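The three error lines above describe a single failure: swift-storage-0's "etc-swift" volume is a projected volume sourcing the ConfigMap swift-ring-files, which does not exist yet (presumably it is published by the swift-ring-rebalance job that started moments earlier), so MountVolume.SetUp fails and nestedpendingoperations schedules the next attempt with backoff. A durationBeforeRetry of 8s suggests several earlier failed attempts under a doubling delay, though the initial value is not visible in this excerpt. A sketch of the projected volume this implies; only the volume name, the plugin kind, and the missing ConfigMap name come from the log, everything else is assumed:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Assumed shape of swift-storage-0's "etc-swift" volume. With Optional
        // left nil (false) on the source, a missing ConfigMap fails SetUp
        // exactly as logged above, instead of mounting an empty directory.
        etcSwift := corev1.Volume{
            Name: "etc-swift",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "swift-ring-files"},
                        },
                    }},
                },
            },
        }
        fmt.Println(etcSwift.Name, etcSwift.Projected.Sources[0].ConfigMap.Name)
    }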
Oct 08 14:39:56 crc kubenswrapper[4624]: I1008 14:39:56.473884 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Oct 08 14:39:57 crc kubenswrapper[4624]: I1008 14:39:57.188354 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Oct 08 14:39:57 crc kubenswrapper[4624]: I1008 14:39:57.504872 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58fb6fdf7-djbqf"
Oct 08 14:39:57 crc kubenswrapper[4624]: I1008 14:39:57.552162 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b857bcbc9-2f55q"]
Oct 08 14:39:57 crc kubenswrapper[4624]: I1008 14:39:57.552446 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b857bcbc9-2f55q" podUID="8ed2842b-0391-43ac-b25d-2de3a1a994a2" containerName="dnsmasq-dns" containerID="cri-o://52f10e3fcc5b045c2c2c546cbc2e1f091a3e2a6c7ab9978f49aeb45daad867c7" gracePeriod=10
Oct 08 14:39:58 crc kubenswrapper[4624]: I1008 14:39:58.086577 4624 generic.go:334] "Generic (PLEG): container finished" podID="8ed2842b-0391-43ac-b25d-2de3a1a994a2" containerID="52f10e3fcc5b045c2c2c546cbc2e1f091a3e2a6c7ab9978f49aeb45daad867c7" exitCode=0
Oct 08 14:39:58 crc kubenswrapper[4624]: I1008 14:39:58.086899 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b857bcbc9-2f55q" event={"ID":"8ed2842b-0391-43ac-b25d-2de3a1a994a2","Type":"ContainerDied","Data":"52f10e3fcc5b045c2c2c546cbc2e1f091a3e2a6c7ab9978f49aeb45daad867c7"}
Oct 08 14:39:58 crc kubenswrapper[4624]: I1008 14:39:58.191400 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Oct 08 14:39:58 crc kubenswrapper[4624]: I1008 14:39:58.195018 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
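The dnsmasq-dns-5b857bcbc9-2f55q teardown above shows the normal kill path: the API DELETE leads to "Killing container with a grace period" (gracePeriod=10, in seconds), the container exits with code 0, and the PLEG (pod lifecycle event generator) relays a ContainerDied event into the sync loop. A minimal sketch of the event shape printed in those "SyncLoop (PLEG)" entries; the field names mirror what the log shows, while the Go type names are assumptions modeled on kubelet's pleg package:

    package main

    import "fmt"

    // PodLifecycleEvent mirrors the event= payload in the log lines: an owning
    // pod UID, an event type, and (for container events) the container ID as
    // opaque data.
    type PodLifeCycleEventType string

    const (
        ContainerStarted PodLifeCycleEventType = "ContainerStarted"
        ContainerDied    PodLifeCycleEventType = "ContainerDied"
    )

    type PodLifecycleEvent struct {
        ID   string // pod UID
        Type PodLifeCycleEventType
        Data interface{} // container ID for started/died events
    }

    func main() {
        e := PodLifecycleEvent{
            ID:   "8ed2842b-0391-43ac-b25d-2de3a1a994a2",
            Type: ContainerDied,
            Data: "52f10e3fcc5b045c2c2c546cbc2e1f091a3e2a6c7ab9978f49aeb45daad867c7",
        }
        fmt.Printf("event=%+v\n", e)
    }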
Need to start a new one" pod="openstack/dnsmasq-dns-5b857bcbc9-2f55q" Oct 08 14:39:58 crc kubenswrapper[4624]: I1008 14:39:58.523584 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Oct 08 14:39:58 crc kubenswrapper[4624]: I1008 14:39:58.647538 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xwgj\" (UniqueName: \"kubernetes.io/projected/8ed2842b-0391-43ac-b25d-2de3a1a994a2-kube-api-access-8xwgj\") pod \"8ed2842b-0391-43ac-b25d-2de3a1a994a2\" (UID: \"8ed2842b-0391-43ac-b25d-2de3a1a994a2\") " Oct 08 14:39:58 crc kubenswrapper[4624]: I1008 14:39:58.647885 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ed2842b-0391-43ac-b25d-2de3a1a994a2-dns-svc\") pod \"8ed2842b-0391-43ac-b25d-2de3a1a994a2\" (UID: \"8ed2842b-0391-43ac-b25d-2de3a1a994a2\") " Oct 08 14:39:58 crc kubenswrapper[4624]: I1008 14:39:58.647981 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ed2842b-0391-43ac-b25d-2de3a1a994a2-config\") pod \"8ed2842b-0391-43ac-b25d-2de3a1a994a2\" (UID: \"8ed2842b-0391-43ac-b25d-2de3a1a994a2\") " Oct 08 14:39:58 crc kubenswrapper[4624]: I1008 14:39:58.655718 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ed2842b-0391-43ac-b25d-2de3a1a994a2-kube-api-access-8xwgj" (OuterVolumeSpecName: "kube-api-access-8xwgj") pod "8ed2842b-0391-43ac-b25d-2de3a1a994a2" (UID: "8ed2842b-0391-43ac-b25d-2de3a1a994a2"). InnerVolumeSpecName "kube-api-access-8xwgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:39:58 crc kubenswrapper[4624]: I1008 14:39:58.685370 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ed2842b-0391-43ac-b25d-2de3a1a994a2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8ed2842b-0391-43ac-b25d-2de3a1a994a2" (UID: "8ed2842b-0391-43ac-b25d-2de3a1a994a2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:39:58 crc kubenswrapper[4624]: I1008 14:39:58.700013 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ed2842b-0391-43ac-b25d-2de3a1a994a2-config" (OuterVolumeSpecName: "config") pod "8ed2842b-0391-43ac-b25d-2de3a1a994a2" (UID: "8ed2842b-0391-43ac-b25d-2de3a1a994a2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:39:58 crc kubenswrapper[4624]: I1008 14:39:58.749725 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xwgj\" (UniqueName: \"kubernetes.io/projected/8ed2842b-0391-43ac-b25d-2de3a1a994a2-kube-api-access-8xwgj\") on node \"crc\" DevicePath \"\"" Oct 08 14:39:58 crc kubenswrapper[4624]: I1008 14:39:58.749766 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ed2842b-0391-43ac-b25d-2de3a1a994a2-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 14:39:58 crc kubenswrapper[4624]: I1008 14:39:58.749775 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ed2842b-0391-43ac-b25d-2de3a1a994a2-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.096446 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v4bjq" event={"ID":"d112c8ce-f2c4-43a1-9ae8-e155473d5831","Type":"ContainerStarted","Data":"81b7c9d989c22df020f58ca68474a0a7ef92e9f0443afd51360bb11a10a37e11"} Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.098955 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b857bcbc9-2f55q" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.098974 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b857bcbc9-2f55q" event={"ID":"8ed2842b-0391-43ac-b25d-2de3a1a994a2","Type":"ContainerDied","Data":"0b1027ccd8b40f7e15d67058b1d1378ceb8aa45ecaec0b6dba9ee06e2ad81483"} Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.099054 4624 scope.go:117] "RemoveContainer" containerID="52f10e3fcc5b045c2c2c546cbc2e1f091a3e2a6c7ab9978f49aeb45daad867c7" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.116966 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-v4bjq" podStartSLOduration=3.755490556 podStartE2EDuration="7.116941976s" podCreationTimestamp="2025-10-08 14:39:52 +0000 UTC" firstStartedPulling="2025-10-08 14:39:54.882046641 +0000 UTC m=+1020.032981718" lastFinishedPulling="2025-10-08 14:39:58.243498061 +0000 UTC m=+1023.394433138" observedRunningTime="2025-10-08 14:39:59.115410917 +0000 UTC m=+1024.266345994" watchObservedRunningTime="2025-10-08 14:39:59.116941976 +0000 UTC m=+1024.267877073" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.133813 4624 scope.go:117] "RemoveContainer" containerID="0cafc6988dadbb737a180954c705ebacc22308cd36235a6bc2bde73b18cda2ff" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.141727 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b857bcbc9-2f55q"] Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.147976 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b857bcbc9-2f55q"] Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.167798 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.462259 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b9d48d7bf-tv8pp"] Oct 08 14:39:59 crc kubenswrapper[4624]: E1008 14:39:59.462608 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ed2842b-0391-43ac-b25d-2de3a1a994a2" containerName="init" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 
14:39:59.462621 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ed2842b-0391-43ac-b25d-2de3a1a994a2" containerName="init" Oct 08 14:39:59 crc kubenswrapper[4624]: E1008 14:39:59.462656 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ed2842b-0391-43ac-b25d-2de3a1a994a2" containerName="dnsmasq-dns" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.462663 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ed2842b-0391-43ac-b25d-2de3a1a994a2" containerName="dnsmasq-dns" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.462835 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ed2842b-0391-43ac-b25d-2de3a1a994a2" containerName="dnsmasq-dns" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.463831 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9d48d7bf-tv8pp" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.475540 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.483399 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ed2842b-0391-43ac-b25d-2de3a1a994a2" path="/var/lib/kubelet/pods/8ed2842b-0391-43ac-b25d-2de3a1a994a2/volumes" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.498005 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b9d48d7bf-tv8pp"] Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.608709 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-bm82v"] Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.609680 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-bm82v" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.612929 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.625159 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-bm82v"] Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.670686 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5pfn\" (UniqueName: \"kubernetes.io/projected/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-kube-api-access-p5pfn\") pod \"dnsmasq-dns-7b9d48d7bf-tv8pp\" (UID: \"090d9972-af8e-4a86-b7c6-a3f04d7fa1dd\") " pod="openstack/dnsmasq-dns-7b9d48d7bf-tv8pp" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.670826 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-ovsdbserver-sb\") pod \"dnsmasq-dns-7b9d48d7bf-tv8pp\" (UID: \"090d9972-af8e-4a86-b7c6-a3f04d7fa1dd\") " pod="openstack/dnsmasq-dns-7b9d48d7bf-tv8pp" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.671087 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-dns-svc\") pod \"dnsmasq-dns-7b9d48d7bf-tv8pp\" (UID: \"090d9972-af8e-4a86-b7c6-a3f04d7fa1dd\") " pod="openstack/dnsmasq-dns-7b9d48d7bf-tv8pp" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.671329 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-config\") pod \"dnsmasq-dns-7b9d48d7bf-tv8pp\" (UID: \"090d9972-af8e-4a86-b7c6-a3f04d7fa1dd\") " pod="openstack/dnsmasq-dns-7b9d48d7bf-tv8pp" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.773481 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-dns-svc\") pod \"dnsmasq-dns-7b9d48d7bf-tv8pp\" (UID: \"090d9972-af8e-4a86-b7c6-a3f04d7fa1dd\") " pod="openstack/dnsmasq-dns-7b9d48d7bf-tv8pp" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.773549 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0eec343f-e477-47d8-b651-2f5a2a944895-ovs-rundir\") pod \"ovn-controller-metrics-bm82v\" (UID: \"0eec343f-e477-47d8-b651-2f5a2a944895\") " pod="openstack/ovn-controller-metrics-bm82v" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.773572 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0eec343f-e477-47d8-b651-2f5a2a944895-ovn-rundir\") pod \"ovn-controller-metrics-bm82v\" (UID: \"0eec343f-e477-47d8-b651-2f5a2a944895\") " pod="openstack/ovn-controller-metrics-bm82v" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.773620 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssw2j\" (UniqueName: \"kubernetes.io/projected/0eec343f-e477-47d8-b651-2f5a2a944895-kube-api-access-ssw2j\") pod \"ovn-controller-metrics-bm82v\" (UID: \"0eec343f-e477-47d8-b651-2f5a2a944895\") " pod="openstack/ovn-controller-metrics-bm82v" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.773670 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-config\") pod \"dnsmasq-dns-7b9d48d7bf-tv8pp\" (UID: \"090d9972-af8e-4a86-b7c6-a3f04d7fa1dd\") " pod="openstack/dnsmasq-dns-7b9d48d7bf-tv8pp" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.773706 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5pfn\" (UniqueName: \"kubernetes.io/projected/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-kube-api-access-p5pfn\") pod \"dnsmasq-dns-7b9d48d7bf-tv8pp\" (UID: \"090d9972-af8e-4a86-b7c6-a3f04d7fa1dd\") " pod="openstack/dnsmasq-dns-7b9d48d7bf-tv8pp" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.773728 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eec343f-e477-47d8-b651-2f5a2a944895-config\") pod \"ovn-controller-metrics-bm82v\" (UID: \"0eec343f-e477-47d8-b651-2f5a2a944895\") " pod="openstack/ovn-controller-metrics-bm82v" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.773746 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-ovsdbserver-sb\") pod \"dnsmasq-dns-7b9d48d7bf-tv8pp\" (UID: \"090d9972-af8e-4a86-b7c6-a3f04d7fa1dd\") " pod="openstack/dnsmasq-dns-7b9d48d7bf-tv8pp" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.773761 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eec343f-e477-47d8-b651-2f5a2a944895-combined-ca-bundle\") pod \"ovn-controller-metrics-bm82v\" (UID: \"0eec343f-e477-47d8-b651-2f5a2a944895\") " pod="openstack/ovn-controller-metrics-bm82v" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.773786 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eec343f-e477-47d8-b651-2f5a2a944895-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bm82v\" (UID: \"0eec343f-e477-47d8-b651-2f5a2a944895\") " pod="openstack/ovn-controller-metrics-bm82v" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.774548 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-dns-svc\") pod \"dnsmasq-dns-7b9d48d7bf-tv8pp\" (UID: \"090d9972-af8e-4a86-b7c6-a3f04d7fa1dd\") " pod="openstack/dnsmasq-dns-7b9d48d7bf-tv8pp" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.775132 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-config\") pod \"dnsmasq-dns-7b9d48d7bf-tv8pp\" (UID: \"090d9972-af8e-4a86-b7c6-a3f04d7fa1dd\") " pod="openstack/dnsmasq-dns-7b9d48d7bf-tv8pp" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.775714 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-ovsdbserver-sb\") pod \"dnsmasq-dns-7b9d48d7bf-tv8pp\" (UID: \"090d9972-af8e-4a86-b7c6-a3f04d7fa1dd\") " pod="openstack/dnsmasq-dns-7b9d48d7bf-tv8pp" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.785452 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b9d48d7bf-tv8pp"] Oct 08 14:39:59 crc kubenswrapper[4624]: E1008 14:39:59.786168 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-p5pfn], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7b9d48d7bf-tv8pp" podUID="090d9972-af8e-4a86-b7c6-a3f04d7fa1dd" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.813081 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5pfn\" (UniqueName: \"kubernetes.io/projected/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-kube-api-access-p5pfn\") pod \"dnsmasq-dns-7b9d48d7bf-tv8pp\" (UID: \"090d9972-af8e-4a86-b7c6-a3f04d7fa1dd\") " pod="openstack/dnsmasq-dns-7b9d48d7bf-tv8pp" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.820419 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64d796cf9-v2x9k"] Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.822440 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.835930 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.850786 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64d796cf9-v2x9k"] Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.877098 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssw2j\" (UniqueName: \"kubernetes.io/projected/0eec343f-e477-47d8-b651-2f5a2a944895-kube-api-access-ssw2j\") pod \"ovn-controller-metrics-bm82v\" (UID: \"0eec343f-e477-47d8-b651-2f5a2a944895\") " pod="openstack/ovn-controller-metrics-bm82v" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.879198 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eec343f-e477-47d8-b651-2f5a2a944895-config\") pod \"ovn-controller-metrics-bm82v\" (UID: \"0eec343f-e477-47d8-b651-2f5a2a944895\") " pod="openstack/ovn-controller-metrics-bm82v" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.879406 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eec343f-e477-47d8-b651-2f5a2a944895-combined-ca-bundle\") pod \"ovn-controller-metrics-bm82v\" (UID: \"0eec343f-e477-47d8-b651-2f5a2a944895\") " pod="openstack/ovn-controller-metrics-bm82v" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.879553 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eec343f-e477-47d8-b651-2f5a2a944895-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bm82v\" (UID: \"0eec343f-e477-47d8-b651-2f5a2a944895\") " pod="openstack/ovn-controller-metrics-bm82v" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.879729 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0eec343f-e477-47d8-b651-2f5a2a944895-ovs-rundir\") pod \"ovn-controller-metrics-bm82v\" (UID: \"0eec343f-e477-47d8-b651-2f5a2a944895\") " pod="openstack/ovn-controller-metrics-bm82v" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.879983 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0eec343f-e477-47d8-b651-2f5a2a944895-ovn-rundir\") pod \"ovn-controller-metrics-bm82v\" (UID: \"0eec343f-e477-47d8-b651-2f5a2a944895\") " pod="openstack/ovn-controller-metrics-bm82v" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.881014 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0eec343f-e477-47d8-b651-2f5a2a944895-ovn-rundir\") pod \"ovn-controller-metrics-bm82v\" (UID: \"0eec343f-e477-47d8-b651-2f5a2a944895\") " pod="openstack/ovn-controller-metrics-bm82v" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.883155 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0eec343f-e477-47d8-b651-2f5a2a944895-ovs-rundir\") pod \"ovn-controller-metrics-bm82v\" (UID: \"0eec343f-e477-47d8-b651-2f5a2a944895\") " pod="openstack/ovn-controller-metrics-bm82v" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.883586 4624 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eec343f-e477-47d8-b651-2f5a2a944895-config\") pod \"ovn-controller-metrics-bm82v\" (UID: \"0eec343f-e477-47d8-b651-2f5a2a944895\") " pod="openstack/ovn-controller-metrics-bm82v" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.888579 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eec343f-e477-47d8-b651-2f5a2a944895-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bm82v\" (UID: \"0eec343f-e477-47d8-b651-2f5a2a944895\") " pod="openstack/ovn-controller-metrics-bm82v" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.905931 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eec343f-e477-47d8-b651-2f5a2a944895-combined-ca-bundle\") pod \"ovn-controller-metrics-bm82v\" (UID: \"0eec343f-e477-47d8-b651-2f5a2a944895\") " pod="openstack/ovn-controller-metrics-bm82v" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.921708 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssw2j\" (UniqueName: \"kubernetes.io/projected/0eec343f-e477-47d8-b651-2f5a2a944895-kube-api-access-ssw2j\") pod \"ovn-controller-metrics-bm82v\" (UID: \"0eec343f-e477-47d8-b651-2f5a2a944895\") " pod="openstack/ovn-controller-metrics-bm82v" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.928536 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-bm82v" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.988075 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-dns-svc\") pod \"dnsmasq-dns-64d796cf9-v2x9k\" (UID: \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\") " pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.988518 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-config\") pod \"dnsmasq-dns-64d796cf9-v2x9k\" (UID: \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\") " pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.988711 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-ovsdbserver-nb\") pod \"dnsmasq-dns-64d796cf9-v2x9k\" (UID: \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\") " pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.988998 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-ovsdbserver-sb\") pod \"dnsmasq-dns-64d796cf9-v2x9k\" (UID: \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\") " pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" Oct 08 14:39:59 crc kubenswrapper[4624]: I1008 14:39:59.989119 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsg6g\" (UniqueName: \"kubernetes.io/projected/e26e9ed3-4063-4daf-bdfc-84c096ce4569-kube-api-access-tsg6g\") pod 
\"dnsmasq-dns-64d796cf9-v2x9k\" (UID: \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\") " pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.091712 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-dns-svc\") pod \"dnsmasq-dns-64d796cf9-v2x9k\" (UID: \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\") " pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.092212 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-dns-svc\") pod \"dnsmasq-dns-64d796cf9-v2x9k\" (UID: \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\") " pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.092348 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-config\") pod \"dnsmasq-dns-64d796cf9-v2x9k\" (UID: \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\") " pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.092380 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-ovsdbserver-nb\") pod \"dnsmasq-dns-64d796cf9-v2x9k\" (UID: \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\") " pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.092509 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-ovsdbserver-sb\") pod \"dnsmasq-dns-64d796cf9-v2x9k\" (UID: \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\") " pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.092542 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsg6g\" (UniqueName: \"kubernetes.io/projected/e26e9ed3-4063-4daf-bdfc-84c096ce4569-kube-api-access-tsg6g\") pod \"dnsmasq-dns-64d796cf9-v2x9k\" (UID: \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\") " pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.093264 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-ovsdbserver-nb\") pod \"dnsmasq-dns-64d796cf9-v2x9k\" (UID: \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\") " pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.093614 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-ovsdbserver-sb\") pod \"dnsmasq-dns-64d796cf9-v2x9k\" (UID: \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\") " pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.093966 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-config\") pod \"dnsmasq-dns-64d796cf9-v2x9k\" (UID: \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\") " pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" Oct 08 14:40:00 crc 
kubenswrapper[4624]: I1008 14:40:00.122287 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9d48d7bf-tv8pp" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.141007 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsg6g\" (UniqueName: \"kubernetes.io/projected/e26e9ed3-4063-4daf-bdfc-84c096ce4569-kube-api-access-tsg6g\") pod \"dnsmasq-dns-64d796cf9-v2x9k\" (UID: \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\") " pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.144262 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9d48d7bf-tv8pp" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.181175 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.211378 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-bm82v"] Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.289745 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.295545 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-dns-svc\") pod \"090d9972-af8e-4a86-b7c6-a3f04d7fa1dd\" (UID: \"090d9972-af8e-4a86-b7c6-a3f04d7fa1dd\") " Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.295816 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5pfn\" (UniqueName: \"kubernetes.io/projected/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-kube-api-access-p5pfn\") pod \"090d9972-af8e-4a86-b7c6-a3f04d7fa1dd\" (UID: \"090d9972-af8e-4a86-b7c6-a3f04d7fa1dd\") " Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.296544 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-ovsdbserver-sb\") pod \"090d9972-af8e-4a86-b7c6-a3f04d7fa1dd\" (UID: \"090d9972-af8e-4a86-b7c6-a3f04d7fa1dd\") " Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.297107 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-config\") pod \"090d9972-af8e-4a86-b7c6-a3f04d7fa1dd\" (UID: \"090d9972-af8e-4a86-b7c6-a3f04d7fa1dd\") " Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.296143 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "090d9972-af8e-4a86-b7c6-a3f04d7fa1dd" (UID: "090d9972-af8e-4a86-b7c6-a3f04d7fa1dd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.297262 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "090d9972-af8e-4a86-b7c6-a3f04d7fa1dd" (UID: "090d9972-af8e-4a86-b7c6-a3f04d7fa1dd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.297420 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-config" (OuterVolumeSpecName: "config") pod "090d9972-af8e-4a86-b7c6-a3f04d7fa1dd" (UID: "090d9972-af8e-4a86-b7c6-a3f04d7fa1dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.298506 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.298612 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.298832 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.301171 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-kube-api-access-p5pfn" (OuterVolumeSpecName: "kube-api-access-p5pfn") pod "090d9972-af8e-4a86-b7c6-a3f04d7fa1dd" (UID: "090d9972-af8e-4a86-b7c6-a3f04d7fa1dd"). InnerVolumeSpecName "kube-api-access-p5pfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.403858 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.411091 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5pfn\" (UniqueName: \"kubernetes.io/projected/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd-kube-api-access-p5pfn\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.736228 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64d796cf9-v2x9k"] Oct 08 14:40:00 crc kubenswrapper[4624]: W1008 14:40:00.750154 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode26e9ed3_4063_4daf_bdfc_84c096ce4569.slice/crio-9f12867599305419b5b6217bb0b1f448ce79b3078cb5dddabe91f4a39cc91937 WatchSource:0}: Error finding container 9f12867599305419b5b6217bb0b1f448ce79b3078cb5dddabe91f4a39cc91937: Status 404 returned error can't find the container with id 9f12867599305419b5b6217bb0b1f448ce79b3078cb5dddabe91f4a39cc91937 Oct 08 14:40:00 crc kubenswrapper[4624]: I1008 14:40:00.965812 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 08 14:40:01 crc kubenswrapper[4624]: I1008 14:40:01.131303 4624 generic.go:334] "Generic (PLEG): container finished" podID="e26e9ed3-4063-4daf-bdfc-84c096ce4569" containerID="990909d108f36fa77dd328bbbffcca7a978fc009e2a0fc6a375f5af5079b7911" exitCode=0 Oct 08 14:40:01 crc kubenswrapper[4624]: I1008 14:40:01.131778 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" 
event={"ID":"e26e9ed3-4063-4daf-bdfc-84c096ce4569","Type":"ContainerDied","Data":"990909d108f36fa77dd328bbbffcca7a978fc009e2a0fc6a375f5af5079b7911"} Oct 08 14:40:01 crc kubenswrapper[4624]: I1008 14:40:01.131950 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" event={"ID":"e26e9ed3-4063-4daf-bdfc-84c096ce4569","Type":"ContainerStarted","Data":"9f12867599305419b5b6217bb0b1f448ce79b3078cb5dddabe91f4a39cc91937"} Oct 08 14:40:01 crc kubenswrapper[4624]: I1008 14:40:01.134821 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-bm82v" event={"ID":"0eec343f-e477-47d8-b651-2f5a2a944895","Type":"ContainerStarted","Data":"2b9c58857e8ea1b1e0c5f7d8857c825909e8f4f1afb78277df95cb6552848fc2"} Oct 08 14:40:01 crc kubenswrapper[4624]: I1008 14:40:01.135031 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-bm82v" event={"ID":"0eec343f-e477-47d8-b651-2f5a2a944895","Type":"ContainerStarted","Data":"602c8ff6012ed113ade55c8b7749f647aadc7cd4926a1f22a75fbea52a48e674"} Oct 08 14:40:01 crc kubenswrapper[4624]: I1008 14:40:01.135161 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9d48d7bf-tv8pp" Oct 08 14:40:01 crc kubenswrapper[4624]: I1008 14:40:01.300086 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-bm82v" podStartSLOduration=2.300051623 podStartE2EDuration="2.300051623s" podCreationTimestamp="2025-10-08 14:39:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:40:01.299336175 +0000 UTC m=+1026.450271272" watchObservedRunningTime="2025-10-08 14:40:01.300051623 +0000 UTC m=+1026.450986720" Oct 08 14:40:01 crc kubenswrapper[4624]: I1008 14:40:01.408836 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:40:01 crc kubenswrapper[4624]: I1008 14:40:01.498311 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b9d48d7bf-tv8pp"] Oct 08 14:40:01 crc kubenswrapper[4624]: I1008 14:40:01.513992 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b9d48d7bf-tv8pp"] Oct 08 14:40:01 crc kubenswrapper[4624]: I1008 14:40:01.576284 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-wkgnr"] Oct 08 14:40:01 crc kubenswrapper[4624]: I1008 14:40:01.578185 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-wkgnr" Oct 08 14:40:01 crc kubenswrapper[4624]: I1008 14:40:01.693054 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtskq\" (UniqueName: \"kubernetes.io/projected/ae7533e2-e271-4dcf-94c1-6bf63f0c4c54-kube-api-access-gtskq\") pod \"cinder-db-create-wkgnr\" (UID: \"ae7533e2-e271-4dcf-94c1-6bf63f0c4c54\") " pod="openstack/cinder-db-create-wkgnr" Oct 08 14:40:01 crc kubenswrapper[4624]: I1008 14:40:01.717511 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wkgnr"] Oct 08 14:40:01 crc kubenswrapper[4624]: I1008 14:40:01.791316 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Oct 08 14:40:01 crc kubenswrapper[4624]: I1008 14:40:01.796290 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtskq\" (UniqueName: \"kubernetes.io/projected/ae7533e2-e271-4dcf-94c1-6bf63f0c4c54-kube-api-access-gtskq\") pod \"cinder-db-create-wkgnr\" (UID: \"ae7533e2-e271-4dcf-94c1-6bf63f0c4c54\") " pod="openstack/cinder-db-create-wkgnr" Oct 08 14:40:01 crc kubenswrapper[4624]: I1008 14:40:01.900392 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtskq\" (UniqueName: \"kubernetes.io/projected/ae7533e2-e271-4dcf-94c1-6bf63f0c4c54-kube-api-access-gtskq\") pod \"cinder-db-create-wkgnr\" (UID: \"ae7533e2-e271-4dcf-94c1-6bf63f0c4c54\") " pod="openstack/cinder-db-create-wkgnr" Oct 08 14:40:01 crc kubenswrapper[4624]: I1008 14:40:01.907077 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wkgnr" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.015037 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-zvn72"] Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.016316 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zvn72" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.042436 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-zvn72"] Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.111087 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-4jnw5"] Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.116887 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-4jnw5" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.118138 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q9nw\" (UniqueName: \"kubernetes.io/projected/65bca6e6-25cd-4879-a092-b9e24b2aa1e3-kube-api-access-5q9nw\") pod \"barbican-db-create-zvn72\" (UID: \"65bca6e6-25cd-4879-a092-b9e24b2aa1e3\") " pod="openstack/barbican-db-create-zvn72" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.164269 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" event={"ID":"e26e9ed3-4063-4daf-bdfc-84c096ce4569","Type":"ContainerStarted","Data":"74c14307dd2c5bea336ff0746f37b2d818f97a2b87a521a88f58f2015a34829a"} Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.164326 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.210110 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" podStartSLOduration=3.210089231 podStartE2EDuration="3.210089231s" podCreationTimestamp="2025-10-08 14:39:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:40:02.207936676 +0000 UTC m=+1027.358871753" watchObservedRunningTime="2025-10-08 14:40:02.210089231 +0000 UTC m=+1027.361024318" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.220187 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q9nw\" (UniqueName: \"kubernetes.io/projected/65bca6e6-25cd-4879-a092-b9e24b2aa1e3-kube-api-access-5q9nw\") pod \"barbican-db-create-zvn72\" (UID: \"65bca6e6-25cd-4879-a092-b9e24b2aa1e3\") " pod="openstack/barbican-db-create-zvn72" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.220258 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwwzx\" (UniqueName: \"kubernetes.io/projected/0e56418a-dbf3-46b1-9f37-c004b9abf454-kube-api-access-pwwzx\") pod \"heat-db-create-4jnw5\" (UID: \"0e56418a-dbf3-46b1-9f37-c004b9abf454\") " pod="openstack/heat-db-create-4jnw5" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.235716 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-4jnw5"] Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.268544 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q9nw\" (UniqueName: \"kubernetes.io/projected/65bca6e6-25cd-4879-a092-b9e24b2aa1e3-kube-api-access-5q9nw\") pod \"barbican-db-create-zvn72\" (UID: \"65bca6e6-25cd-4879-a092-b9e24b2aa1e3\") " pod="openstack/barbican-db-create-zvn72" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.326442 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwwzx\" (UniqueName: \"kubernetes.io/projected/0e56418a-dbf3-46b1-9f37-c004b9abf454-kube-api-access-pwwzx\") pod \"heat-db-create-4jnw5\" (UID: \"0e56418a-dbf3-46b1-9f37-c004b9abf454\") " pod="openstack/heat-db-create-4jnw5" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.339399 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-zvn72" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.371275 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwwzx\" (UniqueName: \"kubernetes.io/projected/0e56418a-dbf3-46b1-9f37-c004b9abf454-kube-api-access-pwwzx\") pod \"heat-db-create-4jnw5\" (UID: \"0e56418a-dbf3-46b1-9f37-c004b9abf454\") " pod="openstack/heat-db-create-4jnw5" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.446569 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-4jnw5" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.453198 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-6xs6m"] Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.454440 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6xs6m" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.472286 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6xs6m"] Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.532579 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm4wx\" (UniqueName: \"kubernetes.io/projected/a449627c-78bf-4c07-bda1-f9f3005d4782-kube-api-access-sm4wx\") pod \"neutron-db-create-6xs6m\" (UID: \"a449627c-78bf-4c07-bda1-f9f3005d4782\") " pod="openstack/neutron-db-create-6xs6m" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.635275 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm4wx\" (UniqueName: \"kubernetes.io/projected/a449627c-78bf-4c07-bda1-f9f3005d4782-kube-api-access-sm4wx\") pod \"neutron-db-create-6xs6m\" (UID: \"a449627c-78bf-4c07-bda1-f9f3005d4782\") " pod="openstack/neutron-db-create-6xs6m" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.661809 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.663503 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.675061 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.675479 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.681238 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-nbbk2" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.681682 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.683447 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm4wx\" (UniqueName: \"kubernetes.io/projected/a449627c-78bf-4c07-bda1-f9f3005d4782-kube-api-access-sm4wx\") pod \"neutron-db-create-6xs6m\" (UID: \"a449627c-78bf-4c07-bda1-f9f3005d4782\") " pod="openstack/neutron-db-create-6xs6m" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.714010 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.737056 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/226458e6-33a0-4123-8aaf-b3950a30d1c9-config\") pod \"ovn-northd-0\" (UID: \"226458e6-33a0-4123-8aaf-b3950a30d1c9\") " pod="openstack/ovn-northd-0" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.737377 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/226458e6-33a0-4123-8aaf-b3950a30d1c9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"226458e6-33a0-4123-8aaf-b3950a30d1c9\") " pod="openstack/ovn-northd-0" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.737501 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226458e6-33a0-4123-8aaf-b3950a30d1c9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"226458e6-33a0-4123-8aaf-b3950a30d1c9\") " pod="openstack/ovn-northd-0" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.737561 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/226458e6-33a0-4123-8aaf-b3950a30d1c9-scripts\") pod \"ovn-northd-0\" (UID: \"226458e6-33a0-4123-8aaf-b3950a30d1c9\") " pod="openstack/ovn-northd-0" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.737595 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/226458e6-33a0-4123-8aaf-b3950a30d1c9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"226458e6-33a0-4123-8aaf-b3950a30d1c9\") " pod="openstack/ovn-northd-0" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.737666 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/226458e6-33a0-4123-8aaf-b3950a30d1c9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"226458e6-33a0-4123-8aaf-b3950a30d1c9\") " pod="openstack/ovn-northd-0" Oct 08 14:40:02 crc kubenswrapper[4624]: 
I1008 14:40:02.737708 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78vdb\" (UniqueName: \"kubernetes.io/projected/226458e6-33a0-4123-8aaf-b3950a30d1c9-kube-api-access-78vdb\") pod \"ovn-northd-0\" (UID: \"226458e6-33a0-4123-8aaf-b3950a30d1c9\") " pod="openstack/ovn-northd-0" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.797882 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6xs6m" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.832628 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wkgnr"] Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.840394 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/226458e6-33a0-4123-8aaf-b3950a30d1c9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"226458e6-33a0-4123-8aaf-b3950a30d1c9\") " pod="openstack/ovn-northd-0" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.840467 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/226458e6-33a0-4123-8aaf-b3950a30d1c9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"226458e6-33a0-4123-8aaf-b3950a30d1c9\") " pod="openstack/ovn-northd-0" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.840495 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78vdb\" (UniqueName: \"kubernetes.io/projected/226458e6-33a0-4123-8aaf-b3950a30d1c9-kube-api-access-78vdb\") pod \"ovn-northd-0\" (UID: \"226458e6-33a0-4123-8aaf-b3950a30d1c9\") " pod="openstack/ovn-northd-0" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.840538 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/226458e6-33a0-4123-8aaf-b3950a30d1c9-config\") pod \"ovn-northd-0\" (UID: \"226458e6-33a0-4123-8aaf-b3950a30d1c9\") " pod="openstack/ovn-northd-0" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.840581 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/226458e6-33a0-4123-8aaf-b3950a30d1c9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"226458e6-33a0-4123-8aaf-b3950a30d1c9\") " pod="openstack/ovn-northd-0" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.840777 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226458e6-33a0-4123-8aaf-b3950a30d1c9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"226458e6-33a0-4123-8aaf-b3950a30d1c9\") " pod="openstack/ovn-northd-0" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.840863 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/226458e6-33a0-4123-8aaf-b3950a30d1c9-scripts\") pod \"ovn-northd-0\" (UID: \"226458e6-33a0-4123-8aaf-b3950a30d1c9\") " pod="openstack/ovn-northd-0" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.842139 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/226458e6-33a0-4123-8aaf-b3950a30d1c9-scripts\") pod \"ovn-northd-0\" (UID: \"226458e6-33a0-4123-8aaf-b3950a30d1c9\") " pod="openstack/ovn-northd-0" Oct 08 14:40:02 crc 
kubenswrapper[4624]: I1008 14:40:02.846667 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/226458e6-33a0-4123-8aaf-b3950a30d1c9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"226458e6-33a0-4123-8aaf-b3950a30d1c9\") " pod="openstack/ovn-northd-0" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.847865 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/226458e6-33a0-4123-8aaf-b3950a30d1c9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"226458e6-33a0-4123-8aaf-b3950a30d1c9\") " pod="openstack/ovn-northd-0" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.861779 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/226458e6-33a0-4123-8aaf-b3950a30d1c9-config\") pod \"ovn-northd-0\" (UID: \"226458e6-33a0-4123-8aaf-b3950a30d1c9\") " pod="openstack/ovn-northd-0" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.877455 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226458e6-33a0-4123-8aaf-b3950a30d1c9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"226458e6-33a0-4123-8aaf-b3950a30d1c9\") " pod="openstack/ovn-northd-0" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.878126 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/226458e6-33a0-4123-8aaf-b3950a30d1c9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"226458e6-33a0-4123-8aaf-b3950a30d1c9\") " pod="openstack/ovn-northd-0" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.895387 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78vdb\" (UniqueName: \"kubernetes.io/projected/226458e6-33a0-4123-8aaf-b3950a30d1c9-kube-api-access-78vdb\") pod \"ovn-northd-0\" (UID: \"226458e6-33a0-4123-8aaf-b3950a30d1c9\") " pod="openstack/ovn-northd-0" Oct 08 14:40:02 crc kubenswrapper[4624]: I1008 14:40:02.990284 4624 util.go:30] "No sandbox for pod can be found. 
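Note: the reflector.go "Caches populated" entries show the kubelet starting client-go watch caches for exactly the ConfigMaps and Secrets that ovn-northd-0's volumes reference, before the mounts proceed. A minimal sketch of the same pattern with client-go (assumes k8s.io/client-go in go.mod and a reachable kubeconfig; not the kubelet's internal wiring):

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	// Watch ConfigMaps in the openstack namespace, analogous to the kubelet
    	// populating a cache for "ovnnorthd-config" before mounting it.
    	f := informers.NewSharedInformerFactoryWithOptions(cs, time.Minute,
    		informers.WithNamespace("openstack"))
    	inf := f.Core().V1().ConfigMaps().Informer()
    	stop := make(chan struct{})
    	f.Start(stop)
    	if !cache.WaitForCacheSync(stop, inf.HasSynced) { // "Caches populated"
    		panic("cache never synced")
    	}
    	fmt.Println("configmap cache synced")
    	close(stop)
    }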
Need to start a new one" pod="openstack/ovn-northd-0" Oct 08 14:40:03 crc kubenswrapper[4624]: I1008 14:40:03.214046 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wkgnr" event={"ID":"ae7533e2-e271-4dcf-94c1-6bf63f0c4c54","Type":"ContainerStarted","Data":"5972ddb9633aae76b68bf6e18b4cf189ee2e8b5d22f228f9231492c1672b62a6"} Oct 08 14:40:03 crc kubenswrapper[4624]: I1008 14:40:03.214326 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wkgnr" event={"ID":"ae7533e2-e271-4dcf-94c1-6bf63f0c4c54","Type":"ContainerStarted","Data":"b9403a222e12d0afdfda371b919a9d78cfbadcf28182cadde49fc0626d0be060"} Oct 08 14:40:03 crc kubenswrapper[4624]: I1008 14:40:03.254314 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-wkgnr" podStartSLOduration=2.254295781 podStartE2EDuration="2.254295781s" podCreationTimestamp="2025-10-08 14:40:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:40:03.252177178 +0000 UTC m=+1028.403112255" watchObservedRunningTime="2025-10-08 14:40:03.254295781 +0000 UTC m=+1028.405230858" Oct 08 14:40:03 crc kubenswrapper[4624]: I1008 14:40:03.278789 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-zvn72"] Oct 08 14:40:03 crc kubenswrapper[4624]: I1008 14:40:03.373414 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-4jnw5"] Oct 08 14:40:03 crc kubenswrapper[4624]: W1008 14:40:03.423961 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e56418a_dbf3_46b1_9f37_c004b9abf454.slice/crio-3f7931ae0762a26c62d10024b0f711559381c9a0dfb98f886b10d6a7efe1bf8b WatchSource:0}: Error finding container 3f7931ae0762a26c62d10024b0f711559381c9a0dfb98f886b10d6a7efe1bf8b: Status 404 returned error can't find the container with id 3f7931ae0762a26c62d10024b0f711559381c9a0dfb98f886b10d6a7efe1bf8b Oct 08 14:40:03 crc kubenswrapper[4624]: I1008 14:40:03.446221 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6xs6m"] Oct 08 14:40:03 crc kubenswrapper[4624]: W1008 14:40:03.454517 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda449627c_78bf_4c07_bda1_f9f3005d4782.slice/crio-41b0fe2cd5d5376deb317de316e5f054d48f9f5a0b2e1f2a7af83f05669af8a8 WatchSource:0}: Error finding container 41b0fe2cd5d5376deb317de316e5f054d48f9f5a0b2e1f2a7af83f05669af8a8: Status 404 returned error can't find the container with id 41b0fe2cd5d5376deb317de316e5f054d48f9f5a0b2e1f2a7af83f05669af8a8 Oct 08 14:40:03 crc kubenswrapper[4624]: I1008 14:40:03.487940 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="090d9972-af8e-4a86-b7c6-a3f04d7fa1dd" path="/var/lib/kubelet/pods/090d9972-af8e-4a86-b7c6-a3f04d7fa1dd/volumes" Oct 08 14:40:03 crc kubenswrapper[4624]: I1008 14:40:03.782138 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.121604 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-fsb77"] Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.123549 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-fsb77" Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.142900 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fsb77"] Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.165736 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46ksj\" (UniqueName: \"kubernetes.io/projected/2a5747e2-432a-4b1e-9a58-f65736759415-kube-api-access-46ksj\") pod \"keystone-db-create-fsb77\" (UID: \"2a5747e2-432a-4b1e-9a58-f65736759415\") " pod="openstack/keystone-db-create-fsb77" Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.241585 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"226458e6-33a0-4123-8aaf-b3950a30d1c9","Type":"ContainerStarted","Data":"3fc7e5111df9840fd0cf29e8d8f8a1f24fc9cb3e56f60fd3e50e7bdb2f83008a"} Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.265995 4624 generic.go:334] "Generic (PLEG): container finished" podID="65bca6e6-25cd-4879-a092-b9e24b2aa1e3" containerID="50a930ad5cabc8683b9e105db383d823b914f559a1e5f9ff390d060faba970b4" exitCode=0 Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.266781 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zvn72" event={"ID":"65bca6e6-25cd-4879-a092-b9e24b2aa1e3","Type":"ContainerDied","Data":"50a930ad5cabc8683b9e105db383d823b914f559a1e5f9ff390d060faba970b4"} Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.266852 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zvn72" event={"ID":"65bca6e6-25cd-4879-a092-b9e24b2aa1e3","Type":"ContainerStarted","Data":"261e0511a19326e96f75860bfb897dcab3f32f3fb24926d42f4a99f7b2447933"} Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.266975 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46ksj\" (UniqueName: \"kubernetes.io/projected/2a5747e2-432a-4b1e-9a58-f65736759415-kube-api-access-46ksj\") pod \"keystone-db-create-fsb77\" (UID: \"2a5747e2-432a-4b1e-9a58-f65736759415\") " pod="openstack/keystone-db-create-fsb77" Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.267047 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-etc-swift\") pod \"swift-storage-0\" (UID: \"a6d0a5f4-de63-4141-addf-72f5d787cb24\") " pod="openstack/swift-storage-0" Oct 08 14:40:04 crc kubenswrapper[4624]: E1008 14:40:04.267234 4624 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 08 14:40:04 crc kubenswrapper[4624]: E1008 14:40:04.267249 4624 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 08 14:40:04 crc kubenswrapper[4624]: E1008 14:40:04.267287 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-etc-swift podName:a6d0a5f4-de63-4141-addf-72f5d787cb24 nodeName:}" failed. No retries permitted until 2025-10-08 14:40:20.267275304 +0000 UTC m=+1045.418210381 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-etc-swift") pod "swift-storage-0" (UID: "a6d0a5f4-de63-4141-addf-72f5d787cb24") : configmap "swift-ring-files" not found Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.280938 4624 generic.go:334] "Generic (PLEG): container finished" podID="0e56418a-dbf3-46b1-9f37-c004b9abf454" containerID="39f012af7f8806e54388f8939710bdd3da801746c0b5bfe5c02626e0b7a1276b" exitCode=0 Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.281127 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-4jnw5" event={"ID":"0e56418a-dbf3-46b1-9f37-c004b9abf454","Type":"ContainerDied","Data":"39f012af7f8806e54388f8939710bdd3da801746c0b5bfe5c02626e0b7a1276b"} Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.281151 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-4jnw5" event={"ID":"0e56418a-dbf3-46b1-9f37-c004b9abf454","Type":"ContainerStarted","Data":"3f7931ae0762a26c62d10024b0f711559381c9a0dfb98f886b10d6a7efe1bf8b"} Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.290757 4624 generic.go:334] "Generic (PLEG): container finished" podID="a449627c-78bf-4c07-bda1-f9f3005d4782" containerID="f5dda5876a71f2a0f49ab717a44bdd894ae30d85d1693a901af8b679ef7da61b" exitCode=0 Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.290934 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6xs6m" event={"ID":"a449627c-78bf-4c07-bda1-f9f3005d4782","Type":"ContainerDied","Data":"f5dda5876a71f2a0f49ab717a44bdd894ae30d85d1693a901af8b679ef7da61b"} Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.290959 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6xs6m" event={"ID":"a449627c-78bf-4c07-bda1-f9f3005d4782","Type":"ContainerStarted","Data":"41b0fe2cd5d5376deb317de316e5f054d48f9f5a0b2e1f2a7af83f05669af8a8"} Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.296066 4624 generic.go:334] "Generic (PLEG): container finished" podID="ae7533e2-e271-4dcf-94c1-6bf63f0c4c54" containerID="5972ddb9633aae76b68bf6e18b4cf189ee2e8b5d22f228f9231492c1672b62a6" exitCode=0 Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.296327 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wkgnr" event={"ID":"ae7533e2-e271-4dcf-94c1-6bf63f0c4c54","Type":"ContainerDied","Data":"5972ddb9633aae76b68bf6e18b4cf189ee2e8b5d22f228f9231492c1672b62a6"} Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.316468 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46ksj\" (UniqueName: \"kubernetes.io/projected/2a5747e2-432a-4b1e-9a58-f65736759415-kube-api-access-46ksj\") pod \"keystone-db-create-fsb77\" (UID: \"2a5747e2-432a-4b1e-9a58-f65736759415\") " pod="openstack/keystone-db-create-fsb77" Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.442604 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-mkdvw"] Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.443842 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mkdvw" Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.449607 4624 util.go:30] "No sandbox for pod can be found. 
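Note: the etc-swift failure above is an ordering issue, not a fault: swift-storage-0 projects the "swift-ring-files" ConfigMap, which the swift-ring-rebalance job seen later in the log has not yet published. The volume manager's nestedpendingoperations layer retries SetUp with exponential backoff; the 16s durationBeforeRetry here is consistent with a roughly doubling sequence capped on the order of minutes. A minimal sketch of that retry policy (the constants are illustrative, not kubelet's exact tuning):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // backoff roughly doubles the delay per failed attempt, up to a cap,
    // mirroring the growing durationBeforeRetry values in the log.
    func backoff(initial, max time.Duration, attempt int) time.Duration {
    	d := initial << attempt // initial * 2^attempt
    	if d > max || d <= 0 {  // the <= 0 check guards against shift overflow
    		return max
    	}
    	return d
    }

    func main() {
    	mount := func() error { return errors.New(`configmap "swift-ring-files" not found`) }
    	for attempt := 0; attempt < 6; attempt++ {
    		if err := mount(); err != nil {
    			d := backoff(500*time.Millisecond, 2*time.Minute, attempt)
    			fmt.Printf("attempt %d failed (%v); retrying in %s\n", attempt, err, d)
    		}
    	}
    }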
Need to start a new one" pod="openstack/keystone-db-create-fsb77" Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.458074 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-mkdvw"] Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.571506 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp46q\" (UniqueName: \"kubernetes.io/projected/8c6409de-1f13-4742-8956-3a1aa3bb60c7-kube-api-access-zp46q\") pod \"placement-db-create-mkdvw\" (UID: \"8c6409de-1f13-4742-8956-3a1aa3bb60c7\") " pod="openstack/placement-db-create-mkdvw" Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.673341 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp46q\" (UniqueName: \"kubernetes.io/projected/8c6409de-1f13-4742-8956-3a1aa3bb60c7-kube-api-access-zp46q\") pod \"placement-db-create-mkdvw\" (UID: \"8c6409de-1f13-4742-8956-3a1aa3bb60c7\") " pod="openstack/placement-db-create-mkdvw" Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.694219 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp46q\" (UniqueName: \"kubernetes.io/projected/8c6409de-1f13-4742-8956-3a1aa3bb60c7-kube-api-access-zp46q\") pod \"placement-db-create-mkdvw\" (UID: \"8c6409de-1f13-4742-8956-3a1aa3bb60c7\") " pod="openstack/placement-db-create-mkdvw" Oct 08 14:40:04 crc kubenswrapper[4624]: I1008 14:40:04.769843 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mkdvw" Oct 08 14:40:05 crc kubenswrapper[4624]: W1008 14:40:05.248816 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a5747e2_432a_4b1e_9a58_f65736759415.slice/crio-2976284aba79e6b9c9dcdba3ee26c4f4c3d08fdc2407c79898b3a03994556945 WatchSource:0}: Error finding container 2976284aba79e6b9c9dcdba3ee26c4f4c3d08fdc2407c79898b3a03994556945: Status 404 returned error can't find the container with id 2976284aba79e6b9c9dcdba3ee26c4f4c3d08fdc2407c79898b3a03994556945 Oct 08 14:40:05 crc kubenswrapper[4624]: I1008 14:40:05.261654 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fsb77"] Oct 08 14:40:05 crc kubenswrapper[4624]: I1008 14:40:05.312504 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"226458e6-33a0-4123-8aaf-b3950a30d1c9","Type":"ContainerStarted","Data":"9cefc18a96aca6a8d2f9b5bef689a5e3737ada67775dba92a3cdfa8a70bbc0ce"} Oct 08 14:40:05 crc kubenswrapper[4624]: I1008 14:40:05.312551 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"226458e6-33a0-4123-8aaf-b3950a30d1c9","Type":"ContainerStarted","Data":"ecb2799624fe0192a776ced5395d3ebe4b135e6a0bef43553f5fe4aa6acdc5a3"} Oct 08 14:40:05 crc kubenswrapper[4624]: I1008 14:40:05.313682 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Oct 08 14:40:05 crc kubenswrapper[4624]: I1008 14:40:05.317851 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fsb77" event={"ID":"2a5747e2-432a-4b1e-9a58-f65736759415","Type":"ContainerStarted","Data":"2976284aba79e6b9c9dcdba3ee26c4f4c3d08fdc2407c79898b3a03994556945"} Oct 08 14:40:05 crc kubenswrapper[4624]: I1008 14:40:05.354602 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" 
Oct 08 14:40:05 crc kubenswrapper[4624]: I1008 14:40:05.354602 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.428581162 podStartE2EDuration="3.35457726s" podCreationTimestamp="2025-10-08 14:40:02 +0000 UTC" firstStartedPulling="2025-10-08 14:40:03.778592432 +0000 UTC m=+1028.929527509" lastFinishedPulling="2025-10-08 14:40:04.70458853 +0000 UTC m=+1029.855523607" observedRunningTime="2025-10-08 14:40:05.339840629 +0000 UTC m=+1030.490775726" watchObservedRunningTime="2025-10-08 14:40:05.35457726 +0000 UTC m=+1030.505512337"
Oct 08 14:40:05 crc kubenswrapper[4624]: I1008 14:40:05.398908 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-mkdvw"]
Oct 08 14:40:05 crc kubenswrapper[4624]: W1008 14:40:05.411096 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c6409de_1f13_4742_8956_3a1aa3bb60c7.slice/crio-d12c3c5c85bd950d580e8337b0e5a2a54c7671fc56d8f514a44d52a42cc18ffc WatchSource:0}: Error finding container d12c3c5c85bd950d580e8337b0e5a2a54c7671fc56d8f514a44d52a42cc18ffc: Status 404 returned error can't find the container with id d12c3c5c85bd950d580e8337b0e5a2a54c7671fc56d8f514a44d52a42cc18ffc
Oct 08 14:40:05 crc kubenswrapper[4624]: I1008 14:40:05.703348 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-4jnw5"
Oct 08 14:40:05 crc kubenswrapper[4624]: I1008 14:40:05.802608 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwwzx\" (UniqueName: \"kubernetes.io/projected/0e56418a-dbf3-46b1-9f37-c004b9abf454-kube-api-access-pwwzx\") pod \"0e56418a-dbf3-46b1-9f37-c004b9abf454\" (UID: \"0e56418a-dbf3-46b1-9f37-c004b9abf454\") "
Oct 08 14:40:05 crc kubenswrapper[4624]: I1008 14:40:05.809193 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e56418a-dbf3-46b1-9f37-c004b9abf454-kube-api-access-pwwzx" (OuterVolumeSpecName: "kube-api-access-pwwzx") pod "0e56418a-dbf3-46b1-9f37-c004b9abf454" (UID: "0e56418a-dbf3-46b1-9f37-c004b9abf454"). InnerVolumeSpecName "kube-api-access-pwwzx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:40:05 crc kubenswrapper[4624]: I1008 14:40:05.904793 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwwzx\" (UniqueName: \"kubernetes.io/projected/0e56418a-dbf3-46b1-9f37-c004b9abf454-kube-api-access-pwwzx\") on node \"crc\" DevicePath \"\""
Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.019911 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zvn72"
Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.054350 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6xs6m"
Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.063924 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wkgnr"
Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.107523 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm4wx\" (UniqueName: \"kubernetes.io/projected/a449627c-78bf-4c07-bda1-f9f3005d4782-kube-api-access-sm4wx\") pod \"a449627c-78bf-4c07-bda1-f9f3005d4782\" (UID: \"a449627c-78bf-4c07-bda1-f9f3005d4782\") "
Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.107754 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q9nw\" (UniqueName: \"kubernetes.io/projected/65bca6e6-25cd-4879-a092-b9e24b2aa1e3-kube-api-access-5q9nw\") pod \"65bca6e6-25cd-4879-a092-b9e24b2aa1e3\" (UID: \"65bca6e6-25cd-4879-a092-b9e24b2aa1e3\") "
Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.114367 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65bca6e6-25cd-4879-a092-b9e24b2aa1e3-kube-api-access-5q9nw" (OuterVolumeSpecName: "kube-api-access-5q9nw") pod "65bca6e6-25cd-4879-a092-b9e24b2aa1e3" (UID: "65bca6e6-25cd-4879-a092-b9e24b2aa1e3"). InnerVolumeSpecName "kube-api-access-5q9nw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.114438 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a449627c-78bf-4c07-bda1-f9f3005d4782-kube-api-access-sm4wx" (OuterVolumeSpecName: "kube-api-access-sm4wx") pod "a449627c-78bf-4c07-bda1-f9f3005d4782" (UID: "a449627c-78bf-4c07-bda1-f9f3005d4782"). InnerVolumeSpecName "kube-api-access-sm4wx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.209429 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtskq\" (UniqueName: \"kubernetes.io/projected/ae7533e2-e271-4dcf-94c1-6bf63f0c4c54-kube-api-access-gtskq\") pod \"ae7533e2-e271-4dcf-94c1-6bf63f0c4c54\" (UID: \"ae7533e2-e271-4dcf-94c1-6bf63f0c4c54\") "
Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.210306 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm4wx\" (UniqueName: \"kubernetes.io/projected/a449627c-78bf-4c07-bda1-f9f3005d4782-kube-api-access-sm4wx\") on node \"crc\" DevicePath \"\""
Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.210326 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q9nw\" (UniqueName: \"kubernetes.io/projected/65bca6e6-25cd-4879-a092-b9e24b2aa1e3-kube-api-access-5q9nw\") on node \"crc\" DevicePath \"\""
Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.214288 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae7533e2-e271-4dcf-94c1-6bf63f0c4c54-kube-api-access-gtskq" (OuterVolumeSpecName: "kube-api-access-gtskq") pod "ae7533e2-e271-4dcf-94c1-6bf63f0c4c54" (UID: "ae7533e2-e271-4dcf-94c1-6bf63f0c4c54"). InnerVolumeSpecName "kube-api-access-gtskq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.311608 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtskq\" (UniqueName: \"kubernetes.io/projected/ae7533e2-e271-4dcf-94c1-6bf63f0c4c54-kube-api-access-gtskq\") on node \"crc\" DevicePath \"\""
Need to start a new one" pod="openstack/barbican-db-create-zvn72" Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.326332 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zvn72" event={"ID":"65bca6e6-25cd-4879-a092-b9e24b2aa1e3","Type":"ContainerDied","Data":"261e0511a19326e96f75860bfb897dcab3f32f3fb24926d42f4a99f7b2447933"} Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.326371 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="261e0511a19326e96f75860bfb897dcab3f32f3fb24926d42f4a99f7b2447933" Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.327818 4624 generic.go:334] "Generic (PLEG): container finished" podID="8c6409de-1f13-4742-8956-3a1aa3bb60c7" containerID="f1fe09702bb1166a4ad7e33b7ae1d61ad4fa04fa8294451c7912505dbf61d33c" exitCode=0 Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.327888 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mkdvw" event={"ID":"8c6409de-1f13-4742-8956-3a1aa3bb60c7","Type":"ContainerDied","Data":"f1fe09702bb1166a4ad7e33b7ae1d61ad4fa04fa8294451c7912505dbf61d33c"} Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.327912 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mkdvw" event={"ID":"8c6409de-1f13-4742-8956-3a1aa3bb60c7","Type":"ContainerStarted","Data":"d12c3c5c85bd950d580e8337b0e5a2a54c7671fc56d8f514a44d52a42cc18ffc"} Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.329369 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-4jnw5" event={"ID":"0e56418a-dbf3-46b1-9f37-c004b9abf454","Type":"ContainerDied","Data":"3f7931ae0762a26c62d10024b0f711559381c9a0dfb98f886b10d6a7efe1bf8b"} Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.329394 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f7931ae0762a26c62d10024b0f711559381c9a0dfb98f886b10d6a7efe1bf8b" Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.329394 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-4jnw5" Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.330693 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6xs6m" event={"ID":"a449627c-78bf-4c07-bda1-f9f3005d4782","Type":"ContainerDied","Data":"41b0fe2cd5d5376deb317de316e5f054d48f9f5a0b2e1f2a7af83f05669af8a8"} Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.330725 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41b0fe2cd5d5376deb317de316e5f054d48f9f5a0b2e1f2a7af83f05669af8a8" Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.330799 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-6xs6m" Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.335125 4624 generic.go:334] "Generic (PLEG): container finished" podID="2a5747e2-432a-4b1e-9a58-f65736759415" containerID="18af941d3f8852df34b84acbb562a097ee5d40126ad0289aad7d62bb72069e1f" exitCode=0 Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.335172 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fsb77" event={"ID":"2a5747e2-432a-4b1e-9a58-f65736759415","Type":"ContainerDied","Data":"18af941d3f8852df34b84acbb562a097ee5d40126ad0289aad7d62bb72069e1f"} Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.337071 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wkgnr" Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.337349 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wkgnr" event={"ID":"ae7533e2-e271-4dcf-94c1-6bf63f0c4c54","Type":"ContainerDied","Data":"b9403a222e12d0afdfda371b919a9d78cfbadcf28182cadde49fc0626d0be060"} Oct 08 14:40:06 crc kubenswrapper[4624]: I1008 14:40:06.337383 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9403a222e12d0afdfda371b919a9d78cfbadcf28182cadde49fc0626d0be060" Oct 08 14:40:07 crc kubenswrapper[4624]: I1008 14:40:07.769752 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mkdvw" Oct 08 14:40:07 crc kubenswrapper[4624]: I1008 14:40:07.778422 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fsb77" Oct 08 14:40:07 crc kubenswrapper[4624]: I1008 14:40:07.838804 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp46q\" (UniqueName: \"kubernetes.io/projected/8c6409de-1f13-4742-8956-3a1aa3bb60c7-kube-api-access-zp46q\") pod \"8c6409de-1f13-4742-8956-3a1aa3bb60c7\" (UID: \"8c6409de-1f13-4742-8956-3a1aa3bb60c7\") " Oct 08 14:40:07 crc kubenswrapper[4624]: I1008 14:40:07.838870 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46ksj\" (UniqueName: \"kubernetes.io/projected/2a5747e2-432a-4b1e-9a58-f65736759415-kube-api-access-46ksj\") pod \"2a5747e2-432a-4b1e-9a58-f65736759415\" (UID: \"2a5747e2-432a-4b1e-9a58-f65736759415\") " Oct 08 14:40:07 crc kubenswrapper[4624]: I1008 14:40:07.847514 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a5747e2-432a-4b1e-9a58-f65736759415-kube-api-access-46ksj" (OuterVolumeSpecName: "kube-api-access-46ksj") pod "2a5747e2-432a-4b1e-9a58-f65736759415" (UID: "2a5747e2-432a-4b1e-9a58-f65736759415"). InnerVolumeSpecName "kube-api-access-46ksj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:40:07 crc kubenswrapper[4624]: I1008 14:40:07.847574 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c6409de-1f13-4742-8956-3a1aa3bb60c7-kube-api-access-zp46q" (OuterVolumeSpecName: "kube-api-access-zp46q") pod "8c6409de-1f13-4742-8956-3a1aa3bb60c7" (UID: "8c6409de-1f13-4742-8956-3a1aa3bb60c7"). InnerVolumeSpecName "kube-api-access-zp46q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:40:07 crc kubenswrapper[4624]: I1008 14:40:07.940755 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp46q\" (UniqueName: \"kubernetes.io/projected/8c6409de-1f13-4742-8956-3a1aa3bb60c7-kube-api-access-zp46q\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:07 crc kubenswrapper[4624]: I1008 14:40:07.940786 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46ksj\" (UniqueName: \"kubernetes.io/projected/2a5747e2-432a-4b1e-9a58-f65736759415-kube-api-access-46ksj\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:08 crc kubenswrapper[4624]: I1008 14:40:08.357281 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mkdvw" event={"ID":"8c6409de-1f13-4742-8956-3a1aa3bb60c7","Type":"ContainerDied","Data":"d12c3c5c85bd950d580e8337b0e5a2a54c7671fc56d8f514a44d52a42cc18ffc"} Oct 08 14:40:08 crc kubenswrapper[4624]: I1008 14:40:08.357384 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d12c3c5c85bd950d580e8337b0e5a2a54c7671fc56d8f514a44d52a42cc18ffc" Oct 08 14:40:08 crc kubenswrapper[4624]: I1008 14:40:08.357300 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mkdvw" Oct 08 14:40:08 crc kubenswrapper[4624]: I1008 14:40:08.360700 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fsb77" event={"ID":"2a5747e2-432a-4b1e-9a58-f65736759415","Type":"ContainerDied","Data":"2976284aba79e6b9c9dcdba3ee26c4f4c3d08fdc2407c79898b3a03994556945"} Oct 08 14:40:08 crc kubenswrapper[4624]: I1008 14:40:08.360759 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2976284aba79e6b9c9dcdba3ee26c4f4c3d08fdc2407c79898b3a03994556945" Oct 08 14:40:08 crc kubenswrapper[4624]: I1008 14:40:08.360765 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-fsb77" Oct 08 14:40:09 crc kubenswrapper[4624]: I1008 14:40:09.370671 4624 generic.go:334] "Generic (PLEG): container finished" podID="d112c8ce-f2c4-43a1-9ae8-e155473d5831" containerID="81b7c9d989c22df020f58ca68474a0a7ef92e9f0443afd51360bb11a10a37e11" exitCode=0 Oct 08 14:40:09 crc kubenswrapper[4624]: I1008 14:40:09.370800 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v4bjq" event={"ID":"d112c8ce-f2c4-43a1-9ae8-e155473d5831","Type":"ContainerDied","Data":"81b7c9d989c22df020f58ca68474a0a7ef92e9f0443afd51360bb11a10a37e11"} Oct 08 14:40:09 crc kubenswrapper[4624]: I1008 14:40:09.702340 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-22cr4"] Oct 08 14:40:09 crc kubenswrapper[4624]: E1008 14:40:09.702767 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6409de-1f13-4742-8956-3a1aa3bb60c7" containerName="mariadb-database-create" Oct 08 14:40:09 crc kubenswrapper[4624]: I1008 14:40:09.702785 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6409de-1f13-4742-8956-3a1aa3bb60c7" containerName="mariadb-database-create" Oct 08 14:40:09 crc kubenswrapper[4624]: E1008 14:40:09.702800 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65bca6e6-25cd-4879-a092-b9e24b2aa1e3" containerName="mariadb-database-create" Oct 08 14:40:09 crc kubenswrapper[4624]: I1008 14:40:09.702807 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="65bca6e6-25cd-4879-a092-b9e24b2aa1e3" containerName="mariadb-database-create" Oct 08 14:40:09 crc kubenswrapper[4624]: E1008 14:40:09.702826 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e56418a-dbf3-46b1-9f37-c004b9abf454" containerName="mariadb-database-create" Oct 08 14:40:09 crc kubenswrapper[4624]: I1008 14:40:09.702833 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e56418a-dbf3-46b1-9f37-c004b9abf454" containerName="mariadb-database-create" Oct 08 14:40:09 crc kubenswrapper[4624]: E1008 14:40:09.702850 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a449627c-78bf-4c07-bda1-f9f3005d4782" containerName="mariadb-database-create" Oct 08 14:40:09 crc kubenswrapper[4624]: I1008 14:40:09.702858 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a449627c-78bf-4c07-bda1-f9f3005d4782" containerName="mariadb-database-create" Oct 08 14:40:09 crc kubenswrapper[4624]: E1008 14:40:09.702879 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae7533e2-e271-4dcf-94c1-6bf63f0c4c54" containerName="mariadb-database-create" Oct 08 14:40:09 crc kubenswrapper[4624]: I1008 14:40:09.702888 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae7533e2-e271-4dcf-94c1-6bf63f0c4c54" containerName="mariadb-database-create" Oct 08 14:40:09 crc kubenswrapper[4624]: E1008 14:40:09.702904 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5747e2-432a-4b1e-9a58-f65736759415" containerName="mariadb-database-create" Oct 08 14:40:09 crc kubenswrapper[4624]: I1008 14:40:09.702912 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5747e2-432a-4b1e-9a58-f65736759415" containerName="mariadb-database-create" Oct 08 14:40:09 crc kubenswrapper[4624]: I1008 14:40:09.703090 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="65bca6e6-25cd-4879-a092-b9e24b2aa1e3" containerName="mariadb-database-create" Oct 08 14:40:09 crc kubenswrapper[4624]: I1008 14:40:09.703114 4624 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8c6409de-1f13-4742-8956-3a1aa3bb60c7" containerName="mariadb-database-create" Oct 08 14:40:09 crc kubenswrapper[4624]: I1008 14:40:09.703162 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="a449627c-78bf-4c07-bda1-f9f3005d4782" containerName="mariadb-database-create" Oct 08 14:40:09 crc kubenswrapper[4624]: I1008 14:40:09.703182 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae7533e2-e271-4dcf-94c1-6bf63f0c4c54" containerName="mariadb-database-create" Oct 08 14:40:09 crc kubenswrapper[4624]: I1008 14:40:09.703203 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a5747e2-432a-4b1e-9a58-f65736759415" containerName="mariadb-database-create" Oct 08 14:40:09 crc kubenswrapper[4624]: I1008 14:40:09.703220 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e56418a-dbf3-46b1-9f37-c004b9abf454" containerName="mariadb-database-create" Oct 08 14:40:09 crc kubenswrapper[4624]: I1008 14:40:09.703907 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-22cr4" Oct 08 14:40:09 crc kubenswrapper[4624]: I1008 14:40:09.711659 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-22cr4"] Oct 08 14:40:09 crc kubenswrapper[4624]: I1008 14:40:09.771725 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rbl5\" (UniqueName: \"kubernetes.io/projected/437a3bae-10d6-48a7-b3f1-28988464e615-kube-api-access-2rbl5\") pod \"glance-db-create-22cr4\" (UID: \"437a3bae-10d6-48a7-b3f1-28988464e615\") " pod="openstack/glance-db-create-22cr4" Oct 08 14:40:09 crc kubenswrapper[4624]: I1008 14:40:09.873263 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rbl5\" (UniqueName: \"kubernetes.io/projected/437a3bae-10d6-48a7-b3f1-28988464e615-kube-api-access-2rbl5\") pod \"glance-db-create-22cr4\" (UID: \"437a3bae-10d6-48a7-b3f1-28988464e615\") " pod="openstack/glance-db-create-22cr4" Oct 08 14:40:09 crc kubenswrapper[4624]: I1008 14:40:09.894841 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rbl5\" (UniqueName: \"kubernetes.io/projected/437a3bae-10d6-48a7-b3f1-28988464e615-kube-api-access-2rbl5\") pod \"glance-db-create-22cr4\" (UID: \"437a3bae-10d6-48a7-b3f1-28988464e615\") " pod="openstack/glance-db-create-22cr4" Oct 08 14:40:10 crc kubenswrapper[4624]: I1008 14:40:10.021737 4624 util.go:30] "No sandbox for pod can be found. 
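Note: the burst of cpu_manager/memory_manager "RemoveStaleState" entries is routine housekeeping triggered by the new pod admission: resource-manager state belonging to the six already-completed db-create pods is purged. The E-severity lines are noisy but expected here. A minimal sketch of the idea (stand-in state shape, UIDs copied from the log):

    package main

    import "fmt"

    // removeStaleState drops per-container resource state for pod UIDs that
    // are no longer in the active set, as in the entries above.
    func removeStaleState(state map[string]string, active map[string]bool) {
    	for uid, container := range state {
    		if !active[uid] {
    			fmt.Printf("RemoveStaleState: removing container %q (pod %s...)\n", container, uid[:8])
    			delete(state, uid)
    		}
    	}
    }

    func main() {
    	state := map[string]string{
    		"8c6409de-1f13-4742-8956-3a1aa3bb60c7": "mariadb-database-create",
    		"2a5747e2-432a-4b1e-9a58-f65736759415": "mariadb-database-create",
    	}
    	removeStaleState(state, map[string]bool{}) // none of these pods are active anymore
    	fmt.Println("remaining entries:", len(state))
    }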
Need to start a new one" pod="openstack/glance-db-create-22cr4" Oct 08 14:40:10 crc kubenswrapper[4624]: I1008 14:40:10.183553 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" Oct 08 14:40:10 crc kubenswrapper[4624]: I1008 14:40:10.251420 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58fb6fdf7-djbqf"] Oct 08 14:40:10 crc kubenswrapper[4624]: I1008 14:40:10.252110 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58fb6fdf7-djbqf" podUID="3f8f2b5a-58c2-40be-9b42-6a474e13955a" containerName="dnsmasq-dns" containerID="cri-o://8ed0d7f93b0a926f9e01991a120d136e88008b2d9de432532d89189af032ef54" gracePeriod=10 Oct 08 14:40:10 crc kubenswrapper[4624]: I1008 14:40:10.388678 4624 generic.go:334] "Generic (PLEG): container finished" podID="3f8f2b5a-58c2-40be-9b42-6a474e13955a" containerID="8ed0d7f93b0a926f9e01991a120d136e88008b2d9de432532d89189af032ef54" exitCode=0 Oct 08 14:40:10 crc kubenswrapper[4624]: I1008 14:40:10.388964 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58fb6fdf7-djbqf" event={"ID":"3f8f2b5a-58c2-40be-9b42-6a474e13955a","Type":"ContainerDied","Data":"8ed0d7f93b0a926f9e01991a120d136e88008b2d9de432532d89189af032ef54"} Oct 08 14:40:10 crc kubenswrapper[4624]: I1008 14:40:10.541736 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-22cr4"] Oct 08 14:40:10 crc kubenswrapper[4624]: I1008 14:40:10.876422 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58fb6fdf7-djbqf" Oct 08 14:40:10 crc kubenswrapper[4624]: I1008 14:40:10.905448 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f8f2b5a-58c2-40be-9b42-6a474e13955a-dns-svc\") pod \"3f8f2b5a-58c2-40be-9b42-6a474e13955a\" (UID: \"3f8f2b5a-58c2-40be-9b42-6a474e13955a\") " Oct 08 14:40:10 crc kubenswrapper[4624]: I1008 14:40:10.905585 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr7j2\" (UniqueName: \"kubernetes.io/projected/3f8f2b5a-58c2-40be-9b42-6a474e13955a-kube-api-access-xr7j2\") pod \"3f8f2b5a-58c2-40be-9b42-6a474e13955a\" (UID: \"3f8f2b5a-58c2-40be-9b42-6a474e13955a\") " Oct 08 14:40:10 crc kubenswrapper[4624]: I1008 14:40:10.905760 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f8f2b5a-58c2-40be-9b42-6a474e13955a-config\") pod \"3f8f2b5a-58c2-40be-9b42-6a474e13955a\" (UID: \"3f8f2b5a-58c2-40be-9b42-6a474e13955a\") " Oct 08 14:40:10 crc kubenswrapper[4624]: I1008 14:40:10.942247 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f8f2b5a-58c2-40be-9b42-6a474e13955a-kube-api-access-xr7j2" (OuterVolumeSpecName: "kube-api-access-xr7j2") pod "3f8f2b5a-58c2-40be-9b42-6a474e13955a" (UID: "3f8f2b5a-58c2-40be-9b42-6a474e13955a"). InnerVolumeSpecName "kube-api-access-xr7j2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:40:10 crc kubenswrapper[4624]: I1008 14:40:10.979591 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f8f2b5a-58c2-40be-9b42-6a474e13955a-config" (OuterVolumeSpecName: "config") pod "3f8f2b5a-58c2-40be-9b42-6a474e13955a" (UID: "3f8f2b5a-58c2-40be-9b42-6a474e13955a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:10 crc kubenswrapper[4624]: I1008 14:40:10.982460 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f8f2b5a-58c2-40be-9b42-6a474e13955a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3f8f2b5a-58c2-40be-9b42-6a474e13955a" (UID: "3f8f2b5a-58c2-40be-9b42-6a474e13955a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.007929 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr7j2\" (UniqueName: \"kubernetes.io/projected/3f8f2b5a-58c2-40be-9b42-6a474e13955a-kube-api-access-xr7j2\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.007967 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f8f2b5a-58c2-40be-9b42-6a474e13955a-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.007996 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f8f2b5a-58c2-40be-9b42-6a474e13955a-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.039420 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.109094 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d112c8ce-f2c4-43a1-9ae8-e155473d5831-swiftconf\") pod \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.109215 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d112c8ce-f2c4-43a1-9ae8-e155473d5831-ring-data-devices\") pod \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.109274 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d112c8ce-f2c4-43a1-9ae8-e155473d5831-etc-swift\") pod \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.109347 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d112c8ce-f2c4-43a1-9ae8-e155473d5831-scripts\") pod \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.109391 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d112c8ce-f2c4-43a1-9ae8-e155473d5831-dispersionconf\") pod \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.109461 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8f4p\" (UniqueName: \"kubernetes.io/projected/d112c8ce-f2c4-43a1-9ae8-e155473d5831-kube-api-access-b8f4p\") pod \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") 
" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.109502 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d112c8ce-f2c4-43a1-9ae8-e155473d5831-combined-ca-bundle\") pod \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\" (UID: \"d112c8ce-f2c4-43a1-9ae8-e155473d5831\") " Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.109990 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d112c8ce-f2c4-43a1-9ae8-e155473d5831-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "d112c8ce-f2c4-43a1-9ae8-e155473d5831" (UID: "d112c8ce-f2c4-43a1-9ae8-e155473d5831"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.110827 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d112c8ce-f2c4-43a1-9ae8-e155473d5831-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d112c8ce-f2c4-43a1-9ae8-e155473d5831" (UID: "d112c8ce-f2c4-43a1-9ae8-e155473d5831"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.112676 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d112c8ce-f2c4-43a1-9ae8-e155473d5831-kube-api-access-b8f4p" (OuterVolumeSpecName: "kube-api-access-b8f4p") pod "d112c8ce-f2c4-43a1-9ae8-e155473d5831" (UID: "d112c8ce-f2c4-43a1-9ae8-e155473d5831"). InnerVolumeSpecName "kube-api-access-b8f4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.117024 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d112c8ce-f2c4-43a1-9ae8-e155473d5831-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "d112c8ce-f2c4-43a1-9ae8-e155473d5831" (UID: "d112c8ce-f2c4-43a1-9ae8-e155473d5831"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.129990 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d112c8ce-f2c4-43a1-9ae8-e155473d5831-scripts" (OuterVolumeSpecName: "scripts") pod "d112c8ce-f2c4-43a1-9ae8-e155473d5831" (UID: "d112c8ce-f2c4-43a1-9ae8-e155473d5831"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.135410 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d112c8ce-f2c4-43a1-9ae8-e155473d5831-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d112c8ce-f2c4-43a1-9ae8-e155473d5831" (UID: "d112c8ce-f2c4-43a1-9ae8-e155473d5831"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.138425 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d112c8ce-f2c4-43a1-9ae8-e155473d5831-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "d112c8ce-f2c4-43a1-9ae8-e155473d5831" (UID: "d112c8ce-f2c4-43a1-9ae8-e155473d5831"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.211879 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8f4p\" (UniqueName: \"kubernetes.io/projected/d112c8ce-f2c4-43a1-9ae8-e155473d5831-kube-api-access-b8f4p\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.211912 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d112c8ce-f2c4-43a1-9ae8-e155473d5831-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.211921 4624 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d112c8ce-f2c4-43a1-9ae8-e155473d5831-swiftconf\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.211930 4624 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d112c8ce-f2c4-43a1-9ae8-e155473d5831-ring-data-devices\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.211938 4624 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d112c8ce-f2c4-43a1-9ae8-e155473d5831-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.211947 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d112c8ce-f2c4-43a1-9ae8-e155473d5831-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.211955 4624 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d112c8ce-f2c4-43a1-9ae8-e155473d5831-dispersionconf\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.398468 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58fb6fdf7-djbqf" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.399720 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58fb6fdf7-djbqf" event={"ID":"3f8f2b5a-58c2-40be-9b42-6a474e13955a","Type":"ContainerDied","Data":"acd8ea0af80da75221e0aaec8433e9fe3a6a6d29ec2970e4d3b63013416a83ec"} Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.399840 4624 scope.go:117] "RemoveContainer" containerID="8ed0d7f93b0a926f9e01991a120d136e88008b2d9de432532d89189af032ef54" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.404179 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v4bjq" event={"ID":"d112c8ce-f2c4-43a1-9ae8-e155473d5831","Type":"ContainerDied","Data":"ca7a7e6fc0aea0baac69dc0c31c358db3e2a3bbd8fccd6bc85c1d3bd79b68c36"} Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.404268 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca7a7e6fc0aea0baac69dc0c31c358db3e2a3bbd8fccd6bc85c1d3bd79b68c36" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.404199 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-v4bjq" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.406398 4624 generic.go:334] "Generic (PLEG): container finished" podID="437a3bae-10d6-48a7-b3f1-28988464e615" containerID="e0f79010ed21da611a62ead15c25bef52bd48612e32360a74025568a44da884d" exitCode=0 Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.406598 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-22cr4" event={"ID":"437a3bae-10d6-48a7-b3f1-28988464e615","Type":"ContainerDied","Data":"e0f79010ed21da611a62ead15c25bef52bd48612e32360a74025568a44da884d"} Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.406729 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-22cr4" event={"ID":"437a3bae-10d6-48a7-b3f1-28988464e615","Type":"ContainerStarted","Data":"3fb18a564df42fff3fe5635354217cb4a4cd38e950c9e050247b423e6d88539a"} Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.443221 4624 scope.go:117] "RemoveContainer" containerID="3b110e4e2907080145bb24fce0ac0bd09ee84cbff3c27dde0436929ceb9d6fae" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.476556 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58fb6fdf7-djbqf"] Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.476599 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58fb6fdf7-djbqf"] Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.704504 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-66fe-account-create-l4qnn"] Oct 08 14:40:11 crc kubenswrapper[4624]: E1008 14:40:11.704862 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d112c8ce-f2c4-43a1-9ae8-e155473d5831" containerName="swift-ring-rebalance" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.704879 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d112c8ce-f2c4-43a1-9ae8-e155473d5831" containerName="swift-ring-rebalance" Oct 08 14:40:11 crc kubenswrapper[4624]: E1008 14:40:11.704904 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8f2b5a-58c2-40be-9b42-6a474e13955a" containerName="init" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.704909 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8f2b5a-58c2-40be-9b42-6a474e13955a" containerName="init" Oct 08 14:40:11 crc kubenswrapper[4624]: E1008 14:40:11.704923 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8f2b5a-58c2-40be-9b42-6a474e13955a" containerName="dnsmasq-dns" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.704929 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8f2b5a-58c2-40be-9b42-6a474e13955a" containerName="dnsmasq-dns" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.705083 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="d112c8ce-f2c4-43a1-9ae8-e155473d5831" containerName="swift-ring-rebalance" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.705129 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f8f2b5a-58c2-40be-9b42-6a474e13955a" containerName="dnsmasq-dns" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.705795 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-66fe-account-create-l4qnn" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.713685 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-66fe-account-create-l4qnn"] Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.715422 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.820774 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t476r\" (UniqueName: \"kubernetes.io/projected/313a7281-f294-44b0-be3f-f30d66b0470c-kube-api-access-t476r\") pod \"cinder-66fe-account-create-l4qnn\" (UID: \"313a7281-f294-44b0-be3f-f30d66b0470c\") " pod="openstack/cinder-66fe-account-create-l4qnn" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.892881 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-9f4c-account-create-rskn9"] Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.894200 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-9f4c-account-create-rskn9" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.896935 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.909881 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-9f4c-account-create-rskn9"] Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.922858 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t476r\" (UniqueName: \"kubernetes.io/projected/313a7281-f294-44b0-be3f-f30d66b0470c-kube-api-access-t476r\") pod \"cinder-66fe-account-create-l4qnn\" (UID: \"313a7281-f294-44b0-be3f-f30d66b0470c\") " pod="openstack/cinder-66fe-account-create-l4qnn" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.922925 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbxbm\" (UniqueName: \"kubernetes.io/projected/0fa399e1-303c-4d95-be42-ea2eb2cd38b2-kube-api-access-dbxbm\") pod \"heat-9f4c-account-create-rskn9\" (UID: \"0fa399e1-303c-4d95-be42-ea2eb2cd38b2\") " pod="openstack/heat-9f4c-account-create-rskn9" Oct 08 14:40:11 crc kubenswrapper[4624]: I1008 14:40:11.968149 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t476r\" (UniqueName: \"kubernetes.io/projected/313a7281-f294-44b0-be3f-f30d66b0470c-kube-api-access-t476r\") pod \"cinder-66fe-account-create-l4qnn\" (UID: \"313a7281-f294-44b0-be3f-f30d66b0470c\") " pod="openstack/cinder-66fe-account-create-l4qnn" Oct 08 14:40:12 crc kubenswrapper[4624]: I1008 14:40:12.026730 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbxbm\" (UniqueName: \"kubernetes.io/projected/0fa399e1-303c-4d95-be42-ea2eb2cd38b2-kube-api-access-dbxbm\") pod \"heat-9f4c-account-create-rskn9\" (UID: \"0fa399e1-303c-4d95-be42-ea2eb2cd38b2\") " pod="openstack/heat-9f4c-account-create-rskn9" Oct 08 14:40:12 crc kubenswrapper[4624]: I1008 14:40:12.026796 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-66fe-account-create-l4qnn" Oct 08 14:40:12 crc kubenswrapper[4624]: I1008 14:40:12.076229 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbxbm\" (UniqueName: \"kubernetes.io/projected/0fa399e1-303c-4d95-be42-ea2eb2cd38b2-kube-api-access-dbxbm\") pod \"heat-9f4c-account-create-rskn9\" (UID: \"0fa399e1-303c-4d95-be42-ea2eb2cd38b2\") " pod="openstack/heat-9f4c-account-create-rskn9" Oct 08 14:40:12 crc kubenswrapper[4624]: I1008 14:40:12.233278 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-df8b-account-create-brp6p"] Oct 08 14:40:12 crc kubenswrapper[4624]: I1008 14:40:12.234351 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-df8b-account-create-brp6p" Oct 08 14:40:12 crc kubenswrapper[4624]: I1008 14:40:12.236722 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 08 14:40:12 crc kubenswrapper[4624]: I1008 14:40:12.251606 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-9f4c-account-create-rskn9" Oct 08 14:40:12 crc kubenswrapper[4624]: I1008 14:40:12.262825 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-df8b-account-create-brp6p"] Oct 08 14:40:12 crc kubenswrapper[4624]: I1008 14:40:12.335175 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6drj\" (UniqueName: \"kubernetes.io/projected/911ca055-f986-45c5-9e4c-887293b628f5-kube-api-access-r6drj\") pod \"neutron-df8b-account-create-brp6p\" (UID: \"911ca055-f986-45c5-9e4c-887293b628f5\") " pod="openstack/neutron-df8b-account-create-brp6p" Oct 08 14:40:12 crc kubenswrapper[4624]: I1008 14:40:12.436740 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6drj\" (UniqueName: \"kubernetes.io/projected/911ca055-f986-45c5-9e4c-887293b628f5-kube-api-access-r6drj\") pod \"neutron-df8b-account-create-brp6p\" (UID: \"911ca055-f986-45c5-9e4c-887293b628f5\") " pod="openstack/neutron-df8b-account-create-brp6p" Oct 08 14:40:12 crc kubenswrapper[4624]: I1008 14:40:12.468680 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6drj\" (UniqueName: \"kubernetes.io/projected/911ca055-f986-45c5-9e4c-887293b628f5-kube-api-access-r6drj\") pod \"neutron-df8b-account-create-brp6p\" (UID: \"911ca055-f986-45c5-9e4c-887293b628f5\") " pod="openstack/neutron-df8b-account-create-brp6p" Oct 08 14:40:12 crc kubenswrapper[4624]: I1008 14:40:12.555140 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-df8b-account-create-brp6p" Oct 08 14:40:12 crc kubenswrapper[4624]: I1008 14:40:12.703151 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-66fe-account-create-l4qnn"] Oct 08 14:40:12 crc kubenswrapper[4624]: W1008 14:40:12.712733 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod313a7281_f294_44b0_be3f_f30d66b0470c.slice/crio-2ed12dbae02e7b6e9cdeed924bcadb265f7ab734b334e56a84d71d1ef192ffdb WatchSource:0}: Error finding container 2ed12dbae02e7b6e9cdeed924bcadb265f7ab734b334e56a84d71d1ef192ffdb: Status 404 returned error can't find the container with id 2ed12dbae02e7b6e9cdeed924bcadb265f7ab734b334e56a84d71d1ef192ffdb Oct 08 14:40:12 crc kubenswrapper[4624]: I1008 14:40:12.801463 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-9f4c-account-create-rskn9"] Oct 08 14:40:12 crc kubenswrapper[4624]: I1008 14:40:12.831701 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-22cr4" Oct 08 14:40:12 crc kubenswrapper[4624]: I1008 14:40:12.845403 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rbl5\" (UniqueName: \"kubernetes.io/projected/437a3bae-10d6-48a7-b3f1-28988464e615-kube-api-access-2rbl5\") pod \"437a3bae-10d6-48a7-b3f1-28988464e615\" (UID: \"437a3bae-10d6-48a7-b3f1-28988464e615\") " Oct 08 14:40:12 crc kubenswrapper[4624]: I1008 14:40:12.852024 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/437a3bae-10d6-48a7-b3f1-28988464e615-kube-api-access-2rbl5" (OuterVolumeSpecName: "kube-api-access-2rbl5") pod "437a3bae-10d6-48a7-b3f1-28988464e615" (UID: "437a3bae-10d6-48a7-b3f1-28988464e615"). InnerVolumeSpecName "kube-api-access-2rbl5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:40:12 crc kubenswrapper[4624]: I1008 14:40:12.947095 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rbl5\" (UniqueName: \"kubernetes.io/projected/437a3bae-10d6-48a7-b3f1-28988464e615-kube-api-access-2rbl5\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:13 crc kubenswrapper[4624]: I1008 14:40:13.057657 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-df8b-account-create-brp6p"] Oct 08 14:40:13 crc kubenswrapper[4624]: I1008 14:40:13.431280 4624 generic.go:334] "Generic (PLEG): container finished" podID="0fa399e1-303c-4d95-be42-ea2eb2cd38b2" containerID="18295076c5711c0f9bc231bd7399aef397c237b2f274087f8895fa7c1af03fed" exitCode=0 Oct 08 14:40:13 crc kubenswrapper[4624]: I1008 14:40:13.431505 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-9f4c-account-create-rskn9" event={"ID":"0fa399e1-303c-4d95-be42-ea2eb2cd38b2","Type":"ContainerDied","Data":"18295076c5711c0f9bc231bd7399aef397c237b2f274087f8895fa7c1af03fed"} Oct 08 14:40:13 crc kubenswrapper[4624]: I1008 14:40:13.431711 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-9f4c-account-create-rskn9" event={"ID":"0fa399e1-303c-4d95-be42-ea2eb2cd38b2","Type":"ContainerStarted","Data":"5d3965c4ede2bd152331e6e4b7484a34a369da848ffacedac8c41094255aa2d4"} Oct 08 14:40:13 crc kubenswrapper[4624]: I1008 14:40:13.433497 4624 generic.go:334] "Generic (PLEG): container finished" podID="313a7281-f294-44b0-be3f-f30d66b0470c" containerID="1f4dc9a3693927fa09aa9e5e82a1bb606ceb63ac4ad39ee637b4a2a25dec7207" exitCode=0 Oct 08 14:40:13 crc kubenswrapper[4624]: I1008 14:40:13.433540 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-66fe-account-create-l4qnn" event={"ID":"313a7281-f294-44b0-be3f-f30d66b0470c","Type":"ContainerDied","Data":"1f4dc9a3693927fa09aa9e5e82a1bb606ceb63ac4ad39ee637b4a2a25dec7207"} Oct 08 14:40:13 crc kubenswrapper[4624]: I1008 14:40:13.433555 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-66fe-account-create-l4qnn" event={"ID":"313a7281-f294-44b0-be3f-f30d66b0470c","Type":"ContainerStarted","Data":"2ed12dbae02e7b6e9cdeed924bcadb265f7ab734b334e56a84d71d1ef192ffdb"} Oct 08 14:40:13 crc kubenswrapper[4624]: I1008 14:40:13.435988 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-22cr4" event={"ID":"437a3bae-10d6-48a7-b3f1-28988464e615","Type":"ContainerDied","Data":"3fb18a564df42fff3fe5635354217cb4a4cd38e950c9e050247b423e6d88539a"} Oct 08 14:40:13 crc kubenswrapper[4624]: I1008 14:40:13.436011 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fb18a564df42fff3fe5635354217cb4a4cd38e950c9e050247b423e6d88539a" Oct 08 14:40:13 crc kubenswrapper[4624]: I1008 14:40:13.436067 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-22cr4" Oct 08 14:40:13 crc kubenswrapper[4624]: I1008 14:40:13.438342 4624 generic.go:334] "Generic (PLEG): container finished" podID="911ca055-f986-45c5-9e4c-887293b628f5" containerID="a08c5a9c2018aa0d54b5653835390b457a45df63fc27622ed699c7b67756d9c8" exitCode=0 Oct 08 14:40:13 crc kubenswrapper[4624]: I1008 14:40:13.438407 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-df8b-account-create-brp6p" event={"ID":"911ca055-f986-45c5-9e4c-887293b628f5","Type":"ContainerDied","Data":"a08c5a9c2018aa0d54b5653835390b457a45df63fc27622ed699c7b67756d9c8"} Oct 08 14:40:13 crc kubenswrapper[4624]: I1008 14:40:13.439520 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-df8b-account-create-brp6p" event={"ID":"911ca055-f986-45c5-9e4c-887293b628f5","Type":"ContainerStarted","Data":"45a3e2b8e8d92a9cd671de8abb11f9a9eb7e8c26f5173948c8d935b0a0a7101f"} Oct 08 14:40:13 crc kubenswrapper[4624]: I1008 14:40:13.484594 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f8f2b5a-58c2-40be-9b42-6a474e13955a" path="/var/lib/kubelet/pods/3f8f2b5a-58c2-40be-9b42-6a474e13955a/volumes" Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.167607 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-611c-account-create-vzdj5"] Oct 08 14:40:14 crc kubenswrapper[4624]: E1008 14:40:14.168223 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="437a3bae-10d6-48a7-b3f1-28988464e615" containerName="mariadb-database-create" Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.168404 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="437a3bae-10d6-48a7-b3f1-28988464e615" containerName="mariadb-database-create" Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.168713 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="437a3bae-10d6-48a7-b3f1-28988464e615" containerName="mariadb-database-create" Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.169492 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-611c-account-create-vzdj5" Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.171343 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.184799 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-611c-account-create-vzdj5"] Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.268511 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5qgs\" (UniqueName: \"kubernetes.io/projected/5536330e-7457-4138-a4c4-c511f839b45b-kube-api-access-m5qgs\") pod \"keystone-611c-account-create-vzdj5\" (UID: \"5536330e-7457-4138-a4c4-c511f839b45b\") " pod="openstack/keystone-611c-account-create-vzdj5" Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.370048 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5qgs\" (UniqueName: \"kubernetes.io/projected/5536330e-7457-4138-a4c4-c511f839b45b-kube-api-access-m5qgs\") pod \"keystone-611c-account-create-vzdj5\" (UID: \"5536330e-7457-4138-a4c4-c511f839b45b\") " pod="openstack/keystone-611c-account-create-vzdj5" Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.397614 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5qgs\" (UniqueName: \"kubernetes.io/projected/5536330e-7457-4138-a4c4-c511f839b45b-kube-api-access-m5qgs\") pod \"keystone-611c-account-create-vzdj5\" (UID: \"5536330e-7457-4138-a4c4-c511f839b45b\") " pod="openstack/keystone-611c-account-create-vzdj5" Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.496338 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-611c-account-create-vzdj5" Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.565094 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7a5c-account-create-gvjwb"] Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.570473 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7a5c-account-create-gvjwb" Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.574541 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.578439 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4j49\" (UniqueName: \"kubernetes.io/projected/f6b50601-d82a-485f-972b-0846b5ff00a3-kube-api-access-m4j49\") pod \"placement-7a5c-account-create-gvjwb\" (UID: \"f6b50601-d82a-485f-972b-0846b5ff00a3\") " pod="openstack/placement-7a5c-account-create-gvjwb" Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.594400 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7a5c-account-create-gvjwb"] Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.685474 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4j49\" (UniqueName: \"kubernetes.io/projected/f6b50601-d82a-485f-972b-0846b5ff00a3-kube-api-access-m4j49\") pod \"placement-7a5c-account-create-gvjwb\" (UID: \"f6b50601-d82a-485f-972b-0846b5ff00a3\") " pod="openstack/placement-7a5c-account-create-gvjwb" Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.719968 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4j49\" (UniqueName: \"kubernetes.io/projected/f6b50601-d82a-485f-972b-0846b5ff00a3-kube-api-access-m4j49\") pod \"placement-7a5c-account-create-gvjwb\" (UID: \"f6b50601-d82a-485f-972b-0846b5ff00a3\") " pod="openstack/placement-7a5c-account-create-gvjwb" Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.820570 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-df8b-account-create-brp6p" Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.888375 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6drj\" (UniqueName: \"kubernetes.io/projected/911ca055-f986-45c5-9e4c-887293b628f5-kube-api-access-r6drj\") pod \"911ca055-f986-45c5-9e4c-887293b628f5\" (UID: \"911ca055-f986-45c5-9e4c-887293b628f5\") " Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.894557 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/911ca055-f986-45c5-9e4c-887293b628f5-kube-api-access-r6drj" (OuterVolumeSpecName: "kube-api-access-r6drj") pod "911ca055-f986-45c5-9e4c-887293b628f5" (UID: "911ca055-f986-45c5-9e4c-887293b628f5"). InnerVolumeSpecName "kube-api-access-r6drj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.934678 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-9f4c-account-create-rskn9" Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.939751 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7a5c-account-create-gvjwb" Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.979702 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-66fe-account-create-l4qnn" Oct 08 14:40:14 crc kubenswrapper[4624]: I1008 14:40:14.991439 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6drj\" (UniqueName: \"kubernetes.io/projected/911ca055-f986-45c5-9e4c-887293b628f5-kube-api-access-r6drj\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:15 crc kubenswrapper[4624]: I1008 14:40:15.093274 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t476r\" (UniqueName: \"kubernetes.io/projected/313a7281-f294-44b0-be3f-f30d66b0470c-kube-api-access-t476r\") pod \"313a7281-f294-44b0-be3f-f30d66b0470c\" (UID: \"313a7281-f294-44b0-be3f-f30d66b0470c\") " Oct 08 14:40:15 crc kubenswrapper[4624]: I1008 14:40:15.093407 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbxbm\" (UniqueName: \"kubernetes.io/projected/0fa399e1-303c-4d95-be42-ea2eb2cd38b2-kube-api-access-dbxbm\") pod \"0fa399e1-303c-4d95-be42-ea2eb2cd38b2\" (UID: \"0fa399e1-303c-4d95-be42-ea2eb2cd38b2\") " Oct 08 14:40:15 crc kubenswrapper[4624]: I1008 14:40:15.097531 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/313a7281-f294-44b0-be3f-f30d66b0470c-kube-api-access-t476r" (OuterVolumeSpecName: "kube-api-access-t476r") pod "313a7281-f294-44b0-be3f-f30d66b0470c" (UID: "313a7281-f294-44b0-be3f-f30d66b0470c"). InnerVolumeSpecName "kube-api-access-t476r". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:40:15 crc kubenswrapper[4624]: I1008 14:40:15.097595 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fa399e1-303c-4d95-be42-ea2eb2cd38b2-kube-api-access-dbxbm" (OuterVolumeSpecName: "kube-api-access-dbxbm") pod "0fa399e1-303c-4d95-be42-ea2eb2cd38b2" (UID: "0fa399e1-303c-4d95-be42-ea2eb2cd38b2"). InnerVolumeSpecName "kube-api-access-dbxbm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:40:15 crc kubenswrapper[4624]: I1008 14:40:15.148907 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-611c-account-create-vzdj5"] Oct 08 14:40:15 crc kubenswrapper[4624]: W1008 14:40:15.153955 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5536330e_7457_4138_a4c4_c511f839b45b.slice/crio-5aaa20cb86c0a2ac9cdf67441e0a6b1ece5a8053fc48e6c8fe43b5a0208e110e WatchSource:0}: Error finding container 5aaa20cb86c0a2ac9cdf67441e0a6b1ece5a8053fc48e6c8fe43b5a0208e110e: Status 404 returned error can't find the container with id 5aaa20cb86c0a2ac9cdf67441e0a6b1ece5a8053fc48e6c8fe43b5a0208e110e Oct 08 14:40:15 crc kubenswrapper[4624]: I1008 14:40:15.196657 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t476r\" (UniqueName: \"kubernetes.io/projected/313a7281-f294-44b0-be3f-f30d66b0470c-kube-api-access-t476r\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:15 crc kubenswrapper[4624]: I1008 14:40:15.196695 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbxbm\" (UniqueName: \"kubernetes.io/projected/0fa399e1-303c-4d95-be42-ea2eb2cd38b2-kube-api-access-dbxbm\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:15 crc kubenswrapper[4624]: I1008 14:40:15.348809 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7a5c-account-create-gvjwb"] Oct 08 14:40:15 crc kubenswrapper[4624]: W1008 14:40:15.377929 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6b50601_d82a_485f_972b_0846b5ff00a3.slice/crio-03737024a2c84e1b0304018adee80347ab1023ee40e7991d9d87d06897f17c62 WatchSource:0}: Error finding container 03737024a2c84e1b0304018adee80347ab1023ee40e7991d9d87d06897f17c62: Status 404 returned error can't find the container with id 03737024a2c84e1b0304018adee80347ab1023ee40e7991d9d87d06897f17c62 Oct 08 14:40:15 crc kubenswrapper[4624]: I1008 14:40:15.454397 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-df8b-account-create-brp6p" event={"ID":"911ca055-f986-45c5-9e4c-887293b628f5","Type":"ContainerDied","Data":"45a3e2b8e8d92a9cd671de8abb11f9a9eb7e8c26f5173948c8d935b0a0a7101f"} Oct 08 14:40:15 crc kubenswrapper[4624]: I1008 14:40:15.454781 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45a3e2b8e8d92a9cd671de8abb11f9a9eb7e8c26f5173948c8d935b0a0a7101f" Oct 08 14:40:15 crc kubenswrapper[4624]: I1008 14:40:15.454422 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-df8b-account-create-brp6p" Oct 08 14:40:15 crc kubenswrapper[4624]: I1008 14:40:15.456531 4624 generic.go:334] "Generic (PLEG): container finished" podID="5536330e-7457-4138-a4c4-c511f839b45b" containerID="c495212cee53eaee149dc7e22847022abb8125306fedbaf12617a4ee7b7713e2" exitCode=0 Oct 08 14:40:15 crc kubenswrapper[4624]: I1008 14:40:15.456616 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-611c-account-create-vzdj5" event={"ID":"5536330e-7457-4138-a4c4-c511f839b45b","Type":"ContainerDied","Data":"c495212cee53eaee149dc7e22847022abb8125306fedbaf12617a4ee7b7713e2"} Oct 08 14:40:15 crc kubenswrapper[4624]: I1008 14:40:15.456660 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-611c-account-create-vzdj5" event={"ID":"5536330e-7457-4138-a4c4-c511f839b45b","Type":"ContainerStarted","Data":"5aaa20cb86c0a2ac9cdf67441e0a6b1ece5a8053fc48e6c8fe43b5a0208e110e"} Oct 08 14:40:15 crc kubenswrapper[4624]: I1008 14:40:15.458169 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7a5c-account-create-gvjwb" event={"ID":"f6b50601-d82a-485f-972b-0846b5ff00a3","Type":"ContainerStarted","Data":"03737024a2c84e1b0304018adee80347ab1023ee40e7991d9d87d06897f17c62"} Oct 08 14:40:15 crc kubenswrapper[4624]: I1008 14:40:15.459439 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-9f4c-account-create-rskn9" event={"ID":"0fa399e1-303c-4d95-be42-ea2eb2cd38b2","Type":"ContainerDied","Data":"5d3965c4ede2bd152331e6e4b7484a34a369da848ffacedac8c41094255aa2d4"} Oct 08 14:40:15 crc kubenswrapper[4624]: I1008 14:40:15.459463 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d3965c4ede2bd152331e6e4b7484a34a369da848ffacedac8c41094255aa2d4" Oct 08 14:40:15 crc kubenswrapper[4624]: I1008 14:40:15.459499 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-9f4c-account-create-rskn9" Oct 08 14:40:15 crc kubenswrapper[4624]: I1008 14:40:15.462252 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-66fe-account-create-l4qnn" event={"ID":"313a7281-f294-44b0-be3f-f30d66b0470c","Type":"ContainerDied","Data":"2ed12dbae02e7b6e9cdeed924bcadb265f7ab734b334e56a84d71d1ef192ffdb"} Oct 08 14:40:15 crc kubenswrapper[4624]: I1008 14:40:15.462274 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ed12dbae02e7b6e9cdeed924bcadb265f7ab734b334e56a84d71d1ef192ffdb" Oct 08 14:40:15 crc kubenswrapper[4624]: I1008 14:40:15.462297 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-66fe-account-create-l4qnn" Oct 08 14:40:15 crc kubenswrapper[4624]: I1008 14:40:15.639813 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-c4zfm" podUID="c5312bac-042b-48c5-bf82-1f565e25f11e" containerName="ovn-controller" probeResult="failure" output=< Oct 08 14:40:15 crc kubenswrapper[4624]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 08 14:40:15 crc kubenswrapper[4624]: > Oct 08 14:40:16 crc kubenswrapper[4624]: I1008 14:40:16.471792 4624 generic.go:334] "Generic (PLEG): container finished" podID="f6b50601-d82a-485f-972b-0846b5ff00a3" containerID="bd07234c85207270617760fc40256f4f85567d95f3083ad3cdc0c7e2cb7bbf5c" exitCode=0 Oct 08 14:40:16 crc kubenswrapper[4624]: I1008 14:40:16.472277 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7a5c-account-create-gvjwb" event={"ID":"f6b50601-d82a-485f-972b-0846b5ff00a3","Type":"ContainerDied","Data":"bd07234c85207270617760fc40256f4f85567d95f3083ad3cdc0c7e2cb7bbf5c"} Oct 08 14:40:16 crc kubenswrapper[4624]: I1008 14:40:16.766578 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-611c-account-create-vzdj5" Oct 08 14:40:16 crc kubenswrapper[4624]: I1008 14:40:16.923695 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5qgs\" (UniqueName: \"kubernetes.io/projected/5536330e-7457-4138-a4c4-c511f839b45b-kube-api-access-m5qgs\") pod \"5536330e-7457-4138-a4c4-c511f839b45b\" (UID: \"5536330e-7457-4138-a4c4-c511f839b45b\") " Oct 08 14:40:16 crc kubenswrapper[4624]: I1008 14:40:16.929492 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5536330e-7457-4138-a4c4-c511f839b45b-kube-api-access-m5qgs" (OuterVolumeSpecName: "kube-api-access-m5qgs") pod "5536330e-7457-4138-a4c4-c511f839b45b" (UID: "5536330e-7457-4138-a4c4-c511f839b45b"). InnerVolumeSpecName "kube-api-access-m5qgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:40:17 crc kubenswrapper[4624]: I1008 14:40:17.026233 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5qgs\" (UniqueName: \"kubernetes.io/projected/5536330e-7457-4138-a4c4-c511f839b45b-kube-api-access-m5qgs\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:17 crc kubenswrapper[4624]: I1008 14:40:17.483475 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-611c-account-create-vzdj5" Oct 08 14:40:17 crc kubenswrapper[4624]: I1008 14:40:17.487089 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-611c-account-create-vzdj5" event={"ID":"5536330e-7457-4138-a4c4-c511f839b45b","Type":"ContainerDied","Data":"5aaa20cb86c0a2ac9cdf67441e0a6b1ece5a8053fc48e6c8fe43b5a0208e110e"} Oct 08 14:40:17 crc kubenswrapper[4624]: I1008 14:40:17.487150 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aaa20cb86c0a2ac9cdf67441e0a6b1ece5a8053fc48e6c8fe43b5a0208e110e" Oct 08 14:40:17 crc kubenswrapper[4624]: I1008 14:40:17.770743 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7a5c-account-create-gvjwb" Oct 08 14:40:17 crc kubenswrapper[4624]: I1008 14:40:17.942280 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4j49\" (UniqueName: \"kubernetes.io/projected/f6b50601-d82a-485f-972b-0846b5ff00a3-kube-api-access-m4j49\") pod \"f6b50601-d82a-485f-972b-0846b5ff00a3\" (UID: \"f6b50601-d82a-485f-972b-0846b5ff00a3\") " Oct 08 14:40:17 crc kubenswrapper[4624]: I1008 14:40:17.945125 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6b50601-d82a-485f-972b-0846b5ff00a3-kube-api-access-m4j49" (OuterVolumeSpecName: "kube-api-access-m4j49") pod "f6b50601-d82a-485f-972b-0846b5ff00a3" (UID: "f6b50601-d82a-485f-972b-0846b5ff00a3"). InnerVolumeSpecName "kube-api-access-m4j49". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:40:18 crc kubenswrapper[4624]: I1008 14:40:18.043770 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4j49\" (UniqueName: \"kubernetes.io/projected/f6b50601-d82a-485f-972b-0846b5ff00a3-kube-api-access-m4j49\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:18 crc kubenswrapper[4624]: I1008 14:40:18.050882 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Oct 08 14:40:18 crc kubenswrapper[4624]: I1008 14:40:18.491098 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7a5c-account-create-gvjwb" event={"ID":"f6b50601-d82a-485f-972b-0846b5ff00a3","Type":"ContainerDied","Data":"03737024a2c84e1b0304018adee80347ab1023ee40e7991d9d87d06897f17c62"} Oct 08 14:40:18 crc kubenswrapper[4624]: I1008 14:40:18.491139 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03737024a2c84e1b0304018adee80347ab1023ee40e7991d9d87d06897f17c62" Oct 08 14:40:18 crc kubenswrapper[4624]: I1008 14:40:18.491168 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7a5c-account-create-gvjwb" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.672537 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-kzh2n"] Oct 08 14:40:19 crc kubenswrapper[4624]: E1008 14:40:19.674814 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911ca055-f986-45c5-9e4c-887293b628f5" containerName="mariadb-account-create" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.674936 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="911ca055-f986-45c5-9e4c-887293b628f5" containerName="mariadb-account-create" Oct 08 14:40:19 crc kubenswrapper[4624]: E1008 14:40:19.675013 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="313a7281-f294-44b0-be3f-f30d66b0470c" containerName="mariadb-account-create" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.675074 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="313a7281-f294-44b0-be3f-f30d66b0470c" containerName="mariadb-account-create" Oct 08 14:40:19 crc kubenswrapper[4624]: E1008 14:40:19.675469 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6b50601-d82a-485f-972b-0846b5ff00a3" containerName="mariadb-account-create" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.675526 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b50601-d82a-485f-972b-0846b5ff00a3" containerName="mariadb-account-create" Oct 08 14:40:19 crc kubenswrapper[4624]: E1008 14:40:19.675583 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa399e1-303c-4d95-be42-ea2eb2cd38b2" containerName="mariadb-account-create" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.675661 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa399e1-303c-4d95-be42-ea2eb2cd38b2" containerName="mariadb-account-create" Oct 08 14:40:19 crc kubenswrapper[4624]: E1008 14:40:19.676026 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5536330e-7457-4138-a4c4-c511f839b45b" containerName="mariadb-account-create" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.676105 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="5536330e-7457-4138-a4c4-c511f839b45b" containerName="mariadb-account-create" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.676456 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="313a7281-f294-44b0-be3f-f30d66b0470c" containerName="mariadb-account-create" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.676535 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6b50601-d82a-485f-972b-0846b5ff00a3" containerName="mariadb-account-create" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.676601 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="911ca055-f986-45c5-9e4c-887293b628f5" containerName="mariadb-account-create" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.676683 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="5536330e-7457-4138-a4c4-c511f839b45b" containerName="mariadb-account-create" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.677953 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fa399e1-303c-4d95-be42-ea2eb2cd38b2" containerName="mariadb-account-create" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.679024 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-kzh2n" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.683195 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.683240 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.687339 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.693447 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-kzh2n"] Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.699967 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r9xgf" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.768007 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c45mw\" (UniqueName: \"kubernetes.io/projected/8c53d0a2-ccb4-43e8-8a32-e327f0062e46-kube-api-access-c45mw\") pod \"keystone-db-sync-kzh2n\" (UID: \"8c53d0a2-ccb4-43e8-8a32-e327f0062e46\") " pod="openstack/keystone-db-sync-kzh2n" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.768090 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c53d0a2-ccb4-43e8-8a32-e327f0062e46-combined-ca-bundle\") pod \"keystone-db-sync-kzh2n\" (UID: \"8c53d0a2-ccb4-43e8-8a32-e327f0062e46\") " pod="openstack/keystone-db-sync-kzh2n" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.768152 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c53d0a2-ccb4-43e8-8a32-e327f0062e46-config-data\") pod \"keystone-db-sync-kzh2n\" (UID: \"8c53d0a2-ccb4-43e8-8a32-e327f0062e46\") " pod="openstack/keystone-db-sync-kzh2n" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.811331 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-74e3-account-create-nrtpc"] Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.812580 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-74e3-account-create-nrtpc" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.816605 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.821308 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-74e3-account-create-nrtpc"] Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.869102 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c53d0a2-ccb4-43e8-8a32-e327f0062e46-combined-ca-bundle\") pod \"keystone-db-sync-kzh2n\" (UID: \"8c53d0a2-ccb4-43e8-8a32-e327f0062e46\") " pod="openstack/keystone-db-sync-kzh2n" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.869186 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c53d0a2-ccb4-43e8-8a32-e327f0062e46-config-data\") pod \"keystone-db-sync-kzh2n\" (UID: \"8c53d0a2-ccb4-43e8-8a32-e327f0062e46\") " pod="openstack/keystone-db-sync-kzh2n" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.869276 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c45mw\" (UniqueName: \"kubernetes.io/projected/8c53d0a2-ccb4-43e8-8a32-e327f0062e46-kube-api-access-c45mw\") pod \"keystone-db-sync-kzh2n\" (UID: \"8c53d0a2-ccb4-43e8-8a32-e327f0062e46\") " pod="openstack/keystone-db-sync-kzh2n" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.874764 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c53d0a2-ccb4-43e8-8a32-e327f0062e46-combined-ca-bundle\") pod \"keystone-db-sync-kzh2n\" (UID: \"8c53d0a2-ccb4-43e8-8a32-e327f0062e46\") " pod="openstack/keystone-db-sync-kzh2n" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.875322 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c53d0a2-ccb4-43e8-8a32-e327f0062e46-config-data\") pod \"keystone-db-sync-kzh2n\" (UID: \"8c53d0a2-ccb4-43e8-8a32-e327f0062e46\") " pod="openstack/keystone-db-sync-kzh2n" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.891025 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c45mw\" (UniqueName: \"kubernetes.io/projected/8c53d0a2-ccb4-43e8-8a32-e327f0062e46-kube-api-access-c45mw\") pod \"keystone-db-sync-kzh2n\" (UID: \"8c53d0a2-ccb4-43e8-8a32-e327f0062e46\") " pod="openstack/keystone-db-sync-kzh2n" Oct 08 14:40:19 crc kubenswrapper[4624]: I1008 14:40:19.972158 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz6rk\" (UniqueName: \"kubernetes.io/projected/c3ffd4ca-b076-4b65-b46f-e60aa6a0d40c-kube-api-access-tz6rk\") pod \"glance-74e3-account-create-nrtpc\" (UID: \"c3ffd4ca-b076-4b65-b46f-e60aa6a0d40c\") " pod="openstack/glance-74e3-account-create-nrtpc" Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.002011 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-kzh2n" Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.073747 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz6rk\" (UniqueName: \"kubernetes.io/projected/c3ffd4ca-b076-4b65-b46f-e60aa6a0d40c-kube-api-access-tz6rk\") pod \"glance-74e3-account-create-nrtpc\" (UID: \"c3ffd4ca-b076-4b65-b46f-e60aa6a0d40c\") " pod="openstack/glance-74e3-account-create-nrtpc" Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.141848 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz6rk\" (UniqueName: \"kubernetes.io/projected/c3ffd4ca-b076-4b65-b46f-e60aa6a0d40c-kube-api-access-tz6rk\") pod \"glance-74e3-account-create-nrtpc\" (UID: \"c3ffd4ca-b076-4b65-b46f-e60aa6a0d40c\") " pod="openstack/glance-74e3-account-create-nrtpc" Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.278491 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-etc-swift\") pod \"swift-storage-0\" (UID: \"a6d0a5f4-de63-4141-addf-72f5d787cb24\") " pod="openstack/swift-storage-0" Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.284127 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a6d0a5f4-de63-4141-addf-72f5d787cb24-etc-swift\") pod \"swift-storage-0\" (UID: \"a6d0a5f4-de63-4141-addf-72f5d787cb24\") " pod="openstack/swift-storage-0" Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.439026 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-74e3-account-create-nrtpc" Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.488287 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-kzh2n"] Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.520233 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kzh2n" event={"ID":"8c53d0a2-ccb4-43e8-8a32-e327f0062e46","Type":"ContainerStarted","Data":"fb2691f562da13e3db110df68489950591ce808eb78380537cac0c5d725a896e"} Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.544369 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.665244 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-c4zfm" podUID="c5312bac-042b-48c5-bf82-1f565e25f11e" containerName="ovn-controller" probeResult="failure" output=< Oct 08 14:40:20 crc kubenswrapper[4624]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 08 14:40:20 crc kubenswrapper[4624]: > Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.714029 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.753735 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jhpjx" Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.905322 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-74e3-account-create-nrtpc"] Oct 08 14:40:20 crc kubenswrapper[4624]: W1008 14:40:20.914731 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3ffd4ca_b076_4b65_b46f_e60aa6a0d40c.slice/crio-bf3d3f7fc59c426f49b6721508b00e3e4eb66af3e575adbcfa8f03a2ff3d2579 WatchSource:0}: Error finding container bf3d3f7fc59c426f49b6721508b00e3e4eb66af3e575adbcfa8f03a2ff3d2579: Status 404 returned error can't find the container with id bf3d3f7fc59c426f49b6721508b00e3e4eb66af3e575adbcfa8f03a2ff3d2579 Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.967020 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-c4zfm-config-9qbdr"] Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.968676 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-c4zfm-config-9qbdr" Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.971828 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.973796 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c4zfm-config-9qbdr"] Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.992771 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/25801e3c-6fbe-4100-8380-60f3cf5f9b42-var-run-ovn\") pod \"ovn-controller-c4zfm-config-9qbdr\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " pod="openstack/ovn-controller-c4zfm-config-9qbdr" Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.992842 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25801e3c-6fbe-4100-8380-60f3cf5f9b42-scripts\") pod \"ovn-controller-c4zfm-config-9qbdr\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " pod="openstack/ovn-controller-c4zfm-config-9qbdr" Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.992906 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/25801e3c-6fbe-4100-8380-60f3cf5f9b42-var-log-ovn\") pod \"ovn-controller-c4zfm-config-9qbdr\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " pod="openstack/ovn-controller-c4zfm-config-9qbdr" Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.992936 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/25801e3c-6fbe-4100-8380-60f3cf5f9b42-additional-scripts\") pod \"ovn-controller-c4zfm-config-9qbdr\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " pod="openstack/ovn-controller-c4zfm-config-9qbdr" Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.992986 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/25801e3c-6fbe-4100-8380-60f3cf5f9b42-var-run\") pod \"ovn-controller-c4zfm-config-9qbdr\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " pod="openstack/ovn-controller-c4zfm-config-9qbdr" Oct 08 14:40:20 crc kubenswrapper[4624]: I1008 14:40:20.993011 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqmvw\" (UniqueName: \"kubernetes.io/projected/25801e3c-6fbe-4100-8380-60f3cf5f9b42-kube-api-access-zqmvw\") pod \"ovn-controller-c4zfm-config-9qbdr\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " pod="openstack/ovn-controller-c4zfm-config-9qbdr" Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.094068 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/25801e3c-6fbe-4100-8380-60f3cf5f9b42-var-run-ovn\") pod \"ovn-controller-c4zfm-config-9qbdr\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " pod="openstack/ovn-controller-c4zfm-config-9qbdr" Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.094270 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25801e3c-6fbe-4100-8380-60f3cf5f9b42-scripts\") pod 
\"ovn-controller-c4zfm-config-9qbdr\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " pod="openstack/ovn-controller-c4zfm-config-9qbdr" Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.094397 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/25801e3c-6fbe-4100-8380-60f3cf5f9b42-var-log-ovn\") pod \"ovn-controller-c4zfm-config-9qbdr\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " pod="openstack/ovn-controller-c4zfm-config-9qbdr" Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.094503 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/25801e3c-6fbe-4100-8380-60f3cf5f9b42-additional-scripts\") pod \"ovn-controller-c4zfm-config-9qbdr\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " pod="openstack/ovn-controller-c4zfm-config-9qbdr" Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.094676 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/25801e3c-6fbe-4100-8380-60f3cf5f9b42-var-run\") pod \"ovn-controller-c4zfm-config-9qbdr\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " pod="openstack/ovn-controller-c4zfm-config-9qbdr" Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.097445 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqmvw\" (UniqueName: \"kubernetes.io/projected/25801e3c-6fbe-4100-8380-60f3cf5f9b42-kube-api-access-zqmvw\") pod \"ovn-controller-c4zfm-config-9qbdr\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " pod="openstack/ovn-controller-c4zfm-config-9qbdr" Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.094900 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/25801e3c-6fbe-4100-8380-60f3cf5f9b42-var-run\") pod \"ovn-controller-c4zfm-config-9qbdr\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " pod="openstack/ovn-controller-c4zfm-config-9qbdr" Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.094916 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/25801e3c-6fbe-4100-8380-60f3cf5f9b42-var-log-ovn\") pod \"ovn-controller-c4zfm-config-9qbdr\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " pod="openstack/ovn-controller-c4zfm-config-9qbdr" Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.095731 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/25801e3c-6fbe-4100-8380-60f3cf5f9b42-additional-scripts\") pod \"ovn-controller-c4zfm-config-9qbdr\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " pod="openstack/ovn-controller-c4zfm-config-9qbdr" Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.094858 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/25801e3c-6fbe-4100-8380-60f3cf5f9b42-var-run-ovn\") pod \"ovn-controller-c4zfm-config-9qbdr\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " pod="openstack/ovn-controller-c4zfm-config-9qbdr" Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.100179 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25801e3c-6fbe-4100-8380-60f3cf5f9b42-scripts\") pod 
\"ovn-controller-c4zfm-config-9qbdr\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " pod="openstack/ovn-controller-c4zfm-config-9qbdr" Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.120956 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqmvw\" (UniqueName: \"kubernetes.io/projected/25801e3c-6fbe-4100-8380-60f3cf5f9b42-kube-api-access-zqmvw\") pod \"ovn-controller-c4zfm-config-9qbdr\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " pod="openstack/ovn-controller-c4zfm-config-9qbdr" Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.396203 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c4zfm-config-9qbdr" Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.550926 4624 generic.go:334] "Generic (PLEG): container finished" podID="c3ffd4ca-b076-4b65-b46f-e60aa6a0d40c" containerID="cdcddc55c7b85eb86641f9732aacc969299ac06c416c2ecbfb2d521bac89c650" exitCode=0 Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.551435 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-74e3-account-create-nrtpc" event={"ID":"c3ffd4ca-b076-4b65-b46f-e60aa6a0d40c","Type":"ContainerDied","Data":"cdcddc55c7b85eb86641f9732aacc969299ac06c416c2ecbfb2d521bac89c650"} Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.551490 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-74e3-account-create-nrtpc" event={"ID":"c3ffd4ca-b076-4b65-b46f-e60aa6a0d40c","Type":"ContainerStarted","Data":"bf3d3f7fc59c426f49b6721508b00e3e4eb66af3e575adbcfa8f03a2ff3d2579"} Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.632764 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-1b7f-account-create-2cp9t"] Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.633984 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1b7f-account-create-2cp9t" Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.637206 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1b7f-account-create-2cp9t"] Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.640289 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.712657 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.723073 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qltkr\" (UniqueName: \"kubernetes.io/projected/7a5621c9-5bfe-41bc-bcbc-278553832e31-kube-api-access-qltkr\") pod \"barbican-1b7f-account-create-2cp9t\" (UID: \"7a5621c9-5bfe-41bc-bcbc-278553832e31\") " pod="openstack/barbican-1b7f-account-create-2cp9t" Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.824714 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qltkr\" (UniqueName: \"kubernetes.io/projected/7a5621c9-5bfe-41bc-bcbc-278553832e31-kube-api-access-qltkr\") pod \"barbican-1b7f-account-create-2cp9t\" (UID: \"7a5621c9-5bfe-41bc-bcbc-278553832e31\") " pod="openstack/barbican-1b7f-account-create-2cp9t" Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.844379 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qltkr\" (UniqueName: \"kubernetes.io/projected/7a5621c9-5bfe-41bc-bcbc-278553832e31-kube-api-access-qltkr\") pod \"barbican-1b7f-account-create-2cp9t\" (UID: \"7a5621c9-5bfe-41bc-bcbc-278553832e31\") " pod="openstack/barbican-1b7f-account-create-2cp9t" Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.920120 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c4zfm-config-9qbdr"] Oct 08 14:40:21 crc kubenswrapper[4624]: W1008 14:40:21.932085 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25801e3c_6fbe_4100_8380_60f3cf5f9b42.slice/crio-ef6716dde3bfafc0ee6606146ea895690285f28af74ffc9cf717b2f8b470e434 WatchSource:0}: Error finding container ef6716dde3bfafc0ee6606146ea895690285f28af74ffc9cf717b2f8b470e434: Status 404 returned error can't find the container with id ef6716dde3bfafc0ee6606146ea895690285f28af74ffc9cf717b2f8b470e434 Oct 08 14:40:21 crc kubenswrapper[4624]: I1008 14:40:21.974121 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1b7f-account-create-2cp9t" Oct 08 14:40:22 crc kubenswrapper[4624]: I1008 14:40:22.399786 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1b7f-account-create-2cp9t"] Oct 08 14:40:22 crc kubenswrapper[4624]: I1008 14:40:22.571565 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6d0a5f4-de63-4141-addf-72f5d787cb24","Type":"ContainerStarted","Data":"5975136ba7802c9128ca46396ebf34c8efb63068e9774b6379fea0ddcbc7d879"} Oct 08 14:40:22 crc kubenswrapper[4624]: I1008 14:40:22.574387 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c4zfm-config-9qbdr" event={"ID":"25801e3c-6fbe-4100-8380-60f3cf5f9b42","Type":"ContainerStarted","Data":"ef6716dde3bfafc0ee6606146ea895690285f28af74ffc9cf717b2f8b470e434"} Oct 08 14:40:23 crc kubenswrapper[4624]: I1008 14:40:23.589525 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1b7f-account-create-2cp9t" event={"ID":"7a5621c9-5bfe-41bc-bcbc-278553832e31","Type":"ContainerStarted","Data":"c15a0cf3bede2822f2c91a6a6927bb3f6895a50c0ba0fb14fd6c7963d630a659"} Oct 08 14:40:23 crc kubenswrapper[4624]: I1008 14:40:23.592058 4624 generic.go:334] "Generic (PLEG): container finished" podID="25801e3c-6fbe-4100-8380-60f3cf5f9b42" containerID="e4310411121c32c4933e1fa96560ab8608953e013ea1775c32d84e18c8aa56f3" exitCode=0 Oct 08 14:40:23 crc kubenswrapper[4624]: I1008 14:40:23.592097 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c4zfm-config-9qbdr" event={"ID":"25801e3c-6fbe-4100-8380-60f3cf5f9b42","Type":"ContainerDied","Data":"e4310411121c32c4933e1fa96560ab8608953e013ea1775c32d84e18c8aa56f3"} Oct 08 14:40:25 crc kubenswrapper[4624]: I1008 14:40:25.633679 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-c4zfm" Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.239810 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c4zfm-config-9qbdr" Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.296458 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-74e3-account-create-nrtpc" Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.413538 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/25801e3c-6fbe-4100-8380-60f3cf5f9b42-var-run\") pod \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.413626 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqmvw\" (UniqueName: \"kubernetes.io/projected/25801e3c-6fbe-4100-8380-60f3cf5f9b42-kube-api-access-zqmvw\") pod \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.413831 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/25801e3c-6fbe-4100-8380-60f3cf5f9b42-var-run-ovn\") pod \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.413918 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz6rk\" (UniqueName: \"kubernetes.io/projected/c3ffd4ca-b076-4b65-b46f-e60aa6a0d40c-kube-api-access-tz6rk\") pod \"c3ffd4ca-b076-4b65-b46f-e60aa6a0d40c\" (UID: \"c3ffd4ca-b076-4b65-b46f-e60aa6a0d40c\") " Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.414009 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/25801e3c-6fbe-4100-8380-60f3cf5f9b42-var-log-ovn\") pod \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.414083 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/25801e3c-6fbe-4100-8380-60f3cf5f9b42-additional-scripts\") pod \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.414148 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25801e3c-6fbe-4100-8380-60f3cf5f9b42-scripts\") pod \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\" (UID: \"25801e3c-6fbe-4100-8380-60f3cf5f9b42\") " Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.415132 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25801e3c-6fbe-4100-8380-60f3cf5f9b42-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "25801e3c-6fbe-4100-8380-60f3cf5f9b42" (UID: "25801e3c-6fbe-4100-8380-60f3cf5f9b42"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.415240 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25801e3c-6fbe-4100-8380-60f3cf5f9b42-var-run" (OuterVolumeSpecName: "var-run") pod "25801e3c-6fbe-4100-8380-60f3cf5f9b42" (UID: "25801e3c-6fbe-4100-8380-60f3cf5f9b42"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.415722 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25801e3c-6fbe-4100-8380-60f3cf5f9b42-scripts" (OuterVolumeSpecName: "scripts") pod "25801e3c-6fbe-4100-8380-60f3cf5f9b42" (UID: "25801e3c-6fbe-4100-8380-60f3cf5f9b42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.416059 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25801e3c-6fbe-4100-8380-60f3cf5f9b42-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "25801e3c-6fbe-4100-8380-60f3cf5f9b42" (UID: "25801e3c-6fbe-4100-8380-60f3cf5f9b42"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.416397 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25801e3c-6fbe-4100-8380-60f3cf5f9b42-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "25801e3c-6fbe-4100-8380-60f3cf5f9b42" (UID: "25801e3c-6fbe-4100-8380-60f3cf5f9b42"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.419189 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3ffd4ca-b076-4b65-b46f-e60aa6a0d40c-kube-api-access-tz6rk" (OuterVolumeSpecName: "kube-api-access-tz6rk") pod "c3ffd4ca-b076-4b65-b46f-e60aa6a0d40c" (UID: "c3ffd4ca-b076-4b65-b46f-e60aa6a0d40c"). InnerVolumeSpecName "kube-api-access-tz6rk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.419731 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25801e3c-6fbe-4100-8380-60f3cf5f9b42-kube-api-access-zqmvw" (OuterVolumeSpecName: "kube-api-access-zqmvw") pod "25801e3c-6fbe-4100-8380-60f3cf5f9b42" (UID: "25801e3c-6fbe-4100-8380-60f3cf5f9b42"). InnerVolumeSpecName "kube-api-access-zqmvw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.516585 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqmvw\" (UniqueName: \"kubernetes.io/projected/25801e3c-6fbe-4100-8380-60f3cf5f9b42-kube-api-access-zqmvw\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.516619 4624 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/25801e3c-6fbe-4100-8380-60f3cf5f9b42-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.516646 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz6rk\" (UniqueName: \"kubernetes.io/projected/c3ffd4ca-b076-4b65-b46f-e60aa6a0d40c-kube-api-access-tz6rk\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.516658 4624 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/25801e3c-6fbe-4100-8380-60f3cf5f9b42-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.516669 4624 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/25801e3c-6fbe-4100-8380-60f3cf5f9b42-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.516679 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25801e3c-6fbe-4100-8380-60f3cf5f9b42-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.516688 4624 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/25801e3c-6fbe-4100-8380-60f3cf5f9b42-var-run\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.618629 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6d0a5f4-de63-4141-addf-72f5d787cb24","Type":"ContainerStarted","Data":"748095813acbad7ef0b042a741e8fb7644db911e85be6048b9138b0f94c0c47c"} Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.618910 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6d0a5f4-de63-4141-addf-72f5d787cb24","Type":"ContainerStarted","Data":"58bdac217dc0198b973cd3bde7081606d2d99fa89a495a8c962af40f6e9ef8f5"} Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.621611 4624 generic.go:334] "Generic (PLEG): container finished" podID="7a5621c9-5bfe-41bc-bcbc-278553832e31" containerID="25bf8d7723626d5e4fcccd5f63c9cbcfc263b7f17579ee59eed6f1739e4b9bbd" exitCode=0 Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.621790 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1b7f-account-create-2cp9t" event={"ID":"7a5621c9-5bfe-41bc-bcbc-278553832e31","Type":"ContainerDied","Data":"25bf8d7723626d5e4fcccd5f63c9cbcfc263b7f17579ee59eed6f1739e4b9bbd"} Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.624492 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-74e3-account-create-nrtpc" event={"ID":"c3ffd4ca-b076-4b65-b46f-e60aa6a0d40c","Type":"ContainerDied","Data":"bf3d3f7fc59c426f49b6721508b00e3e4eb66af3e575adbcfa8f03a2ff3d2579"} Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.624526 4624 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="bf3d3f7fc59c426f49b6721508b00e3e4eb66af3e575adbcfa8f03a2ff3d2579" Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.624580 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-74e3-account-create-nrtpc" Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.632916 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c4zfm-config-9qbdr" Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.633911 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c4zfm-config-9qbdr" event={"ID":"25801e3c-6fbe-4100-8380-60f3cf5f9b42","Type":"ContainerDied","Data":"ef6716dde3bfafc0ee6606146ea895690285f28af74ffc9cf717b2f8b470e434"} Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.633961 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef6716dde3bfafc0ee6606146ea895690285f28af74ffc9cf717b2f8b470e434" Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.651873 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kzh2n" event={"ID":"8c53d0a2-ccb4-43e8-8a32-e327f0062e46","Type":"ContainerStarted","Data":"6e5c7b31db779b33f078cd1400bcee9d48e55d7b8e1329467adc1665529e7de0"} Oct 08 14:40:26 crc kubenswrapper[4624]: I1008 14:40:26.673684 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-kzh2n" podStartSLOduration=2.178027215 podStartE2EDuration="7.673628889s" podCreationTimestamp="2025-10-08 14:40:19 +0000 UTC" firstStartedPulling="2025-10-08 14:40:20.509828686 +0000 UTC m=+1045.660763763" lastFinishedPulling="2025-10-08 14:40:26.00543036 +0000 UTC m=+1051.156365437" observedRunningTime="2025-10-08 14:40:26.667398192 +0000 UTC m=+1051.818333269" watchObservedRunningTime="2025-10-08 14:40:26.673628889 +0000 UTC m=+1051.824563966" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.356777 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-c4zfm-config-9qbdr"] Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.368071 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-c4zfm-config-9qbdr"] Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.443430 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-c4zfm-config-4z5pw"] Oct 08 14:40:27 crc kubenswrapper[4624]: E1008 14:40:27.443994 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ffd4ca-b076-4b65-b46f-e60aa6a0d40c" containerName="mariadb-account-create" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.444022 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ffd4ca-b076-4b65-b46f-e60aa6a0d40c" containerName="mariadb-account-create" Oct 08 14:40:27 crc kubenswrapper[4624]: E1008 14:40:27.444053 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25801e3c-6fbe-4100-8380-60f3cf5f9b42" containerName="ovn-config" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.444062 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="25801e3c-6fbe-4100-8380-60f3cf5f9b42" containerName="ovn-config" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.444258 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="25801e3c-6fbe-4100-8380-60f3cf5f9b42" containerName="ovn-config" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.444434 4624 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c3ffd4ca-b076-4b65-b46f-e60aa6a0d40c" containerName="mariadb-account-create" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.445121 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c4zfm-config-4z5pw" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.454902 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.457454 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c4zfm-config-4z5pw"] Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.488675 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25801e3c-6fbe-4100-8380-60f3cf5f9b42" path="/var/lib/kubelet/pods/25801e3c-6fbe-4100-8380-60f3cf5f9b42/volumes" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.552418 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7c100d95-e9a3-4731-8241-41ec45b0d9f9-additional-scripts\") pod \"ovn-controller-c4zfm-config-4z5pw\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " pod="openstack/ovn-controller-c4zfm-config-4z5pw" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.552478 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c100d95-e9a3-4731-8241-41ec45b0d9f9-var-run\") pod \"ovn-controller-c4zfm-config-4z5pw\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " pod="openstack/ovn-controller-c4zfm-config-4z5pw" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.552702 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c100d95-e9a3-4731-8241-41ec45b0d9f9-scripts\") pod \"ovn-controller-c4zfm-config-4z5pw\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " pod="openstack/ovn-controller-c4zfm-config-4z5pw" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.552793 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7c100d95-e9a3-4731-8241-41ec45b0d9f9-var-log-ovn\") pod \"ovn-controller-c4zfm-config-4z5pw\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " pod="openstack/ovn-controller-c4zfm-config-4z5pw" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.552848 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c100d95-e9a3-4731-8241-41ec45b0d9f9-var-run-ovn\") pod \"ovn-controller-c4zfm-config-4z5pw\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " pod="openstack/ovn-controller-c4zfm-config-4z5pw" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.552889 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfwzj\" (UniqueName: \"kubernetes.io/projected/7c100d95-e9a3-4731-8241-41ec45b0d9f9-kube-api-access-gfwzj\") pod \"ovn-controller-c4zfm-config-4z5pw\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " pod="openstack/ovn-controller-c4zfm-config-4z5pw" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.654589 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7c100d95-e9a3-4731-8241-41ec45b0d9f9-additional-scripts\") pod \"ovn-controller-c4zfm-config-4z5pw\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " pod="openstack/ovn-controller-c4zfm-config-4z5pw" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.654661 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c100d95-e9a3-4731-8241-41ec45b0d9f9-var-run\") pod \"ovn-controller-c4zfm-config-4z5pw\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " pod="openstack/ovn-controller-c4zfm-config-4z5pw" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.654699 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c100d95-e9a3-4731-8241-41ec45b0d9f9-scripts\") pod \"ovn-controller-c4zfm-config-4z5pw\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " pod="openstack/ovn-controller-c4zfm-config-4z5pw" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.654783 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7c100d95-e9a3-4731-8241-41ec45b0d9f9-var-log-ovn\") pod \"ovn-controller-c4zfm-config-4z5pw\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " pod="openstack/ovn-controller-c4zfm-config-4z5pw" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.654831 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c100d95-e9a3-4731-8241-41ec45b0d9f9-var-run-ovn\") pod \"ovn-controller-c4zfm-config-4z5pw\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " pod="openstack/ovn-controller-c4zfm-config-4z5pw" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.654867 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfwzj\" (UniqueName: \"kubernetes.io/projected/7c100d95-e9a3-4731-8241-41ec45b0d9f9-kube-api-access-gfwzj\") pod \"ovn-controller-c4zfm-config-4z5pw\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " pod="openstack/ovn-controller-c4zfm-config-4z5pw" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.655063 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c100d95-e9a3-4731-8241-41ec45b0d9f9-var-run-ovn\") pod \"ovn-controller-c4zfm-config-4z5pw\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " pod="openstack/ovn-controller-c4zfm-config-4z5pw" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.655063 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7c100d95-e9a3-4731-8241-41ec45b0d9f9-var-log-ovn\") pod \"ovn-controller-c4zfm-config-4z5pw\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " pod="openstack/ovn-controller-c4zfm-config-4z5pw" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.655106 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c100d95-e9a3-4731-8241-41ec45b0d9f9-var-run\") pod \"ovn-controller-c4zfm-config-4z5pw\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " pod="openstack/ovn-controller-c4zfm-config-4z5pw" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.655474 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7c100d95-e9a3-4731-8241-41ec45b0d9f9-additional-scripts\") pod \"ovn-controller-c4zfm-config-4z5pw\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " pod="openstack/ovn-controller-c4zfm-config-4z5pw" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.657138 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c100d95-e9a3-4731-8241-41ec45b0d9f9-scripts\") pod \"ovn-controller-c4zfm-config-4z5pw\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " pod="openstack/ovn-controller-c4zfm-config-4z5pw" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.665653 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6d0a5f4-de63-4141-addf-72f5d787cb24","Type":"ContainerStarted","Data":"648305ac0d15fa7d8c536b75ed37bee24802540e38be1499e389235b97f6958f"} Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.665704 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6d0a5f4-de63-4141-addf-72f5d787cb24","Type":"ContainerStarted","Data":"60beef645188f6e0b3d92ce4458e9792f7d80fe2f2c640184b618317d5f59aa0"} Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.676712 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfwzj\" (UniqueName: \"kubernetes.io/projected/7c100d95-e9a3-4731-8241-41ec45b0d9f9-kube-api-access-gfwzj\") pod \"ovn-controller-c4zfm-config-4z5pw\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " pod="openstack/ovn-controller-c4zfm-config-4z5pw" Oct 08 14:40:27 crc kubenswrapper[4624]: I1008 14:40:27.761408 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c4zfm-config-4z5pw" Oct 08 14:40:28 crc kubenswrapper[4624]: I1008 14:40:28.159614 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1b7f-account-create-2cp9t" Oct 08 14:40:28 crc kubenswrapper[4624]: I1008 14:40:28.286357 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qltkr\" (UniqueName: \"kubernetes.io/projected/7a5621c9-5bfe-41bc-bcbc-278553832e31-kube-api-access-qltkr\") pod \"7a5621c9-5bfe-41bc-bcbc-278553832e31\" (UID: \"7a5621c9-5bfe-41bc-bcbc-278553832e31\") " Oct 08 14:40:28 crc kubenswrapper[4624]: I1008 14:40:28.296875 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a5621c9-5bfe-41bc-bcbc-278553832e31-kube-api-access-qltkr" (OuterVolumeSpecName: "kube-api-access-qltkr") pod "7a5621c9-5bfe-41bc-bcbc-278553832e31" (UID: "7a5621c9-5bfe-41bc-bcbc-278553832e31"). InnerVolumeSpecName "kube-api-access-qltkr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:40:28 crc kubenswrapper[4624]: I1008 14:40:28.304475 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c4zfm-config-4z5pw"] Oct 08 14:40:28 crc kubenswrapper[4624]: W1008 14:40:28.314224 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c100d95_e9a3_4731_8241_41ec45b0d9f9.slice/crio-66ac21c1caa0c55565b90ac481820356808f1ef18f7565b5d505c54914dea0a6 WatchSource:0}: Error finding container 66ac21c1caa0c55565b90ac481820356808f1ef18f7565b5d505c54914dea0a6: Status 404 returned error can't find the container with id 66ac21c1caa0c55565b90ac481820356808f1ef18f7565b5d505c54914dea0a6 Oct 08 14:40:28 crc kubenswrapper[4624]: I1008 14:40:28.389182 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qltkr\" (UniqueName: \"kubernetes.io/projected/7a5621c9-5bfe-41bc-bcbc-278553832e31-kube-api-access-qltkr\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:28 crc kubenswrapper[4624]: I1008 14:40:28.713439 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6d0a5f4-de63-4141-addf-72f5d787cb24","Type":"ContainerStarted","Data":"de01ec1d8b1fbf329d7ae81e0f7ec293eead631e9ea0f69279bf1ee21bd7c2ca"} Oct 08 14:40:28 crc kubenswrapper[4624]: I1008 14:40:28.713487 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6d0a5f4-de63-4141-addf-72f5d787cb24","Type":"ContainerStarted","Data":"aaa0816c62532e516680ab497cdae19437f7044e2b8a63fe34cf101577281fdd"} Oct 08 14:40:28 crc kubenswrapper[4624]: I1008 14:40:28.713497 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6d0a5f4-de63-4141-addf-72f5d787cb24","Type":"ContainerStarted","Data":"7a84200dd3ecb1dd3c3f481693d0a482029e887ea4443217a3789be03aa1a823"} Oct 08 14:40:28 crc kubenswrapper[4624]: I1008 14:40:28.718289 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c4zfm-config-4z5pw" event={"ID":"7c100d95-e9a3-4731-8241-41ec45b0d9f9","Type":"ContainerStarted","Data":"747bb2a66c6abfeec5017b98427228c9c6eb11c578ccf049ee51f0b86b042206"} Oct 08 14:40:28 crc kubenswrapper[4624]: I1008 14:40:28.718322 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c4zfm-config-4z5pw" event={"ID":"7c100d95-e9a3-4731-8241-41ec45b0d9f9","Type":"ContainerStarted","Data":"66ac21c1caa0c55565b90ac481820356808f1ef18f7565b5d505c54914dea0a6"} Oct 08 14:40:28 crc kubenswrapper[4624]: I1008 14:40:28.754507 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1b7f-account-create-2cp9t" event={"ID":"7a5621c9-5bfe-41bc-bcbc-278553832e31","Type":"ContainerDied","Data":"c15a0cf3bede2822f2c91a6a6927bb3f6895a50c0ba0fb14fd6c7963d630a659"} Oct 08 14:40:28 crc kubenswrapper[4624]: I1008 14:40:28.754548 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c15a0cf3bede2822f2c91a6a6927bb3f6895a50c0ba0fb14fd6c7963d630a659" Oct 08 14:40:28 crc kubenswrapper[4624]: I1008 14:40:28.754605 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1b7f-account-create-2cp9t" Oct 08 14:40:28 crc kubenswrapper[4624]: E1008 14:40:28.975600 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a5621c9_5bfe_41bc_bcbc_278553832e31.slice/crio-c15a0cf3bede2822f2c91a6a6927bb3f6895a50c0ba0fb14fd6c7963d630a659\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c100d95_e9a3_4731_8241_41ec45b0d9f9.slice/crio-conmon-747bb2a66c6abfeec5017b98427228c9c6eb11c578ccf049ee51f0b86b042206.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a5621c9_5bfe_41bc_bcbc_278553832e31.slice\": RecentStats: unable to find data in memory cache]" Oct 08 14:40:29 crc kubenswrapper[4624]: I1008 14:40:29.193435 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-c4zfm-config-4z5pw" podStartSLOduration=2.193413357 podStartE2EDuration="2.193413357s" podCreationTimestamp="2025-10-08 14:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:40:28.749990576 +0000 UTC m=+1053.900925653" watchObservedRunningTime="2025-10-08 14:40:29.193413357 +0000 UTC m=+1054.344348434" Oct 08 14:40:29 crc kubenswrapper[4624]: I1008 14:40:29.762554 4624 generic.go:334] "Generic (PLEG): container finished" podID="7c100d95-e9a3-4731-8241-41ec45b0d9f9" containerID="747bb2a66c6abfeec5017b98427228c9c6eb11c578ccf049ee51f0b86b042206" exitCode=0 Oct 08 14:40:29 crc kubenswrapper[4624]: I1008 14:40:29.762787 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c4zfm-config-4z5pw" event={"ID":"7c100d95-e9a3-4731-8241-41ec45b0d9f9","Type":"ContainerDied","Data":"747bb2a66c6abfeec5017b98427228c9c6eb11c578ccf049ee51f0b86b042206"} Oct 08 14:40:29 crc kubenswrapper[4624]: I1008 14:40:29.773924 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6d0a5f4-de63-4141-addf-72f5d787cb24","Type":"ContainerStarted","Data":"ed372a9d90154dcb30599c9ddcecd8e04bce816a75caa8d753f9f50579ce166b"} Oct 08 14:40:29 crc kubenswrapper[4624]: I1008 14:40:29.773969 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6d0a5f4-de63-4141-addf-72f5d787cb24","Type":"ContainerStarted","Data":"47b45da110059e95d35728a944b09db779e076a2cba7bd06ab6e6b704585108a"} Oct 08 14:40:29 crc kubenswrapper[4624]: I1008 14:40:29.773986 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6d0a5f4-de63-4141-addf-72f5d787cb24","Type":"ContainerStarted","Data":"a7cf31088f678d78a9d1ea5a1869c7f0855cfe49c58ba7a840e10815faf6c133"} Oct 08 14:40:29 crc kubenswrapper[4624]: I1008 14:40:29.985461 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-t4www"] Oct 08 14:40:29 crc kubenswrapper[4624]: E1008 14:40:29.985918 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a5621c9-5bfe-41bc-bcbc-278553832e31" containerName="mariadb-account-create" Oct 08 14:40:29 crc kubenswrapper[4624]: I1008 14:40:29.985941 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a5621c9-5bfe-41bc-bcbc-278553832e31" containerName="mariadb-account-create" Oct 08 14:40:29 crc 
kubenswrapper[4624]: I1008 14:40:29.986131 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a5621c9-5bfe-41bc-bcbc-278553832e31" containerName="mariadb-account-create" Oct 08 14:40:29 crc kubenswrapper[4624]: I1008 14:40:29.987206 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-t4www" Oct 08 14:40:29 crc kubenswrapper[4624]: I1008 14:40:29.989756 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Oct 08 14:40:29 crc kubenswrapper[4624]: I1008 14:40:29.989965 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-9j69m" Oct 08 14:40:30 crc kubenswrapper[4624]: I1008 14:40:30.004346 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-t4www"] Oct 08 14:40:30 crc kubenswrapper[4624]: I1008 14:40:30.131646 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7222761d-c17c-485d-a672-75d7921fbb20-combined-ca-bundle\") pod \"glance-db-sync-t4www\" (UID: \"7222761d-c17c-485d-a672-75d7921fbb20\") " pod="openstack/glance-db-sync-t4www" Oct 08 14:40:30 crc kubenswrapper[4624]: I1008 14:40:30.131728 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdrdw\" (UniqueName: \"kubernetes.io/projected/7222761d-c17c-485d-a672-75d7921fbb20-kube-api-access-vdrdw\") pod \"glance-db-sync-t4www\" (UID: \"7222761d-c17c-485d-a672-75d7921fbb20\") " pod="openstack/glance-db-sync-t4www" Oct 08 14:40:30 crc kubenswrapper[4624]: I1008 14:40:30.131756 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7222761d-c17c-485d-a672-75d7921fbb20-db-sync-config-data\") pod \"glance-db-sync-t4www\" (UID: \"7222761d-c17c-485d-a672-75d7921fbb20\") " pod="openstack/glance-db-sync-t4www" Oct 08 14:40:30 crc kubenswrapper[4624]: I1008 14:40:30.132306 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7222761d-c17c-485d-a672-75d7921fbb20-config-data\") pod \"glance-db-sync-t4www\" (UID: \"7222761d-c17c-485d-a672-75d7921fbb20\") " pod="openstack/glance-db-sync-t4www" Oct 08 14:40:30 crc kubenswrapper[4624]: I1008 14:40:30.233509 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7222761d-c17c-485d-a672-75d7921fbb20-combined-ca-bundle\") pod \"glance-db-sync-t4www\" (UID: \"7222761d-c17c-485d-a672-75d7921fbb20\") " pod="openstack/glance-db-sync-t4www" Oct 08 14:40:30 crc kubenswrapper[4624]: I1008 14:40:30.233563 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdrdw\" (UniqueName: \"kubernetes.io/projected/7222761d-c17c-485d-a672-75d7921fbb20-kube-api-access-vdrdw\") pod \"glance-db-sync-t4www\" (UID: \"7222761d-c17c-485d-a672-75d7921fbb20\") " pod="openstack/glance-db-sync-t4www" Oct 08 14:40:30 crc kubenswrapper[4624]: I1008 14:40:30.233594 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7222761d-c17c-485d-a672-75d7921fbb20-db-sync-config-data\") pod \"glance-db-sync-t4www\" (UID: \"7222761d-c17c-485d-a672-75d7921fbb20\") " 
pod="openstack/glance-db-sync-t4www" Oct 08 14:40:30 crc kubenswrapper[4624]: I1008 14:40:30.233663 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7222761d-c17c-485d-a672-75d7921fbb20-config-data\") pod \"glance-db-sync-t4www\" (UID: \"7222761d-c17c-485d-a672-75d7921fbb20\") " pod="openstack/glance-db-sync-t4www" Oct 08 14:40:30 crc kubenswrapper[4624]: I1008 14:40:30.241542 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7222761d-c17c-485d-a672-75d7921fbb20-config-data\") pod \"glance-db-sync-t4www\" (UID: \"7222761d-c17c-485d-a672-75d7921fbb20\") " pod="openstack/glance-db-sync-t4www" Oct 08 14:40:30 crc kubenswrapper[4624]: I1008 14:40:30.242050 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7222761d-c17c-485d-a672-75d7921fbb20-combined-ca-bundle\") pod \"glance-db-sync-t4www\" (UID: \"7222761d-c17c-485d-a672-75d7921fbb20\") " pod="openstack/glance-db-sync-t4www" Oct 08 14:40:30 crc kubenswrapper[4624]: I1008 14:40:30.243724 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7222761d-c17c-485d-a672-75d7921fbb20-db-sync-config-data\") pod \"glance-db-sync-t4www\" (UID: \"7222761d-c17c-485d-a672-75d7921fbb20\") " pod="openstack/glance-db-sync-t4www" Oct 08 14:40:30 crc kubenswrapper[4624]: I1008 14:40:30.266478 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdrdw\" (UniqueName: \"kubernetes.io/projected/7222761d-c17c-485d-a672-75d7921fbb20-kube-api-access-vdrdw\") pod \"glance-db-sync-t4www\" (UID: \"7222761d-c17c-485d-a672-75d7921fbb20\") " pod="openstack/glance-db-sync-t4www" Oct 08 14:40:30 crc kubenswrapper[4624]: I1008 14:40:30.354299 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-t4www" Oct 08 14:40:30 crc kubenswrapper[4624]: I1008 14:40:30.785343 4624 generic.go:334] "Generic (PLEG): container finished" podID="8c53d0a2-ccb4-43e8-8a32-e327f0062e46" containerID="6e5c7b31db779b33f078cd1400bcee9d48e55d7b8e1329467adc1665529e7de0" exitCode=0 Oct 08 14:40:30 crc kubenswrapper[4624]: I1008 14:40:30.785486 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kzh2n" event={"ID":"8c53d0a2-ccb4-43e8-8a32-e327f0062e46","Type":"ContainerDied","Data":"6e5c7b31db779b33f078cd1400bcee9d48e55d7b8e1329467adc1665529e7de0"} Oct 08 14:40:30 crc kubenswrapper[4624]: I1008 14:40:30.808628 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6d0a5f4-de63-4141-addf-72f5d787cb24","Type":"ContainerStarted","Data":"d5a83d23ef11c813771101a44a811f33d58d84763f6b77cb8277c147199be608"} Oct 08 14:40:30 crc kubenswrapper[4624]: I1008 14:40:30.808676 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6d0a5f4-de63-4141-addf-72f5d787cb24","Type":"ContainerStarted","Data":"acb622bb67299f7783f6f2f0fa2ec8bd989e4085f8a466fea92bda8520ae5f75"} Oct 08 14:40:30 crc kubenswrapper[4624]: I1008 14:40:30.808685 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6d0a5f4-de63-4141-addf-72f5d787cb24","Type":"ContainerStarted","Data":"b35bb3dafa8525240bdb80e21913454560828f6c39a7980eb15f8e026eeb2ee1"} Oct 08 14:40:30 crc kubenswrapper[4624]: I1008 14:40:30.808694 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6d0a5f4-de63-4141-addf-72f5d787cb24","Type":"ContainerStarted","Data":"7d9bb06b1b4ce50cea987f521a7e33015b519d61b0514e291053791b1ed2220d"} Oct 08 14:40:30 crc kubenswrapper[4624]: I1008 14:40:30.808701 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a6d0a5f4-de63-4141-addf-72f5d787cb24","Type":"ContainerStarted","Data":"880f359902b690fbd6d6d47b753bfc769d5c8cd0678ad87ccb3171ab577a1db4"} Oct 08 14:40:30 crc kubenswrapper[4624]: I1008 14:40:30.855946 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.290156433 podStartE2EDuration="43.855919217s" podCreationTimestamp="2025-10-08 14:39:47 +0000 UTC" firstStartedPulling="2025-10-08 14:40:21.729365728 +0000 UTC m=+1046.880300805" lastFinishedPulling="2025-10-08 14:40:29.295128512 +0000 UTC m=+1054.446063589" observedRunningTime="2025-10-08 14:40:30.843403522 +0000 UTC m=+1055.994338609" watchObservedRunningTime="2025-10-08 14:40:30.855919217 +0000 UTC m=+1056.006854294" Oct 08 14:40:31 crc kubenswrapper[4624]: W1008 14:40:31.026601 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7222761d_c17c_485d_a672_75d7921fbb20.slice/crio-59709d529eaba342635c2dd541195964316bd7d826ba3b19f11dbfa654de9953 WatchSource:0}: Error finding container 59709d529eaba342635c2dd541195964316bd7d826ba3b19f11dbfa654de9953: Status 404 returned error can't find the container with id 59709d529eaba342635c2dd541195964316bd7d826ba3b19f11dbfa654de9953 Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.033613 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-t4www"] Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.152569 4624 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-5988746689-hdfjh"] Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.157019 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5988746689-hdfjh" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.162349 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.181949 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5988746689-hdfjh"] Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.182545 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c4zfm-config-4z5pw" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.250769 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngx6k\" (UniqueName: \"kubernetes.io/projected/64838efa-26a3-4de6-aec8-ccf4457b41d1-kube-api-access-ngx6k\") pod \"dnsmasq-dns-5988746689-hdfjh\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") " pod="openstack/dnsmasq-dns-5988746689-hdfjh" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.251252 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-dns-swift-storage-0\") pod \"dnsmasq-dns-5988746689-hdfjh\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") " pod="openstack/dnsmasq-dns-5988746689-hdfjh" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.251422 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-config\") pod \"dnsmasq-dns-5988746689-hdfjh\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") " pod="openstack/dnsmasq-dns-5988746689-hdfjh" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.251623 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-dns-svc\") pod \"dnsmasq-dns-5988746689-hdfjh\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") " pod="openstack/dnsmasq-dns-5988746689-hdfjh" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.251744 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-ovsdbserver-sb\") pod \"dnsmasq-dns-5988746689-hdfjh\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") " pod="openstack/dnsmasq-dns-5988746689-hdfjh" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.251844 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-ovsdbserver-nb\") pod \"dnsmasq-dns-5988746689-hdfjh\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") " pod="openstack/dnsmasq-dns-5988746689-hdfjh" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.353359 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7c100d95-e9a3-4731-8241-41ec45b0d9f9-var-log-ovn\") pod \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\" (UID: 
\"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.353495 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c100d95-e9a3-4731-8241-41ec45b0d9f9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "7c100d95-e9a3-4731-8241-41ec45b0d9f9" (UID: "7c100d95-e9a3-4731-8241-41ec45b0d9f9"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.353910 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfwzj\" (UniqueName: \"kubernetes.io/projected/7c100d95-e9a3-4731-8241-41ec45b0d9f9-kube-api-access-gfwzj\") pod \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.353947 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c100d95-e9a3-4731-8241-41ec45b0d9f9-var-run-ovn\") pod \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.354060 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c100d95-e9a3-4731-8241-41ec45b0d9f9-var-run\") pod \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.354093 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7c100d95-e9a3-4731-8241-41ec45b0d9f9-additional-scripts\") pod \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.354112 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c100d95-e9a3-4731-8241-41ec45b0d9f9-scripts\") pod \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\" (UID: \"7c100d95-e9a3-4731-8241-41ec45b0d9f9\") " Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.354380 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-dns-svc\") pod \"dnsmasq-dns-5988746689-hdfjh\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") " pod="openstack/dnsmasq-dns-5988746689-hdfjh" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.354445 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-ovsdbserver-sb\") pod \"dnsmasq-dns-5988746689-hdfjh\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") " pod="openstack/dnsmasq-dns-5988746689-hdfjh" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.354545 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-ovsdbserver-nb\") pod \"dnsmasq-dns-5988746689-hdfjh\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") " pod="openstack/dnsmasq-dns-5988746689-hdfjh" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.354586 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ngx6k\" (UniqueName: \"kubernetes.io/projected/64838efa-26a3-4de6-aec8-ccf4457b41d1-kube-api-access-ngx6k\") pod \"dnsmasq-dns-5988746689-hdfjh\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") " pod="openstack/dnsmasq-dns-5988746689-hdfjh" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.354609 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-dns-swift-storage-0\") pod \"dnsmasq-dns-5988746689-hdfjh\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") " pod="openstack/dnsmasq-dns-5988746689-hdfjh" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.354662 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-config\") pod \"dnsmasq-dns-5988746689-hdfjh\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") " pod="openstack/dnsmasq-dns-5988746689-hdfjh" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.354729 4624 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7c100d95-e9a3-4731-8241-41ec45b0d9f9-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.354878 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c100d95-e9a3-4731-8241-41ec45b0d9f9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "7c100d95-e9a3-4731-8241-41ec45b0d9f9" (UID: "7c100d95-e9a3-4731-8241-41ec45b0d9f9"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.354389 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c100d95-e9a3-4731-8241-41ec45b0d9f9-var-run" (OuterVolumeSpecName: "var-run") pod "7c100d95-e9a3-4731-8241-41ec45b0d9f9" (UID: "7c100d95-e9a3-4731-8241-41ec45b0d9f9"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.355723 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c100d95-e9a3-4731-8241-41ec45b0d9f9-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "7c100d95-e9a3-4731-8241-41ec45b0d9f9" (UID: "7c100d95-e9a3-4731-8241-41ec45b0d9f9"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.355944 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c100d95-e9a3-4731-8241-41ec45b0d9f9-scripts" (OuterVolumeSpecName: "scripts") pod "7c100d95-e9a3-4731-8241-41ec45b0d9f9" (UID: "7c100d95-e9a3-4731-8241-41ec45b0d9f9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.355739 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-config\") pod \"dnsmasq-dns-5988746689-hdfjh\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") " pod="openstack/dnsmasq-dns-5988746689-hdfjh" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.356441 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-dns-svc\") pod \"dnsmasq-dns-5988746689-hdfjh\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") " pod="openstack/dnsmasq-dns-5988746689-hdfjh" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.355744 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-ovsdbserver-nb\") pod \"dnsmasq-dns-5988746689-hdfjh\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") " pod="openstack/dnsmasq-dns-5988746689-hdfjh" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.357522 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-dns-swift-storage-0\") pod \"dnsmasq-dns-5988746689-hdfjh\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") " pod="openstack/dnsmasq-dns-5988746689-hdfjh" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.358417 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-ovsdbserver-sb\") pod \"dnsmasq-dns-5988746689-hdfjh\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") " pod="openstack/dnsmasq-dns-5988746689-hdfjh" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.380391 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c100d95-e9a3-4731-8241-41ec45b0d9f9-kube-api-access-gfwzj" (OuterVolumeSpecName: "kube-api-access-gfwzj") pod "7c100d95-e9a3-4731-8241-41ec45b0d9f9" (UID: "7c100d95-e9a3-4731-8241-41ec45b0d9f9"). InnerVolumeSpecName "kube-api-access-gfwzj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.389710 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-c4zfm-config-4z5pw"] Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.393823 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngx6k\" (UniqueName: \"kubernetes.io/projected/64838efa-26a3-4de6-aec8-ccf4457b41d1-kube-api-access-ngx6k\") pod \"dnsmasq-dns-5988746689-hdfjh\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") " pod="openstack/dnsmasq-dns-5988746689-hdfjh" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.399905 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-c4zfm-config-4z5pw"] Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.456242 4624 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c100d95-e9a3-4731-8241-41ec45b0d9f9-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.456291 4624 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c100d95-e9a3-4731-8241-41ec45b0d9f9-var-run\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.456304 4624 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7c100d95-e9a3-4731-8241-41ec45b0d9f9-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.456317 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c100d95-e9a3-4731-8241-41ec45b0d9f9-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.456329 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfwzj\" (UniqueName: \"kubernetes.io/projected/7c100d95-e9a3-4731-8241-41ec45b0d9f9-kube-api-access-gfwzj\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.479482 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c100d95-e9a3-4731-8241-41ec45b0d9f9" path="/var/lib/kubelet/pods/7c100d95-e9a3-4731-8241-41ec45b0d9f9/volumes" Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.495300 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5988746689-hdfjh" Oct 08 14:40:32 crc kubenswrapper[4624]: I1008 14:40:31.830916 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c4zfm-config-4z5pw" Oct 08 14:40:32 crc kubenswrapper[4624]: I1008 14:40:31.830915 4624 scope.go:117] "RemoveContainer" containerID="747bb2a66c6abfeec5017b98427228c9c6eb11c578ccf049ee51f0b86b042206" Oct 08 14:40:32 crc kubenswrapper[4624]: I1008 14:40:31.834118 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-t4www" event={"ID":"7222761d-c17c-485d-a672-75d7921fbb20","Type":"ContainerStarted","Data":"59709d529eaba342635c2dd541195964316bd7d826ba3b19f11dbfa654de9953"} Oct 08 14:40:32 crc kubenswrapper[4624]: I1008 14:40:32.017019 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5988746689-hdfjh"] Oct 08 14:40:32 crc kubenswrapper[4624]: I1008 14:40:32.814546 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-kzh2n" Oct 08 14:40:32 crc kubenswrapper[4624]: I1008 14:40:32.851120 4624 generic.go:334] "Generic (PLEG): container finished" podID="64838efa-26a3-4de6-aec8-ccf4457b41d1" containerID="01d161730abc9d9551c983257f9448f51a7e441621a2a10b68a52b36f866c96f" exitCode=0 Oct 08 14:40:32 crc kubenswrapper[4624]: I1008 14:40:32.851200 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5988746689-hdfjh" event={"ID":"64838efa-26a3-4de6-aec8-ccf4457b41d1","Type":"ContainerDied","Data":"01d161730abc9d9551c983257f9448f51a7e441621a2a10b68a52b36f866c96f"} Oct 08 14:40:32 crc kubenswrapper[4624]: I1008 14:40:32.851240 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5988746689-hdfjh" event={"ID":"64838efa-26a3-4de6-aec8-ccf4457b41d1","Type":"ContainerStarted","Data":"e30b32cfa84891f9aae5bd0a309ae25a2873ce7c7499c5cdc8c7f9b792d6dcee"} Oct 08 14:40:32 crc kubenswrapper[4624]: I1008 14:40:32.856415 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kzh2n" event={"ID":"8c53d0a2-ccb4-43e8-8a32-e327f0062e46","Type":"ContainerDied","Data":"fb2691f562da13e3db110df68489950591ce808eb78380537cac0c5d725a896e"} Oct 08 14:40:32 crc kubenswrapper[4624]: I1008 14:40:32.856471 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb2691f562da13e3db110df68489950591ce808eb78380537cac0c5d725a896e" Oct 08 14:40:32 crc kubenswrapper[4624]: I1008 14:40:32.856547 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kzh2n" Oct 08 14:40:32 crc kubenswrapper[4624]: I1008 14:40:32.989603 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c53d0a2-ccb4-43e8-8a32-e327f0062e46-combined-ca-bundle\") pod \"8c53d0a2-ccb4-43e8-8a32-e327f0062e46\" (UID: \"8c53d0a2-ccb4-43e8-8a32-e327f0062e46\") " Oct 08 14:40:32 crc kubenswrapper[4624]: I1008 14:40:32.989728 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c53d0a2-ccb4-43e8-8a32-e327f0062e46-config-data\") pod \"8c53d0a2-ccb4-43e8-8a32-e327f0062e46\" (UID: \"8c53d0a2-ccb4-43e8-8a32-e327f0062e46\") " Oct 08 14:40:32 crc kubenswrapper[4624]: I1008 14:40:32.989801 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c45mw\" (UniqueName: \"kubernetes.io/projected/8c53d0a2-ccb4-43e8-8a32-e327f0062e46-kube-api-access-c45mw\") pod \"8c53d0a2-ccb4-43e8-8a32-e327f0062e46\" (UID: \"8c53d0a2-ccb4-43e8-8a32-e327f0062e46\") " Oct 08 14:40:32 crc kubenswrapper[4624]: I1008 14:40:32.995386 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c53d0a2-ccb4-43e8-8a32-e327f0062e46-kube-api-access-c45mw" (OuterVolumeSpecName: "kube-api-access-c45mw") pod "8c53d0a2-ccb4-43e8-8a32-e327f0062e46" (UID: "8c53d0a2-ccb4-43e8-8a32-e327f0062e46"). InnerVolumeSpecName "kube-api-access-c45mw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:40:33 crc kubenswrapper[4624]: I1008 14:40:33.022589 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c53d0a2-ccb4-43e8-8a32-e327f0062e46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c53d0a2-ccb4-43e8-8a32-e327f0062e46" (UID: "8c53d0a2-ccb4-43e8-8a32-e327f0062e46"). 
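Every restored entry above shares one fixed shape: a journald prefix ("Oct 08 14:40:31 crc kubenswrapper[4624]:") followed by a klog header (severity letter plus MMDD, wall-clock time, PID, source file:line) and a structured message. A minimal Python sketch for splitting such one-entry-per-line logs into fields; the regex and field names are illustrative assumptions of mine, not anything kubelet defines:

```python
import re

# Journald prefix + klog header, e.g.:
#   Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.354609 4624 reconciler_common.go:218] "..."
KLOG_LINE = re.compile(
    r'^(?P<month>\w{3}) (?P<day>\d{2}) (?P<wall>[\d:]{8}) (?P<host>\S+) '
    r'kubenswrapper\[(?P<unit_pid>\d+)\]: '
    r'(?P<severity>[IWEF])(?P<mmdd>\d{4}) (?P<klog_time>[\d:.]+) (?P<pid>\d+) '
    r'(?P<source>[\w./]+:\d+)\] (?P<message>.*)$'
)

def parse(line: str):
    """Split one journal line into its klog fields, or None for a fragment."""
    m = KLOG_LINE.match(line.strip())
    return m.groupdict() if m else None

sample = ('Oct 08 14:40:31 crc kubenswrapper[4624]: I1008 14:40:31.389710 4624 '
          'kubelet.go:2437] "SyncLoop DELETE" source="api" '
          'pods=["openstack/ovn-controller-c4zfm-config-4z5pw"]')
assert parse(sample)["source"] == "kubelet.go:2437"
```

The severity letter (I/E above) and the source file:line make it easy to filter, for example, only the reconciler_common.go or operation_generator.go volume traffic.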
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:40:33 crc kubenswrapper[4624]: I1008 14:40:33.046281 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c53d0a2-ccb4-43e8-8a32-e327f0062e46-config-data" (OuterVolumeSpecName: "config-data") pod "8c53d0a2-ccb4-43e8-8a32-e327f0062e46" (UID: "8c53d0a2-ccb4-43e8-8a32-e327f0062e46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:40:33 crc kubenswrapper[4624]: I1008 14:40:33.092065 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c53d0a2-ccb4-43e8-8a32-e327f0062e46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:33 crc kubenswrapper[4624]: I1008 14:40:33.092101 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c53d0a2-ccb4-43e8-8a32-e327f0062e46-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:33 crc kubenswrapper[4624]: I1008 14:40:33.092110 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c45mw\" (UniqueName: \"kubernetes.io/projected/8c53d0a2-ccb4-43e8-8a32-e327f0062e46-kube-api-access-c45mw\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:33 crc kubenswrapper[4624]: I1008 14:40:33.905944 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5988746689-hdfjh" event={"ID":"64838efa-26a3-4de6-aec8-ccf4457b41d1","Type":"ContainerStarted","Data":"efea21f99ac540a5da4da839c6193775952edad20180c5e66fe4bfc55b1edd84"} Oct 08 14:40:33 crc kubenswrapper[4624]: I1008 14:40:33.906227 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5988746689-hdfjh" Oct 08 14:40:33 crc kubenswrapper[4624]: I1008 14:40:33.936868 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5988746689-hdfjh" podStartSLOduration=2.9368523140000002 podStartE2EDuration="2.936852314s" podCreationTimestamp="2025-10-08 14:40:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:40:33.934945326 +0000 UTC m=+1059.085880403" watchObservedRunningTime="2025-10-08 14:40:33.936852314 +0000 UTC m=+1059.087787391" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.005951 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5988746689-hdfjh"] Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.046532 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-w2w8t"] Oct 08 14:40:34 crc kubenswrapper[4624]: E1008 14:40:34.051170 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c53d0a2-ccb4-43e8-8a32-e327f0062e46" containerName="keystone-db-sync" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.051202 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c53d0a2-ccb4-43e8-8a32-e327f0062e46" containerName="keystone-db-sync" Oct 08 14:40:34 crc kubenswrapper[4624]: E1008 14:40:34.051224 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c100d95-e9a3-4731-8241-41ec45b0d9f9" containerName="ovn-config" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.051256 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c100d95-e9a3-4731-8241-41ec45b0d9f9" containerName="ovn-config" Oct 08 14:40:34 crc kubenswrapper[4624]: 
I1008 14:40:34.051470 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c100d95-e9a3-4731-8241-41ec45b0d9f9" containerName="ovn-config" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.051494 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c53d0a2-ccb4-43e8-8a32-e327f0062e46" containerName="keystone-db-sync" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.052072 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w2w8t" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.061513 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.061737 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.075938 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.076169 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r9xgf" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.112839 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-w2w8t"] Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.183270 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8469c9d7c9-lp2mc"] Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.187280 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.250160 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-dns-svc\") pod \"dnsmasq-dns-8469c9d7c9-lp2mc\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.250228 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-combined-ca-bundle\") pod \"keystone-bootstrap-w2w8t\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " pod="openstack/keystone-bootstrap-w2w8t" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.250296 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-ovsdbserver-nb\") pod \"dnsmasq-dns-8469c9d7c9-lp2mc\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.250322 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-config-data\") pod \"keystone-bootstrap-w2w8t\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " pod="openstack/keystone-bootstrap-w2w8t" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.250348 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
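The "Observed pod startup duration" entry above is worth unpacking: podStartSLOduration and podStartE2EDuration coincide because firstStartedPulling and lastFinishedPulling are Go's zero time (0001-01-01), i.e. no image pull was observed for this pod, so pull time contributed nothing. A small sketch that extracts those fields from such an entry; the helper names are mine:

```python
import re

# Handles both quoted (podStartE2EDuration="2.936852314s") and bare
# (podStartSLOduration=2.936852314) key=value pairs in one entry.
PAIR = re.compile(r'(\w+)=(?:"([^"]*)"|(\S+))')

GO_ZERO_TIME = "0001-01-01 00:00:00 +0000 UTC"  # Go's zero time.Time

def startup_summary(entry: str) -> dict:
    fields = {k: quoted or bare for k, quoted, bare in PAIR.findall(entry)}
    return {
        "pod": fields["pod"],
        "slo_seconds": float(fields["podStartSLOduration"]),
        "e2e": fields["podStartE2EDuration"],
        # Zero-valued pull timestamps mean no image pull was observed,
        # which is why SLO and E2E durations coincide for this pod.
        "pulled_image": fields["firstStartedPulling"] != GO_ZERO_TIME,
    }
```

Run against the entry above, this yields slo_seconds of about 2.94 and pulled_image False, consistent with a pod whose images were already present on the node.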
\"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-ovsdbserver-sb\") pod \"dnsmasq-dns-8469c9d7c9-lp2mc\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.250381 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cz7s\" (UniqueName: \"kubernetes.io/projected/f5cf7506-9d70-4e79-a413-9f26728cc627-kube-api-access-8cz7s\") pod \"dnsmasq-dns-8469c9d7c9-lp2mc\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.250436 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpxhw\" (UniqueName: \"kubernetes.io/projected/d867582d-bc6b-4693-b2db-1d5a8142f844-kube-api-access-jpxhw\") pod \"keystone-bootstrap-w2w8t\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " pod="openstack/keystone-bootstrap-w2w8t" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.250459 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-dns-swift-storage-0\") pod \"dnsmasq-dns-8469c9d7c9-lp2mc\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.250503 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-scripts\") pod \"keystone-bootstrap-w2w8t\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " pod="openstack/keystone-bootstrap-w2w8t" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.250536 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-fernet-keys\") pod \"keystone-bootstrap-w2w8t\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " pod="openstack/keystone-bootstrap-w2w8t" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.250591 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-config\") pod \"dnsmasq-dns-8469c9d7c9-lp2mc\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.250612 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-credential-keys\") pod \"keystone-bootstrap-w2w8t\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " pod="openstack/keystone-bootstrap-w2w8t" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.293973 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8469c9d7c9-lp2mc"] Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.328727 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-hv6dc"] Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.329958 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-hv6dc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.335177 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.335409 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ts69p" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.335579 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.347930 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-f6sz5"] Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.349299 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-f6sz5" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.367960 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-hv6dc"] Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.371143 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-bh2tl" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.371527 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.386771 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9bz8\" (UniqueName: \"kubernetes.io/projected/265c0058-98f4-4bcd-b413-8e3633ab56cd-kube-api-access-b9bz8\") pod \"heat-db-sync-f6sz5\" (UID: \"265c0058-98f4-4bcd-b413-8e3633ab56cd\") " pod="openstack/heat-db-sync-f6sz5" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.386824 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-config-data\") pod \"cinder-db-sync-hv6dc\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " pod="openstack/cinder-db-sync-hv6dc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.386858 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-scripts\") pod \"cinder-db-sync-hv6dc\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " pod="openstack/cinder-db-sync-hv6dc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.386889 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-ovsdbserver-nb\") pod \"dnsmasq-dns-8469c9d7c9-lp2mc\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.386923 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-config-data\") pod \"keystone-bootstrap-w2w8t\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " pod="openstack/keystone-bootstrap-w2w8t" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.386948 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-ovsdbserver-sb\") pod 
\"dnsmasq-dns-8469c9d7c9-lp2mc\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.386979 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cz7s\" (UniqueName: \"kubernetes.io/projected/f5cf7506-9d70-4e79-a413-9f26728cc627-kube-api-access-8cz7s\") pod \"dnsmasq-dns-8469c9d7c9-lp2mc\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.387019 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/265c0058-98f4-4bcd-b413-8e3633ab56cd-config-data\") pod \"heat-db-sync-f6sz5\" (UID: \"265c0058-98f4-4bcd-b413-8e3633ab56cd\") " pod="openstack/heat-db-sync-f6sz5" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.387054 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpxhw\" (UniqueName: \"kubernetes.io/projected/d867582d-bc6b-4693-b2db-1d5a8142f844-kube-api-access-jpxhw\") pod \"keystone-bootstrap-w2w8t\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " pod="openstack/keystone-bootstrap-w2w8t" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.387076 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-db-sync-config-data\") pod \"cinder-db-sync-hv6dc\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " pod="openstack/cinder-db-sync-hv6dc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.387100 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-dns-swift-storage-0\") pod \"dnsmasq-dns-8469c9d7c9-lp2mc\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.387136 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-scripts\") pod \"keystone-bootstrap-w2w8t\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " pod="openstack/keystone-bootstrap-w2w8t" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.387170 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-fernet-keys\") pod \"keystone-bootstrap-w2w8t\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " pod="openstack/keystone-bootstrap-w2w8t" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.387208 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bqvv\" (UniqueName: \"kubernetes.io/projected/cd53e103-7b25-4f61-a0f4-675ace133ab7-kube-api-access-7bqvv\") pod \"cinder-db-sync-hv6dc\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " pod="openstack/cinder-db-sync-hv6dc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.387234 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/265c0058-98f4-4bcd-b413-8e3633ab56cd-combined-ca-bundle\") pod \"heat-db-sync-f6sz5\" 
(UID: \"265c0058-98f4-4bcd-b413-8e3633ab56cd\") " pod="openstack/heat-db-sync-f6sz5" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.387271 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-config\") pod \"dnsmasq-dns-8469c9d7c9-lp2mc\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.387299 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-credential-keys\") pod \"keystone-bootstrap-w2w8t\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " pod="openstack/keystone-bootstrap-w2w8t" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.387325 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd53e103-7b25-4f61-a0f4-675ace133ab7-etc-machine-id\") pod \"cinder-db-sync-hv6dc\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " pod="openstack/cinder-db-sync-hv6dc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.387370 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-dns-svc\") pod \"dnsmasq-dns-8469c9d7c9-lp2mc\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.387395 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-combined-ca-bundle\") pod \"keystone-bootstrap-w2w8t\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " pod="openstack/keystone-bootstrap-w2w8t" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.387422 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-combined-ca-bundle\") pod \"cinder-db-sync-hv6dc\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " pod="openstack/cinder-db-sync-hv6dc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.388434 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-ovsdbserver-nb\") pod \"dnsmasq-dns-8469c9d7c9-lp2mc\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.398795 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-fernet-keys\") pod \"keystone-bootstrap-w2w8t\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " pod="openstack/keystone-bootstrap-w2w8t" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.413719 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-ovsdbserver-sb\") pod \"dnsmasq-dns-8469c9d7c9-lp2mc\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" Oct 08 14:40:34 crc kubenswrapper[4624]: 
I1008 14:40:34.414732 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-dns-svc\") pod \"dnsmasq-dns-8469c9d7c9-lp2mc\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.414837 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-config\") pod \"dnsmasq-dns-8469c9d7c9-lp2mc\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.425308 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-scripts\") pod \"keystone-bootstrap-w2w8t\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " pod="openstack/keystone-bootstrap-w2w8t" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.427680 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-dns-swift-storage-0\") pod \"dnsmasq-dns-8469c9d7c9-lp2mc\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.434861 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-credential-keys\") pod \"keystone-bootstrap-w2w8t\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " pod="openstack/keystone-bootstrap-w2w8t" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.435497 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-config-data\") pod \"keystone-bootstrap-w2w8t\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " pod="openstack/keystone-bootstrap-w2w8t" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.443755 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-f6sz5"] Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.444446 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-combined-ca-bundle\") pod \"keystone-bootstrap-w2w8t\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " pod="openstack/keystone-bootstrap-w2w8t" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.490663 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpxhw\" (UniqueName: \"kubernetes.io/projected/d867582d-bc6b-4693-b2db-1d5a8142f844-kube-api-access-jpxhw\") pod \"keystone-bootstrap-w2w8t\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " pod="openstack/keystone-bootstrap-w2w8t" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.491045 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd53e103-7b25-4f61-a0f4-675ace133ab7-etc-machine-id\") pod \"cinder-db-sync-hv6dc\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " pod="openstack/cinder-db-sync-hv6dc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.491106 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-combined-ca-bundle\") pod \"cinder-db-sync-hv6dc\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " pod="openstack/cinder-db-sync-hv6dc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.491137 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9bz8\" (UniqueName: \"kubernetes.io/projected/265c0058-98f4-4bcd-b413-8e3633ab56cd-kube-api-access-b9bz8\") pod \"heat-db-sync-f6sz5\" (UID: \"265c0058-98f4-4bcd-b413-8e3633ab56cd\") " pod="openstack/heat-db-sync-f6sz5" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.491171 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-config-data\") pod \"cinder-db-sync-hv6dc\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " pod="openstack/cinder-db-sync-hv6dc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.491200 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-scripts\") pod \"cinder-db-sync-hv6dc\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " pod="openstack/cinder-db-sync-hv6dc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.491269 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/265c0058-98f4-4bcd-b413-8e3633ab56cd-config-data\") pod \"heat-db-sync-f6sz5\" (UID: \"265c0058-98f4-4bcd-b413-8e3633ab56cd\") " pod="openstack/heat-db-sync-f6sz5" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.491299 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-db-sync-config-data\") pod \"cinder-db-sync-hv6dc\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " pod="openstack/cinder-db-sync-hv6dc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.491397 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bqvv\" (UniqueName: \"kubernetes.io/projected/cd53e103-7b25-4f61-a0f4-675ace133ab7-kube-api-access-7bqvv\") pod \"cinder-db-sync-hv6dc\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " pod="openstack/cinder-db-sync-hv6dc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.491426 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/265c0058-98f4-4bcd-b413-8e3633ab56cd-combined-ca-bundle\") pod \"heat-db-sync-f6sz5\" (UID: \"265c0058-98f4-4bcd-b413-8e3633ab56cd\") " pod="openstack/heat-db-sync-f6sz5" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.497075 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cz7s\" (UniqueName: \"kubernetes.io/projected/f5cf7506-9d70-4e79-a413-9f26728cc627-kube-api-access-8cz7s\") pod \"dnsmasq-dns-8469c9d7c9-lp2mc\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.509979 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/cd53e103-7b25-4f61-a0f4-675ace133ab7-etc-machine-id\") pod \"cinder-db-sync-hv6dc\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " pod="openstack/cinder-db-sync-hv6dc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.511814 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-config-data\") pod \"cinder-db-sync-hv6dc\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " pod="openstack/cinder-db-sync-hv6dc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.516003 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-combined-ca-bundle\") pod \"cinder-db-sync-hv6dc\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " pod="openstack/cinder-db-sync-hv6dc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.546698 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/265c0058-98f4-4bcd-b413-8e3633ab56cd-config-data\") pod \"heat-db-sync-f6sz5\" (UID: \"265c0058-98f4-4bcd-b413-8e3633ab56cd\") " pod="openstack/heat-db-sync-f6sz5" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.547052 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-db-sync-config-data\") pod \"cinder-db-sync-hv6dc\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " pod="openstack/cinder-db-sync-hv6dc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.547225 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-scripts\") pod \"cinder-db-sync-hv6dc\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " pod="openstack/cinder-db-sync-hv6dc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.548530 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/265c0058-98f4-4bcd-b413-8e3633ab56cd-combined-ca-bundle\") pod \"heat-db-sync-f6sz5\" (UID: \"265c0058-98f4-4bcd-b413-8e3633ab56cd\") " pod="openstack/heat-db-sync-f6sz5" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.550030 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5bd55fb965-sh6wc"] Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.566788 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.577965 4624 util.go:30] "No sandbox for pod can be found. 
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.597409 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.598115 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-77fcx"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.610268 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.610566 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.612895 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9bz8\" (UniqueName: \"kubernetes.io/projected/265c0058-98f4-4bcd-b413-8e3633ab56cd-kube-api-access-b9bz8\") pod \"heat-db-sync-f6sz5\" (UID: \"265c0058-98f4-4bcd-b413-8e3633ab56cd\") " pod="openstack/heat-db-sync-f6sz5"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.643816 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bqvv\" (UniqueName: \"kubernetes.io/projected/cd53e103-7b25-4f61-a0f4-675ace133ab7-kube-api-access-7bqvv\") pod \"cinder-db-sync-hv6dc\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " pod="openstack/cinder-db-sync-hv6dc"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.688709 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-hv6dc"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.698822 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bd55fb965-sh6wc"]
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.706396 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.709094 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.713616 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w2w8t"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.715174 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.715382 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.752203 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw77z\" (UniqueName: \"kubernetes.io/projected/bd54784e-4e69-41ae-a216-2adc870ebc63-kube-api-access-vw77z\") pod \"horizon-5bd55fb965-sh6wc\" (UID: \"bd54784e-4e69-41ae-a216-2adc870ebc63\") " pod="openstack/horizon-5bd55fb965-sh6wc"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.752325 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd54784e-4e69-41ae-a216-2adc870ebc63-logs\") pod \"horizon-5bd55fb965-sh6wc\" (UID: \"bd54784e-4e69-41ae-a216-2adc870ebc63\") " pod="openstack/horizon-5bd55fb965-sh6wc"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.752428 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd54784e-4e69-41ae-a216-2adc870ebc63-horizon-secret-key\") pod \"horizon-5bd55fb965-sh6wc\" (UID: \"bd54784e-4e69-41ae-a216-2adc870ebc63\") " pod="openstack/horizon-5bd55fb965-sh6wc"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.752455 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd54784e-4e69-41ae-a216-2adc870ebc63-config-data\") pod \"horizon-5bd55fb965-sh6wc\" (UID: \"bd54784e-4e69-41ae-a216-2adc870ebc63\") " pod="openstack/horizon-5bd55fb965-sh6wc"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.752477 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd54784e-4e69-41ae-a216-2adc870ebc63-scripts\") pod \"horizon-5bd55fb965-sh6wc\" (UID: \"bd54784e-4e69-41ae-a216-2adc870ebc63\") " pod="openstack/horizon-5bd55fb965-sh6wc"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.770073 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.776466 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-xkc5w"]
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.777624 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xkc5w"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.801830 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-5g2km"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.801969 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.802111 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.809214 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8469c9d7c9-lp2mc"]
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.859190 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-xkc5w"]
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.861780 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd54784e-4e69-41ae-a216-2adc870ebc63-scripts\") pod \"horizon-5bd55fb965-sh6wc\" (UID: \"bd54784e-4e69-41ae-a216-2adc870ebc63\") " pod="openstack/horizon-5bd55fb965-sh6wc"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.861837 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd54784e-4e69-41ae-a216-2adc870ebc63-config-data\") pod \"horizon-5bd55fb965-sh6wc\" (UID: \"bd54784e-4e69-41ae-a216-2adc870ebc63\") " pod="openstack/horizon-5bd55fb965-sh6wc"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.861884 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-config-data\") pod \"ceilometer-0\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " pod="openstack/ceilometer-0"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.862002 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " pod="openstack/ceilometer-0"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.862035 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw77z\" (UniqueName: \"kubernetes.io/projected/bd54784e-4e69-41ae-a216-2adc870ebc63-kube-api-access-vw77z\") pod \"horizon-5bd55fb965-sh6wc\" (UID: \"bd54784e-4e69-41ae-a216-2adc870ebc63\") " pod="openstack/horizon-5bd55fb965-sh6wc"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.862087 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf057e0-e090-498a-bb8b-32835373555c-log-httpd\") pod \"ceilometer-0\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " pod="openstack/ceilometer-0"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.862109 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-scripts\") pod \"ceilometer-0\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " pod="openstack/ceilometer-0"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.862142 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6x59\" (UniqueName: \"kubernetes.io/projected/4bf057e0-e090-498a-bb8b-32835373555c-kube-api-access-z6x59\") pod \"ceilometer-0\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " pod="openstack/ceilometer-0"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.862203 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf057e0-e090-498a-bb8b-32835373555c-run-httpd\") pod \"ceilometer-0\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " pod="openstack/ceilometer-0"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.862233 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd54784e-4e69-41ae-a216-2adc870ebc63-logs\") pod \"horizon-5bd55fb965-sh6wc\" (UID: \"bd54784e-4e69-41ae-a216-2adc870ebc63\") " pod="openstack/horizon-5bd55fb965-sh6wc"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.862328 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " pod="openstack/ceilometer-0"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.862415 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd54784e-4e69-41ae-a216-2adc870ebc63-horizon-secret-key\") pod \"horizon-5bd55fb965-sh6wc\" (UID: \"bd54784e-4e69-41ae-a216-2adc870ebc63\") " pod="openstack/horizon-5bd55fb965-sh6wc"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.863278 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd54784e-4e69-41ae-a216-2adc870ebc63-scripts\") pod \"horizon-5bd55fb965-sh6wc\" (UID: \"bd54784e-4e69-41ae-a216-2adc870ebc63\") " pod="openstack/horizon-5bd55fb965-sh6wc"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.864297 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd54784e-4e69-41ae-a216-2adc870ebc63-config-data\") pod \"horizon-5bd55fb965-sh6wc\" (UID: \"bd54784e-4e69-41ae-a216-2adc870ebc63\") " pod="openstack/horizon-5bd55fb965-sh6wc"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.865107 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd54784e-4e69-41ae-a216-2adc870ebc63-logs\") pod \"horizon-5bd55fb965-sh6wc\" (UID: \"bd54784e-4e69-41ae-a216-2adc870ebc63\") " pod="openstack/horizon-5bd55fb965-sh6wc"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.871561 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d44547b9c-2v5ck"]
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.873220 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d44547b9c-2v5ck"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.875741 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd54784e-4e69-41ae-a216-2adc870ebc63-horizon-secret-key\") pod \"horizon-5bd55fb965-sh6wc\" (UID: \"bd54784e-4e69-41ae-a216-2adc870ebc63\") " pod="openstack/horizon-5bd55fb965-sh6wc"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.896720 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d44547b9c-2v5ck"]
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.907588 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw77z\" (UniqueName: \"kubernetes.io/projected/bd54784e-4e69-41ae-a216-2adc870ebc63-kube-api-access-vw77z\") pod \"horizon-5bd55fb965-sh6wc\" (UID: \"bd54784e-4e69-41ae-a216-2adc870ebc63\") " pod="openstack/horizon-5bd55fb965-sh6wc"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.912087 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-f6sz5"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.918017 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-z5qqs"]
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.919078 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-z5qqs"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.921891 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.922569 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.922805 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-krwdj"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.927854 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-z5qqs"]
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.935212 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-lfgwx"]
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.937885 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lfgwx"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.940130 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.940440 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-kx29b"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.960591 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-b94dcd745-2cb6d"]
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.962274 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b94dcd745-2cb6d"
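The SyncLoop entries above (kubelet.go:2421 ADD, :2428 UPDATE, :2437 DELETE, :2431 REMOVE) trace each pod's API-driven lifecycle. A small sketch, again over restored one-entry-per-line logs, that collects the ordered event sequence per pod; the function name is illustrative:

```python
import re
from collections import defaultdict

# kubelet's SyncLoop lifecycle markers as they appear in the entries above.
SYNCLOOP = re.compile(
    r'"SyncLoop (ADD|UPDATE|DELETE|REMOVE)" source="api" pods=\["([^"]+)"\]')

def lifecycle(lines):
    """Ordered SyncLoop event sequence per pod, e.g. ADD -> UPDATE -> DELETE."""
    events = defaultdict(list)
    for line in lines:
        m = SYNCLOOP.search(line)
        if m:
            events[m.group(2)].append(m.group(1))
    return events
```

For openstack/ovn-controller-c4zfm-config-4z5pw this section yields ['DELETE', 'REMOVE'], the tail of a teardown, while freshly created pods such as openstack/neutron-db-sync-z5qqs show ADD followed by UPDATE.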
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.963556 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf057e0-e090-498a-bb8b-32835373555c-run-httpd\") pod \"ceilometer-0\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " pod="openstack/ceilometer-0"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.963682 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " pod="openstack/ceilometer-0"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.963733 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0efe065-02ed-472d-b560-6ddfcee095c4-combined-ca-bundle\") pod \"placement-db-sync-xkc5w\" (UID: \"a0efe065-02ed-472d-b560-6ddfcee095c4\") " pod="openstack/placement-db-sync-xkc5w"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.963761 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0efe065-02ed-472d-b560-6ddfcee095c4-scripts\") pod \"placement-db-sync-xkc5w\" (UID: \"a0efe065-02ed-472d-b560-6ddfcee095c4\") " pod="openstack/placement-db-sync-xkc5w"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.963896 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-dns-swift-storage-0\") pod \"dnsmasq-dns-d44547b9c-2v5ck\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " pod="openstack/dnsmasq-dns-d44547b9c-2v5ck"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.963921 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0efe065-02ed-472d-b560-6ddfcee095c4-config-data\") pod \"placement-db-sync-xkc5w\" (UID: \"a0efe065-02ed-472d-b560-6ddfcee095c4\") " pod="openstack/placement-db-sync-xkc5w"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.963965 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0efe065-02ed-472d-b560-6ddfcee095c4-logs\") pod \"placement-db-sync-xkc5w\" (UID: \"a0efe065-02ed-472d-b560-6ddfcee095c4\") " pod="openstack/placement-db-sync-xkc5w"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.963990 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-config-data\") pod \"ceilometer-0\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " pod="openstack/ceilometer-0"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.964020 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-ovsdbserver-nb\") pod \"dnsmasq-dns-d44547b9c-2v5ck\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " pod="openstack/dnsmasq-dns-d44547b9c-2v5ck"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.964045 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh2sx\" (UniqueName: \"kubernetes.io/projected/7c18acac-a79e-4dae-97d2-f81c60c2570b-kube-api-access-mh2sx\") pod \"dnsmasq-dns-d44547b9c-2v5ck\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " pod="openstack/dnsmasq-dns-d44547b9c-2v5ck"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.964103 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm49k\" (UniqueName: \"kubernetes.io/projected/a0efe065-02ed-472d-b560-6ddfcee095c4-kube-api-access-cm49k\") pod \"placement-db-sync-xkc5w\" (UID: \"a0efe065-02ed-472d-b560-6ddfcee095c4\") " pod="openstack/placement-db-sync-xkc5w"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.964204 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-config\") pod \"dnsmasq-dns-d44547b9c-2v5ck\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " pod="openstack/dnsmasq-dns-d44547b9c-2v5ck"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.964229 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " pod="openstack/ceilometer-0"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.964272 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-dns-svc\") pod \"dnsmasq-dns-d44547b9c-2v5ck\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " pod="openstack/dnsmasq-dns-d44547b9c-2v5ck"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.964295 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf057e0-e090-498a-bb8b-32835373555c-log-httpd\") pod \"ceilometer-0\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " pod="openstack/ceilometer-0"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.964315 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-scripts\") pod \"ceilometer-0\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " pod="openstack/ceilometer-0"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.964342 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6x59\" (UniqueName: \"kubernetes.io/projected/4bf057e0-e090-498a-bb8b-32835373555c-kube-api-access-z6x59\") pod \"ceilometer-0\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " pod="openstack/ceilometer-0"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.964378 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-ovsdbserver-sb\") pod \"dnsmasq-dns-d44547b9c-2v5ck\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " pod="openstack/dnsmasq-dns-d44547b9c-2v5ck"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.965155 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf057e0-e090-498a-bb8b-32835373555c-run-httpd\") pod \"ceilometer-0\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " pod="openstack/ceilometer-0"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.976633 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " pod="openstack/ceilometer-0"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.981406 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-config-data\") pod \"ceilometer-0\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " pod="openstack/ceilometer-0"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.983703 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-lfgwx"]
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.983784 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf057e0-e090-498a-bb8b-32835373555c-log-httpd\") pod \"ceilometer-0\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " pod="openstack/ceilometer-0"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.985314 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bd55fb965-sh6wc"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.988493 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-scripts\") pod \"ceilometer-0\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " pod="openstack/ceilometer-0"
Oct 08 14:40:34 crc kubenswrapper[4624]: I1008 14:40:34.989736 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " pod="openstack/ceilometer-0"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.003450 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b94dcd745-2cb6d"]
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.025035 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6x59\" (UniqueName: \"kubernetes.io/projected/4bf057e0-e090-498a-bb8b-32835373555c-kube-api-access-z6x59\") pod \"ceilometer-0\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " pod="openstack/ceilometer-0"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.037982 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.067768 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-dns-svc\") pod \"dnsmasq-dns-d44547b9c-2v5ck\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " pod="openstack/dnsmasq-dns-d44547b9c-2v5ck"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.067829 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-ovsdbserver-sb\") pod \"dnsmasq-dns-d44547b9c-2v5ck\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " pod="openstack/dnsmasq-dns-d44547b9c-2v5ck"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.067878 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1d554826-ab8b-40f6-9be4-ad2949010968-db-sync-config-data\") pod \"barbican-db-sync-lfgwx\" (UID: \"1d554826-ab8b-40f6-9be4-ad2949010968\") " pod="openstack/barbican-db-sync-lfgwx"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.067940 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98ca78d1-e9ac-4556-99fa-f012b4461b9a-scripts\") pod \"horizon-b94dcd745-2cb6d\" (UID: \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\") " pod="openstack/horizon-b94dcd745-2cb6d"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.067981 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdrpx\" (UniqueName: \"kubernetes.io/projected/791f20b1-069c-4d9d-b8ed-eb1330a191f8-kube-api-access-xdrpx\") pod \"neutron-db-sync-z5qqs\" (UID: \"791f20b1-069c-4d9d-b8ed-eb1330a191f8\") " pod="openstack/neutron-db-sync-z5qqs"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.068048 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98ca78d1-e9ac-4556-99fa-f012b4461b9a-logs\") pod \"horizon-b94dcd745-2cb6d\" (UID: \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\") " pod="openstack/horizon-b94dcd745-2cb6d"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.068077 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0efe065-02ed-472d-b560-6ddfcee095c4-combined-ca-bundle\") pod \"placement-db-sync-xkc5w\" (UID: \"a0efe065-02ed-472d-b560-6ddfcee095c4\") " pod="openstack/placement-db-sync-xkc5w"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.068108 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0efe065-02ed-472d-b560-6ddfcee095c4-scripts\") pod \"placement-db-sync-xkc5w\" (UID: \"a0efe065-02ed-472d-b560-6ddfcee095c4\") " pod="openstack/placement-db-sync-xkc5w"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.068155 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/98ca78d1-e9ac-4556-99fa-f012b4461b9a-config-data\") pod \"horizon-b94dcd745-2cb6d\" (UID: \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\") " pod="openstack/horizon-b94dcd745-2cb6d"
Oct 08 14:40:35 crc
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.068208 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791f20b1-069c-4d9d-b8ed-eb1330a191f8-combined-ca-bundle\") pod \"neutron-db-sync-z5qqs\" (UID: \"791f20b1-069c-4d9d-b8ed-eb1330a191f8\") " pod="openstack/neutron-db-sync-z5qqs"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.068231 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0efe065-02ed-472d-b560-6ddfcee095c4-config-data\") pod \"placement-db-sync-xkc5w\" (UID: \"a0efe065-02ed-472d-b560-6ddfcee095c4\") " pod="openstack/placement-db-sync-xkc5w"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.068257 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d554826-ab8b-40f6-9be4-ad2949010968-combined-ca-bundle\") pod \"barbican-db-sync-lfgwx\" (UID: \"1d554826-ab8b-40f6-9be4-ad2949010968\") " pod="openstack/barbican-db-sync-lfgwx"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.068293 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k22j\" (UniqueName: \"kubernetes.io/projected/1d554826-ab8b-40f6-9be4-ad2949010968-kube-api-access-8k22j\") pod \"barbican-db-sync-lfgwx\" (UID: \"1d554826-ab8b-40f6-9be4-ad2949010968\") " pod="openstack/barbican-db-sync-lfgwx"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.068314 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0efe065-02ed-472d-b560-6ddfcee095c4-logs\") pod \"placement-db-sync-xkc5w\" (UID: \"a0efe065-02ed-472d-b560-6ddfcee095c4\") " pod="openstack/placement-db-sync-xkc5w"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.068350 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-ovsdbserver-nb\") pod \"dnsmasq-dns-d44547b9c-2v5ck\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " pod="openstack/dnsmasq-dns-d44547b9c-2v5ck"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.068371 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhv2w\" (UniqueName: \"kubernetes.io/projected/98ca78d1-e9ac-4556-99fa-f012b4461b9a-kube-api-access-qhv2w\") pod \"horizon-b94dcd745-2cb6d\" (UID: \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\") " pod="openstack/horizon-b94dcd745-2cb6d"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.068398 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh2sx\" (UniqueName: \"kubernetes.io/projected/7c18acac-a79e-4dae-97d2-f81c60c2570b-kube-api-access-mh2sx\") pod \"dnsmasq-dns-d44547b9c-2v5ck\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " pod="openstack/dnsmasq-dns-d44547b9c-2v5ck"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.068426 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/98ca78d1-e9ac-4556-99fa-f012b4461b9a-horizon-secret-key\") pod \"horizon-b94dcd745-2cb6d\" (UID: \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\") " pod="openstack/horizon-b94dcd745-2cb6d"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.068483 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm49k\" (UniqueName: \"kubernetes.io/projected/a0efe065-02ed-472d-b560-6ddfcee095c4-kube-api-access-cm49k\") pod \"placement-db-sync-xkc5w\" (UID: \"a0efe065-02ed-472d-b560-6ddfcee095c4\") " pod="openstack/placement-db-sync-xkc5w"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.068506 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/791f20b1-069c-4d9d-b8ed-eb1330a191f8-config\") pod \"neutron-db-sync-z5qqs\" (UID: \"791f20b1-069c-4d9d-b8ed-eb1330a191f8\") " pod="openstack/neutron-db-sync-z5qqs"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.068534 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-config\") pod \"dnsmasq-dns-d44547b9c-2v5ck\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " pod="openstack/dnsmasq-dns-d44547b9c-2v5ck"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.069975 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-config\") pod \"dnsmasq-dns-d44547b9c-2v5ck\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " pod="openstack/dnsmasq-dns-d44547b9c-2v5ck"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.082936 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-dns-svc\") pod \"dnsmasq-dns-d44547b9c-2v5ck\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " pod="openstack/dnsmasq-dns-d44547b9c-2v5ck"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.083802 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-ovsdbserver-sb\") pod \"dnsmasq-dns-d44547b9c-2v5ck\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " pod="openstack/dnsmasq-dns-d44547b9c-2v5ck"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.084502 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-ovsdbserver-nb\") pod \"dnsmasq-dns-d44547b9c-2v5ck\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " pod="openstack/dnsmasq-dns-d44547b9c-2v5ck"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.084914 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0efe065-02ed-472d-b560-6ddfcee095c4-logs\") pod \"placement-db-sync-xkc5w\" (UID: \"a0efe065-02ed-472d-b560-6ddfcee095c4\") " pod="openstack/placement-db-sync-xkc5w"
Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.085255 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-dns-swift-storage-0\") pod \"dnsmasq-dns-d44547b9c-2v5ck\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " pod="openstack/dnsmasq-dns-d44547b9c-2v5ck"
\"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-dns-swift-storage-0\") pod \"dnsmasq-dns-d44547b9c-2v5ck\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " pod="openstack/dnsmasq-dns-d44547b9c-2v5ck" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.092389 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0efe065-02ed-472d-b560-6ddfcee095c4-config-data\") pod \"placement-db-sync-xkc5w\" (UID: \"a0efe065-02ed-472d-b560-6ddfcee095c4\") " pod="openstack/placement-db-sync-xkc5w" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.105684 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0efe065-02ed-472d-b560-6ddfcee095c4-scripts\") pod \"placement-db-sync-xkc5w\" (UID: \"a0efe065-02ed-472d-b560-6ddfcee095c4\") " pod="openstack/placement-db-sync-xkc5w" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.107396 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0efe065-02ed-472d-b560-6ddfcee095c4-combined-ca-bundle\") pod \"placement-db-sync-xkc5w\" (UID: \"a0efe065-02ed-472d-b560-6ddfcee095c4\") " pod="openstack/placement-db-sync-xkc5w" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.112526 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm49k\" (UniqueName: \"kubernetes.io/projected/a0efe065-02ed-472d-b560-6ddfcee095c4-kube-api-access-cm49k\") pod \"placement-db-sync-xkc5w\" (UID: \"a0efe065-02ed-472d-b560-6ddfcee095c4\") " pod="openstack/placement-db-sync-xkc5w" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.124021 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh2sx\" (UniqueName: \"kubernetes.io/projected/7c18acac-a79e-4dae-97d2-f81c60c2570b-kube-api-access-mh2sx\") pod \"dnsmasq-dns-d44547b9c-2v5ck\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " pod="openstack/dnsmasq-dns-d44547b9c-2v5ck" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.170035 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98ca78d1-e9ac-4556-99fa-f012b4461b9a-logs\") pod \"horizon-b94dcd745-2cb6d\" (UID: \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\") " pod="openstack/horizon-b94dcd745-2cb6d" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.170165 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/98ca78d1-e9ac-4556-99fa-f012b4461b9a-config-data\") pod \"horizon-b94dcd745-2cb6d\" (UID: \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\") " pod="openstack/horizon-b94dcd745-2cb6d" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.170206 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791f20b1-069c-4d9d-b8ed-eb1330a191f8-combined-ca-bundle\") pod \"neutron-db-sync-z5qqs\" (UID: \"791f20b1-069c-4d9d-b8ed-eb1330a191f8\") " pod="openstack/neutron-db-sync-z5qqs" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.170236 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d554826-ab8b-40f6-9be4-ad2949010968-combined-ca-bundle\") pod \"barbican-db-sync-lfgwx\" (UID: \"1d554826-ab8b-40f6-9be4-ad2949010968\") " 
pod="openstack/barbican-db-sync-lfgwx" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.170268 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k22j\" (UniqueName: \"kubernetes.io/projected/1d554826-ab8b-40f6-9be4-ad2949010968-kube-api-access-8k22j\") pod \"barbican-db-sync-lfgwx\" (UID: \"1d554826-ab8b-40f6-9be4-ad2949010968\") " pod="openstack/barbican-db-sync-lfgwx" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.170308 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhv2w\" (UniqueName: \"kubernetes.io/projected/98ca78d1-e9ac-4556-99fa-f012b4461b9a-kube-api-access-qhv2w\") pod \"horizon-b94dcd745-2cb6d\" (UID: \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\") " pod="openstack/horizon-b94dcd745-2cb6d" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.170343 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/98ca78d1-e9ac-4556-99fa-f012b4461b9a-horizon-secret-key\") pod \"horizon-b94dcd745-2cb6d\" (UID: \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\") " pod="openstack/horizon-b94dcd745-2cb6d" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.170382 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/791f20b1-069c-4d9d-b8ed-eb1330a191f8-config\") pod \"neutron-db-sync-z5qqs\" (UID: \"791f20b1-069c-4d9d-b8ed-eb1330a191f8\") " pod="openstack/neutron-db-sync-z5qqs" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.170468 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1d554826-ab8b-40f6-9be4-ad2949010968-db-sync-config-data\") pod \"barbican-db-sync-lfgwx\" (UID: \"1d554826-ab8b-40f6-9be4-ad2949010968\") " pod="openstack/barbican-db-sync-lfgwx" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.170514 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98ca78d1-e9ac-4556-99fa-f012b4461b9a-scripts\") pod \"horizon-b94dcd745-2cb6d\" (UID: \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\") " pod="openstack/horizon-b94dcd745-2cb6d" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.170546 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdrpx\" (UniqueName: \"kubernetes.io/projected/791f20b1-069c-4d9d-b8ed-eb1330a191f8-kube-api-access-xdrpx\") pod \"neutron-db-sync-z5qqs\" (UID: \"791f20b1-069c-4d9d-b8ed-eb1330a191f8\") " pod="openstack/neutron-db-sync-z5qqs" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.171441 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98ca78d1-e9ac-4556-99fa-f012b4461b9a-logs\") pod \"horizon-b94dcd745-2cb6d\" (UID: \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\") " pod="openstack/horizon-b94dcd745-2cb6d" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.172611 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/98ca78d1-e9ac-4556-99fa-f012b4461b9a-config-data\") pod \"horizon-b94dcd745-2cb6d\" (UID: \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\") " pod="openstack/horizon-b94dcd745-2cb6d" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.180520 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/98ca78d1-e9ac-4556-99fa-f012b4461b9a-scripts\") pod \"horizon-b94dcd745-2cb6d\" (UID: \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\") " pod="openstack/horizon-b94dcd745-2cb6d" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.184689 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791f20b1-069c-4d9d-b8ed-eb1330a191f8-combined-ca-bundle\") pod \"neutron-db-sync-z5qqs\" (UID: \"791f20b1-069c-4d9d-b8ed-eb1330a191f8\") " pod="openstack/neutron-db-sync-z5qqs" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.185189 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d554826-ab8b-40f6-9be4-ad2949010968-combined-ca-bundle\") pod \"barbican-db-sync-lfgwx\" (UID: \"1d554826-ab8b-40f6-9be4-ad2949010968\") " pod="openstack/barbican-db-sync-lfgwx" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.187039 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/791f20b1-069c-4d9d-b8ed-eb1330a191f8-config\") pod \"neutron-db-sync-z5qqs\" (UID: \"791f20b1-069c-4d9d-b8ed-eb1330a191f8\") " pod="openstack/neutron-db-sync-z5qqs" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.187588 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1d554826-ab8b-40f6-9be4-ad2949010968-db-sync-config-data\") pod \"barbican-db-sync-lfgwx\" (UID: \"1d554826-ab8b-40f6-9be4-ad2949010968\") " pod="openstack/barbican-db-sync-lfgwx" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.198564 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhv2w\" (UniqueName: \"kubernetes.io/projected/98ca78d1-e9ac-4556-99fa-f012b4461b9a-kube-api-access-qhv2w\") pod \"horizon-b94dcd745-2cb6d\" (UID: \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\") " pod="openstack/horizon-b94dcd745-2cb6d" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.203312 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdrpx\" (UniqueName: \"kubernetes.io/projected/791f20b1-069c-4d9d-b8ed-eb1330a191f8-kube-api-access-xdrpx\") pod \"neutron-db-sync-z5qqs\" (UID: \"791f20b1-069c-4d9d-b8ed-eb1330a191f8\") " pod="openstack/neutron-db-sync-z5qqs" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.209123 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/98ca78d1-e9ac-4556-99fa-f012b4461b9a-horizon-secret-key\") pod \"horizon-b94dcd745-2cb6d\" (UID: \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\") " pod="openstack/horizon-b94dcd745-2cb6d" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.214152 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k22j\" (UniqueName: \"kubernetes.io/projected/1d554826-ab8b-40f6-9be4-ad2949010968-kube-api-access-8k22j\") pod \"barbican-db-sync-lfgwx\" (UID: \"1d554826-ab8b-40f6-9be4-ad2949010968\") " pod="openstack/barbican-db-sync-lfgwx" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.240258 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d44547b9c-2v5ck" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.281932 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-z5qqs" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.287817 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lfgwx" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.320824 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b94dcd745-2cb6d" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.403591 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xkc5w" Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.587552 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8469c9d7c9-lp2mc"] Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.776459 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-hv6dc"] Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.797798 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-f6sz5"] Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.977004 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bd55fb965-sh6wc"] Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.981130 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hv6dc" event={"ID":"cd53e103-7b25-4f61-a0f4-675ace133ab7","Type":"ContainerStarted","Data":"2ae41170a3b74cf731ef31bcbd52a78309f0cfb23eee8e36f59752e0f4914529"} Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.982382 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" event={"ID":"f5cf7506-9d70-4e79-a413-9f26728cc627","Type":"ContainerStarted","Data":"c0d44c27d18afd6c381c25e74f5067f3c7813fa3dd596f5ca281360420e2f10f"} Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.984936 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-w2w8t"] Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.987095 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-f6sz5" event={"ID":"265c0058-98f4-4bcd-b413-8e3633ab56cd","Type":"ContainerStarted","Data":"2c7ec8b394d394172b1920c780714c871d854655826995719f23c86628a91c05"} Oct 08 14:40:35 crc kubenswrapper[4624]: I1008 14:40:35.987277 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5988746689-hdfjh" podUID="64838efa-26a3-4de6-aec8-ccf4457b41d1" containerName="dnsmasq-dns" containerID="cri-o://efea21f99ac540a5da4da839c6193775952edad20180c5e66fe4bfc55b1edd84" gracePeriod=10 Oct 08 14:40:36 crc kubenswrapper[4624]: I1008 14:40:36.202189 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-lfgwx"] Oct 08 14:40:36 crc kubenswrapper[4624]: I1008 14:40:36.245534 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:40:36 crc kubenswrapper[4624]: W1008 14:40:36.248788 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d554826_ab8b_40f6_9be4_ad2949010968.slice/crio-6f168b7e240d7c4eefff2cc2365586bb1346e5d577a2c6d28c3b0081fb59d47e WatchSource:0}: Error finding container 6f168b7e240d7c4eefff2cc2365586bb1346e5d577a2c6d28c3b0081fb59d47e: Status 404 returned error can't find the container with id 6f168b7e240d7c4eefff2cc2365586bb1346e5d577a2c6d28c3b0081fb59d47e Oct 08 14:40:36 crc 
Oct 08 14:40:36 crc kubenswrapper[4624]: W1008 14:40:36.327231 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c18acac_a79e_4dae_97d2_f81c60c2570b.slice/crio-e5493dabe7055eaaff17ec8c196fd8f2e26b803d0b15c33e7fcd5b08102028a3 WatchSource:0}: Error finding container e5493dabe7055eaaff17ec8c196fd8f2e26b803d0b15c33e7fcd5b08102028a3: Status 404 returned error can't find the container with id e5493dabe7055eaaff17ec8c196fd8f2e26b803d0b15c33e7fcd5b08102028a3
Oct 08 14:40:36 crc kubenswrapper[4624]: W1008 14:40:36.428768 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod791f20b1_069c_4d9d_b8ed_eb1330a191f8.slice/crio-2f1d179a42df412d4312bc8b76b3601cf30c752dc540ce20776bf98bd85733e8 WatchSource:0}: Error finding container 2f1d179a42df412d4312bc8b76b3601cf30c752dc540ce20776bf98bd85733e8: Status 404 returned error can't find the container with id 2f1d179a42df412d4312bc8b76b3601cf30c752dc540ce20776bf98bd85733e8
Oct 08 14:40:36 crc kubenswrapper[4624]: I1008 14:40:36.446869 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-z5qqs"]
Oct 08 14:40:36 crc kubenswrapper[4624]: I1008 14:40:36.454701 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b94dcd745-2cb6d"]
Oct 08 14:40:36 crc kubenswrapper[4624]: I1008 14:40:36.592467 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-xkc5w"]
Oct 08 14:40:36 crc kubenswrapper[4624]: I1008 14:40:36.875121 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5988746689-hdfjh"
Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.044410 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-ovsdbserver-sb\") pod \"64838efa-26a3-4de6-aec8-ccf4457b41d1\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") "
Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.044477 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-dns-swift-storage-0\") pod \"64838efa-26a3-4de6-aec8-ccf4457b41d1\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") "
Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.044532 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-config\") pod \"64838efa-26a3-4de6-aec8-ccf4457b41d1\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") "
Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.044583 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngx6k\" (UniqueName: \"kubernetes.io/projected/64838efa-26a3-4de6-aec8-ccf4457b41d1-kube-api-access-ngx6k\") pod \"64838efa-26a3-4de6-aec8-ccf4457b41d1\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") "
Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.044753 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-ovsdbserver-nb\") pod \"64838efa-26a3-4de6-aec8-ccf4457b41d1\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") "
Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.044791 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-dns-svc\") pod \"64838efa-26a3-4de6-aec8-ccf4457b41d1\" (UID: \"64838efa-26a3-4de6-aec8-ccf4457b41d1\") "
Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.054353 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64838efa-26a3-4de6-aec8-ccf4457b41d1-kube-api-access-ngx6k" (OuterVolumeSpecName: "kube-api-access-ngx6k") pod "64838efa-26a3-4de6-aec8-ccf4457b41d1" (UID: "64838efa-26a3-4de6-aec8-ccf4457b41d1"). InnerVolumeSpecName "kube-api-access-ngx6k". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.072888 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lfgwx" event={"ID":"1d554826-ab8b-40f6-9be4-ad2949010968","Type":"ContainerStarted","Data":"6f168b7e240d7c4eefff2cc2365586bb1346e5d577a2c6d28c3b0081fb59d47e"} Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.092451 4624 generic.go:334] "Generic (PLEG): container finished" podID="7c18acac-a79e-4dae-97d2-f81c60c2570b" containerID="9e4bb72326e26b2f3f8ac9531db77410c6912d4afb6dba0521132b94343c9678" exitCode=0 Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.093497 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d44547b9c-2v5ck" event={"ID":"7c18acac-a79e-4dae-97d2-f81c60c2570b","Type":"ContainerDied","Data":"9e4bb72326e26b2f3f8ac9531db77410c6912d4afb6dba0521132b94343c9678"} Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.093869 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d44547b9c-2v5ck" event={"ID":"7c18acac-a79e-4dae-97d2-f81c60c2570b","Type":"ContainerStarted","Data":"e5493dabe7055eaaff17ec8c196fd8f2e26b803d0b15c33e7fcd5b08102028a3"} Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.109742 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xkc5w" event={"ID":"a0efe065-02ed-472d-b560-6ddfcee095c4","Type":"ContainerStarted","Data":"f9f30e7db3be58da8a3f6e995935abbea6ed9c2fd5bdba25cf40d9842f2897f7"} Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.129044 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b94dcd745-2cb6d" event={"ID":"98ca78d1-e9ac-4556-99fa-f012b4461b9a","Type":"ContainerStarted","Data":"7f73f7cbfd403eb23a515c91dba54168bd39509de2c2b3bd1a8845076f6a8860"} Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.158373 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngx6k\" (UniqueName: \"kubernetes.io/projected/64838efa-26a3-4de6-aec8-ccf4457b41d1-kube-api-access-ngx6k\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.159614 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-config" (OuterVolumeSpecName: "config") pod "64838efa-26a3-4de6-aec8-ccf4457b41d1" (UID: "64838efa-26a3-4de6-aec8-ccf4457b41d1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.167079 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bd55fb965-sh6wc" event={"ID":"bd54784e-4e69-41ae-a216-2adc870ebc63","Type":"ContainerStarted","Data":"20619dec043725e959beb5f139dfd8f868a20d5a99b59a06b08d90db15ee20c8"} Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.172887 4624 generic.go:334] "Generic (PLEG): container finished" podID="f5cf7506-9d70-4e79-a413-9f26728cc627" containerID="aa5a48172251065d2bbc7b4fee053c2a822db485e2a3dfcdc8924921ad8927e9" exitCode=0 Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.172972 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" event={"ID":"f5cf7506-9d70-4e79-a413-9f26728cc627","Type":"ContainerDied","Data":"aa5a48172251065d2bbc7b4fee053c2a822db485e2a3dfcdc8924921ad8927e9"} Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.252844 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64838efa-26a3-4de6-aec8-ccf4457b41d1" (UID: "64838efa-26a3-4de6-aec8-ccf4457b41d1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.255984 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "64838efa-26a3-4de6-aec8-ccf4457b41d1" (UID: "64838efa-26a3-4de6-aec8-ccf4457b41d1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.262012 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.262054 4624 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.262066 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.268617 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "64838efa-26a3-4de6-aec8-ccf4457b41d1" (UID: "64838efa-26a3-4de6-aec8-ccf4457b41d1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.288091 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "64838efa-26a3-4de6-aec8-ccf4457b41d1" (UID: "64838efa-26a3-4de6-aec8-ccf4457b41d1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.295898 4624 generic.go:334] "Generic (PLEG): container finished" podID="64838efa-26a3-4de6-aec8-ccf4457b41d1" containerID="efea21f99ac540a5da4da839c6193775952edad20180c5e66fe4bfc55b1edd84" exitCode=0 Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.295979 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5988746689-hdfjh" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.295998 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5988746689-hdfjh" event={"ID":"64838efa-26a3-4de6-aec8-ccf4457b41d1","Type":"ContainerDied","Data":"efea21f99ac540a5da4da839c6193775952edad20180c5e66fe4bfc55b1edd84"} Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.296621 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5988746689-hdfjh" event={"ID":"64838efa-26a3-4de6-aec8-ccf4457b41d1","Type":"ContainerDied","Data":"e30b32cfa84891f9aae5bd0a309ae25a2873ce7c7499c5cdc8c7f9b792d6dcee"} Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.296657 4624 scope.go:117] "RemoveContainer" containerID="efea21f99ac540a5da4da839c6193775952edad20180c5e66fe4bfc55b1edd84" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.332397 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-z5qqs" event={"ID":"791f20b1-069c-4d9d-b8ed-eb1330a191f8","Type":"ContainerStarted","Data":"33e4a671d890907137a641308f4b91d42becd83533d10f46fe992a0b7073f5be"} Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.332534 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-z5qqs" event={"ID":"791f20b1-069c-4d9d-b8ed-eb1330a191f8","Type":"ContainerStarted","Data":"2f1d179a42df412d4312bc8b76b3601cf30c752dc540ce20776bf98bd85733e8"} Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.347026 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w2w8t" event={"ID":"d867582d-bc6b-4693-b2db-1d5a8142f844","Type":"ContainerStarted","Data":"386ea90a8c50cfec2dd3992a5e652a3eca289efa6de448db18b24b337cc9f503"} Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.348850 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w2w8t" event={"ID":"d867582d-bc6b-4693-b2db-1d5a8142f844","Type":"ContainerStarted","Data":"e29731c96e836cc693de623a8d4531756ee43d5c08c78ac24eb3a8c15ecc2759"} Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.349671 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5988746689-hdfjh"] Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.375191 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.375403 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64838efa-26a3-4de6-aec8-ccf4457b41d1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.376149 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf057e0-e090-498a-bb8b-32835373555c","Type":"ContainerStarted","Data":"3f722bc87a97d43428d1b70c02307aea5c320fd85a197adb2cccbbfc6e0d343e"} Oct 08 
14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.387554 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5988746689-hdfjh"] Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.460941 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5bd55fb965-sh6wc"] Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.462954 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-z5qqs" podStartSLOduration=3.462920116 podStartE2EDuration="3.462920116s" podCreationTimestamp="2025-10-08 14:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:40:37.441251639 +0000 UTC m=+1062.592186726" watchObservedRunningTime="2025-10-08 14:40:37.462920116 +0000 UTC m=+1062.613855193" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.494447 4624 scope.go:117] "RemoveContainer" containerID="01d161730abc9d9551c983257f9448f51a7e441621a2a10b68a52b36f866c96f" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.520536 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-w2w8t" podStartSLOduration=3.520510988 podStartE2EDuration="3.520510988s" podCreationTimestamp="2025-10-08 14:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:40:37.495107277 +0000 UTC m=+1062.646042354" watchObservedRunningTime="2025-10-08 14:40:37.520510988 +0000 UTC m=+1062.671446065" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.601782 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64838efa-26a3-4de6-aec8-ccf4457b41d1" path="/var/lib/kubelet/pods/64838efa-26a3-4de6-aec8-ccf4457b41d1/volumes" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.602381 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-755547f977-wlvph"] Oct 08 14:40:37 crc kubenswrapper[4624]: E1008 14:40:37.602725 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64838efa-26a3-4de6-aec8-ccf4457b41d1" containerName="init" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.602737 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="64838efa-26a3-4de6-aec8-ccf4457b41d1" containerName="init" Oct 08 14:40:37 crc kubenswrapper[4624]: E1008 14:40:37.602749 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64838efa-26a3-4de6-aec8-ccf4457b41d1" containerName="dnsmasq-dns" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.602754 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="64838efa-26a3-4de6-aec8-ccf4457b41d1" containerName="dnsmasq-dns" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.602908 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="64838efa-26a3-4de6-aec8-ccf4457b41d1" containerName="dnsmasq-dns" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.625905 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-755547f977-wlvph"] Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.626017 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-755547f977-wlvph" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.683786 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b7275cd-f4ee-4888-955e-abcce7337089-logs\") pod \"horizon-755547f977-wlvph\" (UID: \"2b7275cd-f4ee-4888-955e-abcce7337089\") " pod="openstack/horizon-755547f977-wlvph" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.683968 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b7275cd-f4ee-4888-955e-abcce7337089-config-data\") pod \"horizon-755547f977-wlvph\" (UID: \"2b7275cd-f4ee-4888-955e-abcce7337089\") " pod="openstack/horizon-755547f977-wlvph" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.684030 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl9hx\" (UniqueName: \"kubernetes.io/projected/2b7275cd-f4ee-4888-955e-abcce7337089-kube-api-access-pl9hx\") pod \"horizon-755547f977-wlvph\" (UID: \"2b7275cd-f4ee-4888-955e-abcce7337089\") " pod="openstack/horizon-755547f977-wlvph" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.684094 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b7275cd-f4ee-4888-955e-abcce7337089-scripts\") pod \"horizon-755547f977-wlvph\" (UID: \"2b7275cd-f4ee-4888-955e-abcce7337089\") " pod="openstack/horizon-755547f977-wlvph" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.684180 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b7275cd-f4ee-4888-955e-abcce7337089-horizon-secret-key\") pod \"horizon-755547f977-wlvph\" (UID: \"2b7275cd-f4ee-4888-955e-abcce7337089\") " pod="openstack/horizon-755547f977-wlvph" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.731346 4624 scope.go:117] "RemoveContainer" containerID="efea21f99ac540a5da4da839c6193775952edad20180c5e66fe4bfc55b1edd84" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.733278 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:40:37 crc kubenswrapper[4624]: E1008 14:40:37.734317 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efea21f99ac540a5da4da839c6193775952edad20180c5e66fe4bfc55b1edd84\": container with ID starting with efea21f99ac540a5da4da839c6193775952edad20180c5e66fe4bfc55b1edd84 not found: ID does not exist" containerID="efea21f99ac540a5da4da839c6193775952edad20180c5e66fe4bfc55b1edd84" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.735928 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efea21f99ac540a5da4da839c6193775952edad20180c5e66fe4bfc55b1edd84"} err="failed to get container status \"efea21f99ac540a5da4da839c6193775952edad20180c5e66fe4bfc55b1edd84\": rpc error: code = NotFound desc = could not find container \"efea21f99ac540a5da4da839c6193775952edad20180c5e66fe4bfc55b1edd84\": container with ID starting with efea21f99ac540a5da4da839c6193775952edad20180c5e66fe4bfc55b1edd84 not found: ID does not exist" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.736048 4624 scope.go:117] "RemoveContainer" 
containerID="01d161730abc9d9551c983257f9448f51a7e441621a2a10b68a52b36f866c96f" Oct 08 14:40:37 crc kubenswrapper[4624]: E1008 14:40:37.741420 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01d161730abc9d9551c983257f9448f51a7e441621a2a10b68a52b36f866c96f\": container with ID starting with 01d161730abc9d9551c983257f9448f51a7e441621a2a10b68a52b36f866c96f not found: ID does not exist" containerID="01d161730abc9d9551c983257f9448f51a7e441621a2a10b68a52b36f866c96f" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.741582 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d161730abc9d9551c983257f9448f51a7e441621a2a10b68a52b36f866c96f"} err="failed to get container status \"01d161730abc9d9551c983257f9448f51a7e441621a2a10b68a52b36f866c96f\": rpc error: code = NotFound desc = could not find container \"01d161730abc9d9551c983257f9448f51a7e441621a2a10b68a52b36f866c96f\": container with ID starting with 01d161730abc9d9551c983257f9448f51a7e441621a2a10b68a52b36f866c96f not found: ID does not exist" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.786728 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b7275cd-f4ee-4888-955e-abcce7337089-config-data\") pod \"horizon-755547f977-wlvph\" (UID: \"2b7275cd-f4ee-4888-955e-abcce7337089\") " pod="openstack/horizon-755547f977-wlvph" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.786794 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl9hx\" (UniqueName: \"kubernetes.io/projected/2b7275cd-f4ee-4888-955e-abcce7337089-kube-api-access-pl9hx\") pod \"horizon-755547f977-wlvph\" (UID: \"2b7275cd-f4ee-4888-955e-abcce7337089\") " pod="openstack/horizon-755547f977-wlvph" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.786832 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b7275cd-f4ee-4888-955e-abcce7337089-scripts\") pod \"horizon-755547f977-wlvph\" (UID: \"2b7275cd-f4ee-4888-955e-abcce7337089\") " pod="openstack/horizon-755547f977-wlvph" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.786896 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b7275cd-f4ee-4888-955e-abcce7337089-horizon-secret-key\") pod \"horizon-755547f977-wlvph\" (UID: \"2b7275cd-f4ee-4888-955e-abcce7337089\") " pod="openstack/horizon-755547f977-wlvph" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.786947 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b7275cd-f4ee-4888-955e-abcce7337089-logs\") pod \"horizon-755547f977-wlvph\" (UID: \"2b7275cd-f4ee-4888-955e-abcce7337089\") " pod="openstack/horizon-755547f977-wlvph" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.787735 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b7275cd-f4ee-4888-955e-abcce7337089-logs\") pod \"horizon-755547f977-wlvph\" (UID: \"2b7275cd-f4ee-4888-955e-abcce7337089\") " pod="openstack/horizon-755547f977-wlvph" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.788608 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/2b7275cd-f4ee-4888-955e-abcce7337089-config-data\") pod \"horizon-755547f977-wlvph\" (UID: \"2b7275cd-f4ee-4888-955e-abcce7337089\") " pod="openstack/horizon-755547f977-wlvph" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.789260 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b7275cd-f4ee-4888-955e-abcce7337089-scripts\") pod \"horizon-755547f977-wlvph\" (UID: \"2b7275cd-f4ee-4888-955e-abcce7337089\") " pod="openstack/horizon-755547f977-wlvph" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.823320 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b7275cd-f4ee-4888-955e-abcce7337089-horizon-secret-key\") pod \"horizon-755547f977-wlvph\" (UID: \"2b7275cd-f4ee-4888-955e-abcce7337089\") " pod="openstack/horizon-755547f977-wlvph" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.833232 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl9hx\" (UniqueName: \"kubernetes.io/projected/2b7275cd-f4ee-4888-955e-abcce7337089-kube-api-access-pl9hx\") pod \"horizon-755547f977-wlvph\" (UID: \"2b7275cd-f4ee-4888-955e-abcce7337089\") " pod="openstack/horizon-755547f977-wlvph" Oct 08 14:40:37 crc kubenswrapper[4624]: I1008 14:40:37.976304 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-755547f977-wlvph" Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.167608 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.304897 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-config\") pod \"f5cf7506-9d70-4e79-a413-9f26728cc627\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.304984 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-ovsdbserver-nb\") pod \"f5cf7506-9d70-4e79-a413-9f26728cc627\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.305599 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-ovsdbserver-sb\") pod \"f5cf7506-9d70-4e79-a413-9f26728cc627\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.305708 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cz7s\" (UniqueName: \"kubernetes.io/projected/f5cf7506-9d70-4e79-a413-9f26728cc627-kube-api-access-8cz7s\") pod \"f5cf7506-9d70-4e79-a413-9f26728cc627\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.305780 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-dns-swift-storage-0\") pod \"f5cf7506-9d70-4e79-a413-9f26728cc627\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.306010 4624 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-dns-svc\") pod \"f5cf7506-9d70-4e79-a413-9f26728cc627\" (UID: \"f5cf7506-9d70-4e79-a413-9f26728cc627\") " Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.322585 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5cf7506-9d70-4e79-a413-9f26728cc627-kube-api-access-8cz7s" (OuterVolumeSpecName: "kube-api-access-8cz7s") pod "f5cf7506-9d70-4e79-a413-9f26728cc627" (UID: "f5cf7506-9d70-4e79-a413-9f26728cc627"). InnerVolumeSpecName "kube-api-access-8cz7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.353560 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f5cf7506-9d70-4e79-a413-9f26728cc627" (UID: "f5cf7506-9d70-4e79-a413-9f26728cc627"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.357848 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f5cf7506-9d70-4e79-a413-9f26728cc627" (UID: "f5cf7506-9d70-4e79-a413-9f26728cc627"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.363781 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-config" (OuterVolumeSpecName: "config") pod "f5cf7506-9d70-4e79-a413-9f26728cc627" (UID: "f5cf7506-9d70-4e79-a413-9f26728cc627"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.379838 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f5cf7506-9d70-4e79-a413-9f26728cc627" (UID: "f5cf7506-9d70-4e79-a413-9f26728cc627"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.417773 4624 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.417818 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.417828 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.417836 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.417845 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cz7s\" (UniqueName: \"kubernetes.io/projected/f5cf7506-9d70-4e79-a413-9f26728cc627-kube-api-access-8cz7s\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.427130 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f5cf7506-9d70-4e79-a413-9f26728cc627" (UID: "f5cf7506-9d70-4e79-a413-9f26728cc627"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.434079 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" event={"ID":"f5cf7506-9d70-4e79-a413-9f26728cc627","Type":"ContainerDied","Data":"c0d44c27d18afd6c381c25e74f5067f3c7813fa3dd596f5ca281360420e2f10f"} Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.434128 4624 scope.go:117] "RemoveContainer" containerID="aa5a48172251065d2bbc7b4fee053c2a822db485e2a3dfcdc8924921ad8927e9" Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.440812 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8469c9d7c9-lp2mc" Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.471778 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d44547b9c-2v5ck" event={"ID":"7c18acac-a79e-4dae-97d2-f81c60c2570b","Type":"ContainerStarted","Data":"ca3363930d4599ac260994fc3b029fcac769028e69bdbdbda0ad3a1a9a4104ff"} Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.472796 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d44547b9c-2v5ck" Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.519050 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d44547b9c-2v5ck" podStartSLOduration=4.519032406 podStartE2EDuration="4.519032406s" podCreationTimestamp="2025-10-08 14:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:40:38.492089017 +0000 UTC m=+1063.643024094" watchObservedRunningTime="2025-10-08 14:40:38.519032406 +0000 UTC m=+1063.669967483" Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.519260 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5cf7506-9d70-4e79-a413-9f26728cc627-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.620832 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8469c9d7c9-lp2mc"] Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.625329 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8469c9d7c9-lp2mc"] Oct 08 14:40:38 crc kubenswrapper[4624]: I1008 14:40:38.826907 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-755547f977-wlvph"] Oct 08 14:40:39 crc kubenswrapper[4624]: I1008 14:40:39.481665 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5cf7506-9d70-4e79-a413-9f26728cc627" path="/var/lib/kubelet/pods/f5cf7506-9d70-4e79-a413-9f26728cc627/volumes" Oct 08 14:40:39 crc kubenswrapper[4624]: I1008 14:40:39.509159 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-755547f977-wlvph" event={"ID":"2b7275cd-f4ee-4888-955e-abcce7337089","Type":"ContainerStarted","Data":"87811cf100eca82be5e359e186aab31026d4432a00f82c38513d142b151c184e"} Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.194993 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b94dcd745-2cb6d"] Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.257535 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f6cd65c74-7vqb5"] Oct 08 14:40:44 crc kubenswrapper[4624]: E1008 14:40:44.258086 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5cf7506-9d70-4e79-a413-9f26728cc627" containerName="init" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.258105 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5cf7506-9d70-4e79-a413-9f26728cc627" containerName="init" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.258341 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5cf7506-9d70-4e79-a413-9f26728cc627" containerName="init" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.259461 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.262983 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.272147 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f6cd65c74-7vqb5"] Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.348490 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-755547f977-wlvph"] Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.358547 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b203558-1aea-4672-871f-d2dca324a585-horizon-tls-certs\") pod \"horizon-6f6cd65c74-7vqb5\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.358651 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddkc4\" (UniqueName: \"kubernetes.io/projected/4b203558-1aea-4672-871f-d2dca324a585-kube-api-access-ddkc4\") pod \"horizon-6f6cd65c74-7vqb5\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.358694 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b203558-1aea-4672-871f-d2dca324a585-config-data\") pod \"horizon-6f6cd65c74-7vqb5\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.358736 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b203558-1aea-4672-871f-d2dca324a585-scripts\") pod \"horizon-6f6cd65c74-7vqb5\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.358787 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4b203558-1aea-4672-871f-d2dca324a585-horizon-secret-key\") pod \"horizon-6f6cd65c74-7vqb5\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.359390 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b203558-1aea-4672-871f-d2dca324a585-combined-ca-bundle\") pod \"horizon-6f6cd65c74-7vqb5\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.359459 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b203558-1aea-4672-871f-d2dca324a585-logs\") pod \"horizon-6f6cd65c74-7vqb5\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.377800 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-67f45f8444-g8bbs"] Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.383959 
4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.403336 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67f45f8444-g8bbs"] Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.461337 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/378be2ad-3335-409f-b2eb-60b3997ed4f8-scripts\") pod \"horizon-67f45f8444-g8bbs\" (UID: \"378be2ad-3335-409f-b2eb-60b3997ed4f8\") " pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.461417 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b203558-1aea-4672-871f-d2dca324a585-combined-ca-bundle\") pod \"horizon-6f6cd65c74-7vqb5\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.461439 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b203558-1aea-4672-871f-d2dca324a585-logs\") pod \"horizon-6f6cd65c74-7vqb5\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.461491 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b203558-1aea-4672-871f-d2dca324a585-horizon-tls-certs\") pod \"horizon-6f6cd65c74-7vqb5\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.461520 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/378be2ad-3335-409f-b2eb-60b3997ed4f8-logs\") pod \"horizon-67f45f8444-g8bbs\" (UID: \"378be2ad-3335-409f-b2eb-60b3997ed4f8\") " pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.461544 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvsbb\" (UniqueName: \"kubernetes.io/projected/378be2ad-3335-409f-b2eb-60b3997ed4f8-kube-api-access-qvsbb\") pod \"horizon-67f45f8444-g8bbs\" (UID: \"378be2ad-3335-409f-b2eb-60b3997ed4f8\") " pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.461580 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddkc4\" (UniqueName: \"kubernetes.io/projected/4b203558-1aea-4672-871f-d2dca324a585-kube-api-access-ddkc4\") pod \"horizon-6f6cd65c74-7vqb5\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.461605 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/378be2ad-3335-409f-b2eb-60b3997ed4f8-horizon-tls-certs\") pod \"horizon-67f45f8444-g8bbs\" (UID: \"378be2ad-3335-409f-b2eb-60b3997ed4f8\") " pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.461654 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/4b203558-1aea-4672-871f-d2dca324a585-config-data\") pod \"horizon-6f6cd65c74-7vqb5\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.461681 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b203558-1aea-4672-871f-d2dca324a585-scripts\") pod \"horizon-6f6cd65c74-7vqb5\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.461709 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/378be2ad-3335-409f-b2eb-60b3997ed4f8-config-data\") pod \"horizon-67f45f8444-g8bbs\" (UID: \"378be2ad-3335-409f-b2eb-60b3997ed4f8\") " pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.461736 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/378be2ad-3335-409f-b2eb-60b3997ed4f8-horizon-secret-key\") pod \"horizon-67f45f8444-g8bbs\" (UID: \"378be2ad-3335-409f-b2eb-60b3997ed4f8\") " pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.461766 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/378be2ad-3335-409f-b2eb-60b3997ed4f8-combined-ca-bundle\") pod \"horizon-67f45f8444-g8bbs\" (UID: \"378be2ad-3335-409f-b2eb-60b3997ed4f8\") " pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.461786 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4b203558-1aea-4672-871f-d2dca324a585-horizon-secret-key\") pod \"horizon-6f6cd65c74-7vqb5\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.468319 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b203558-1aea-4672-871f-d2dca324a585-logs\") pod \"horizon-6f6cd65c74-7vqb5\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.470827 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b203558-1aea-4672-871f-d2dca324a585-combined-ca-bundle\") pod \"horizon-6f6cd65c74-7vqb5\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.471050 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b203558-1aea-4672-871f-d2dca324a585-config-data\") pod \"horizon-6f6cd65c74-7vqb5\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.471228 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4b203558-1aea-4672-871f-d2dca324a585-horizon-secret-key\") pod 
\"horizon-6f6cd65c74-7vqb5\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.471991 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b203558-1aea-4672-871f-d2dca324a585-scripts\") pod \"horizon-6f6cd65c74-7vqb5\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.479360 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b203558-1aea-4672-871f-d2dca324a585-horizon-tls-certs\") pod \"horizon-6f6cd65c74-7vqb5\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.490790 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddkc4\" (UniqueName: \"kubernetes.io/projected/4b203558-1aea-4672-871f-d2dca324a585-kube-api-access-ddkc4\") pod \"horizon-6f6cd65c74-7vqb5\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.563197 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/378be2ad-3335-409f-b2eb-60b3997ed4f8-logs\") pod \"horizon-67f45f8444-g8bbs\" (UID: \"378be2ad-3335-409f-b2eb-60b3997ed4f8\") " pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.563289 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvsbb\" (UniqueName: \"kubernetes.io/projected/378be2ad-3335-409f-b2eb-60b3997ed4f8-kube-api-access-qvsbb\") pod \"horizon-67f45f8444-g8bbs\" (UID: \"378be2ad-3335-409f-b2eb-60b3997ed4f8\") " pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.563335 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/378be2ad-3335-409f-b2eb-60b3997ed4f8-horizon-tls-certs\") pod \"horizon-67f45f8444-g8bbs\" (UID: \"378be2ad-3335-409f-b2eb-60b3997ed4f8\") " pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.563383 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/378be2ad-3335-409f-b2eb-60b3997ed4f8-config-data\") pod \"horizon-67f45f8444-g8bbs\" (UID: \"378be2ad-3335-409f-b2eb-60b3997ed4f8\") " pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.563412 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/378be2ad-3335-409f-b2eb-60b3997ed4f8-horizon-secret-key\") pod \"horizon-67f45f8444-g8bbs\" (UID: \"378be2ad-3335-409f-b2eb-60b3997ed4f8\") " pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.563437 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/378be2ad-3335-409f-b2eb-60b3997ed4f8-combined-ca-bundle\") pod \"horizon-67f45f8444-g8bbs\" (UID: \"378be2ad-3335-409f-b2eb-60b3997ed4f8\") " pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:44 crc 
kubenswrapper[4624]: I1008 14:40:44.563489 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/378be2ad-3335-409f-b2eb-60b3997ed4f8-scripts\") pod \"horizon-67f45f8444-g8bbs\" (UID: \"378be2ad-3335-409f-b2eb-60b3997ed4f8\") " pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.564488 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/378be2ad-3335-409f-b2eb-60b3997ed4f8-scripts\") pod \"horizon-67f45f8444-g8bbs\" (UID: \"378be2ad-3335-409f-b2eb-60b3997ed4f8\") " pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.567191 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/378be2ad-3335-409f-b2eb-60b3997ed4f8-logs\") pod \"horizon-67f45f8444-g8bbs\" (UID: \"378be2ad-3335-409f-b2eb-60b3997ed4f8\") " pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.570402 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/378be2ad-3335-409f-b2eb-60b3997ed4f8-config-data\") pod \"horizon-67f45f8444-g8bbs\" (UID: \"378be2ad-3335-409f-b2eb-60b3997ed4f8\") " pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.574598 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/378be2ad-3335-409f-b2eb-60b3997ed4f8-horizon-secret-key\") pod \"horizon-67f45f8444-g8bbs\" (UID: \"378be2ad-3335-409f-b2eb-60b3997ed4f8\") " pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.575085 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/378be2ad-3335-409f-b2eb-60b3997ed4f8-horizon-tls-certs\") pod \"horizon-67f45f8444-g8bbs\" (UID: \"378be2ad-3335-409f-b2eb-60b3997ed4f8\") " pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.584773 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/378be2ad-3335-409f-b2eb-60b3997ed4f8-combined-ca-bundle\") pod \"horizon-67f45f8444-g8bbs\" (UID: \"378be2ad-3335-409f-b2eb-60b3997ed4f8\") " pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.585962 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvsbb\" (UniqueName: \"kubernetes.io/projected/378be2ad-3335-409f-b2eb-60b3997ed4f8-kube-api-access-qvsbb\") pod \"horizon-67f45f8444-g8bbs\" (UID: \"378be2ad-3335-409f-b2eb-60b3997ed4f8\") " pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.610778 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:40:44 crc kubenswrapper[4624]: I1008 14:40:44.718615 4624 util.go:30] "No sandbox for pod can be found. 
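Every MountVolume.SetUp above materializes under a predictable directory, the same /var/lib/kubelet/pods/<uid>/volumes tree that the "Cleaned up orphaned pod volumes dir" line removes after deletion. A sketch of that layout, assuming the usual rule that the "/" in a plugin name becomes "~" on disk:

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// podVolumeDir sketches where a mounted volume lands on the node's disk.
func podVolumeDir(podUID, pluginName, volumeName string) string {
	escaped := strings.ReplaceAll(pluginName, "/", "~") // kubernetes.io/configmap -> kubernetes.io~configmap
	return filepath.Join("/var/lib/kubelet/pods", podUID, "volumes", escaped, volumeName)
}

func main() {
	// The "scripts" configmap volume mounted for horizon-67f45f8444-g8bbs above.
	fmt.Println(podVolumeDir(
		"378be2ad-3335-409f-b2eb-60b3997ed4f8",
		"kubernetes.io/configmap",
		"scripts",
	))
	// => /var/lib/kubelet/pods/378be2ad-3335-409f-b2eb-60b3997ed4f8/volumes/kubernetes.io~configmap/scripts
}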
Need to start a new one" pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:40:45 crc kubenswrapper[4624]: I1008 14:40:45.242841 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d44547b9c-2v5ck" Oct 08 14:40:45 crc kubenswrapper[4624]: I1008 14:40:45.307686 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64d796cf9-v2x9k"] Oct 08 14:40:45 crc kubenswrapper[4624]: I1008 14:40:45.308419 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" podUID="e26e9ed3-4063-4daf-bdfc-84c096ce4569" containerName="dnsmasq-dns" containerID="cri-o://74c14307dd2c5bea336ff0746f37b2d818f97a2b87a521a88f58f2015a34829a" gracePeriod=10 Oct 08 14:40:45 crc kubenswrapper[4624]: I1008 14:40:45.654009 4624 generic.go:334] "Generic (PLEG): container finished" podID="d867582d-bc6b-4693-b2db-1d5a8142f844" containerID="386ea90a8c50cfec2dd3992a5e652a3eca289efa6de448db18b24b337cc9f503" exitCode=0 Oct 08 14:40:45 crc kubenswrapper[4624]: I1008 14:40:45.654368 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w2w8t" event={"ID":"d867582d-bc6b-4693-b2db-1d5a8142f844","Type":"ContainerDied","Data":"386ea90a8c50cfec2dd3992a5e652a3eca289efa6de448db18b24b337cc9f503"} Oct 08 14:40:45 crc kubenswrapper[4624]: I1008 14:40:45.677914 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" event={"ID":"e26e9ed3-4063-4daf-bdfc-84c096ce4569","Type":"ContainerDied","Data":"74c14307dd2c5bea336ff0746f37b2d818f97a2b87a521a88f58f2015a34829a"} Oct 08 14:40:45 crc kubenswrapper[4624]: I1008 14:40:45.687460 4624 generic.go:334] "Generic (PLEG): container finished" podID="e26e9ed3-4063-4daf-bdfc-84c096ce4569" containerID="74c14307dd2c5bea336ff0746f37b2d818f97a2b87a521a88f58f2015a34829a" exitCode=0 Oct 08 14:40:50 crc kubenswrapper[4624]: I1008 14:40:50.183010 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" podUID="e26e9ed3-4063-4daf-bdfc-84c096ce4569" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused" Oct 08 14:40:53 crc kubenswrapper[4624]: E1008 14:40:53.681699 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-horizon:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:40:53 crc kubenswrapper[4624]: E1008 14:40:53.682327 4624 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-horizon:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:40:53 crc kubenswrapper[4624]: E1008 14:40:53.682487 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.103:5001/podified-antelope-centos9/openstack-horizon:b78cfc68a577b1553523c8a70a34e297,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc7h645h576h77h54ch5bch57fh54bh5b9h544h644h89h54h668h4h567h67ch56h5d7h58hc4h88h585h5h65dh58h689h4h7dh665h666h8fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vw77z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5bd55fb965-sh6wc_openstack(bd54784e-4e69-41ae-a216-2adc870ebc63): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 14:40:53 crc kubenswrapper[4624]: E1008 14:40:53.698864 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.103:5001/podified-antelope-centos9/openstack-horizon:b78cfc68a577b1553523c8a70a34e297\\\"\"]" pod="openstack/horizon-5bd55fb965-sh6wc" podUID="bd54784e-4e69-41ae-a216-2adc870ebc63" Oct 08 14:41:00 crc kubenswrapper[4624]: I1008 14:41:00.182794 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" podUID="e26e9ed3-4063-4daf-bdfc-84c096ce4569" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: i/o timeout" Oct 08 14:41:05 crc kubenswrapper[4624]: I1008 14:41:05.184212 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" podUID="e26e9ed3-4063-4daf-bdfc-84c096ce4569" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: i/o timeout" Oct 08 14:41:05 crc kubenswrapper[4624]: I1008 14:41:05.184863 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" Oct 08 14:41:10 crc kubenswrapper[4624]: I1008 14:41:10.185143 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" podUID="e26e9ed3-4063-4daf-bdfc-84c096ce4569" containerName="dnsmasq-dns" probeResult="failure" 
output="dial tcp 10.217.0.116:5353: i/o timeout" Oct 08 14:41:11 crc kubenswrapper[4624]: E1008 14:41:11.775141 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-glance-api:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:41:11 crc kubenswrapper[4624]: E1008 14:41:11.775208 4624 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-glance-api:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:41:11 crc kubenswrapper[4624]: E1008 14:41:11.775421 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:38.102.83.103:5001/podified-antelope-centos9/openstack-glance-api:b78cfc68a577b1553523c8a70a34e297,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vdrdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-t4www_openstack(7222761d-c17c-485d-a672-75d7921fbb20): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 14:41:11 crc kubenswrapper[4624]: E1008 14:41:11.776607 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-t4www" podUID="7222761d-c17c-485d-a672-75d7921fbb20" Oct 08 14:41:11 crc kubenswrapper[4624]: E1008 14:41:11.912062 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"38.102.83.103:5001/podified-antelope-centos9/openstack-glance-api:b78cfc68a577b1553523c8a70a34e297\\\"\"" pod="openstack/glance-db-sync-t4www" podUID="7222761d-c17c-485d-a672-75d7921fbb20" Oct 08 14:41:13 crc kubenswrapper[4624]: E1008 14:41:13.931541 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-placement-api:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:41:13 crc kubenswrapper[4624]: E1008 14:41:13.931796 4624 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-placement-api:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:41:13 crc kubenswrapper[4624]: E1008 14:41:13.931939 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:38.102.83.103:5001/podified-antelope-centos9/openstack-placement-api:b78cfc68a577b1553523c8a70a34e297,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cm49k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-xkc5w_openstack(a0efe065-02ed-472d-b560-6ddfcee095c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 14:41:13 crc kubenswrapper[4624]: E1008 14:41:13.933126 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/placement-db-sync-xkc5w" podUID="a0efe065-02ed-472d-b560-6ddfcee095c4" Oct 08 14:41:14 crc kubenswrapper[4624]: E1008 14:41:14.938056 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.103:5001/podified-antelope-centos9/openstack-placement-api:b78cfc68a577b1553523c8a70a34e297\\\"\"" pod="openstack/placement-db-sync-xkc5w" podUID="a0efe065-02ed-472d-b560-6ddfcee095c4" Oct 08 14:41:15 crc kubenswrapper[4624]: I1008 14:41:15.187332 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" podUID="e26e9ed3-4063-4daf-bdfc-84c096ce4569" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: i/o timeout" Oct 08 14:41:20 crc kubenswrapper[4624]: I1008 14:41:20.187490 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" podUID="e26e9ed3-4063-4daf-bdfc-84c096ce4569" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: i/o timeout" Oct 08 14:41:20 crc kubenswrapper[4624]: E1008 14:41:20.552986 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-horizon:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:41:20 crc kubenswrapper[4624]: E1008 14:41:20.553077 4624 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-horizon:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:41:20 crc kubenswrapper[4624]: E1008 14:41:20.553237 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.103:5001/podified-antelope-centos9/openstack-horizon:b78cfc68a577b1553523c8a70a34e297,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5fbh567h58dh5dch54bh56dh598h577h57bhbfh68dhc5h58h9bh5b8h694hbbh577h545h5dbhf6h659h699hcbhb8h97h64h6bh576h649h578h567q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qhv2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-b94dcd745-2cb6d_openstack(98ca78d1-e9ac-4556-99fa-f012b4461b9a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 14:41:20 crc kubenswrapper[4624]: E1008 14:41:20.556614 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.103:5001/podified-antelope-centos9/openstack-horizon:b78cfc68a577b1553523c8a70a34e297\\\"\"]" pod="openstack/horizon-b94dcd745-2cb6d" podUID="98ca78d1-e9ac-4556-99fa-f012b4461b9a" Oct 08 14:41:20 crc kubenswrapper[4624]: E1008 14:41:20.566745 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-horizon:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:41:20 crc kubenswrapper[4624]: E1008 14:41:20.566821 4624 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-horizon:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:41:20 crc kubenswrapper[4624]: E1008 14:41:20.567002 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.103:5001/podified-antelope-centos9/openstack-horizon:b78cfc68a577b1553523c8a70a34e297,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n685h585h6dh69hdchf4h5c7h568h4h56h5dfh67dh7bh66fh77h5f7h54ch79h64h677h58bh669h56dh54bh6fh64dh559h676h58dh596h695h58q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pl9hx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-755547f977-wlvph_openstack(2b7275cd-f4ee-4888-955e-abcce7337089): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 14:41:20 crc kubenswrapper[4624]: E1008 14:41:20.569278 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.103:5001/podified-antelope-centos9/openstack-horizon:b78cfc68a577b1553523c8a70a34e297\\\"\"]" pod="openstack/horizon-755547f977-wlvph" podUID="2b7275cd-f4ee-4888-955e-abcce7337089" Oct 08 14:41:25 crc kubenswrapper[4624]: I1008 14:41:25.188394 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" podUID="e26e9ed3-4063-4daf-bdfc-84c096ce4569" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: i/o timeout" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.620804 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w2w8t" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.657275 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.658482 4624 util.go:48] "No ready sandbox for pod can be found. 
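The repeating ErrImagePull / ImagePullBackOff pairs above are the kubelet's image back-off at work: after a pull fails, subsequent pod syncs log "Back-off pulling image ..." until a doubling delay expires, and only then is the registry contacted again. A sketch of that schedule, assuming the common 10s initial delay and 5m cap rather than values read from this cluster:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second        // assumed initial back-off
	const maxDelay = 5 * time.Minute // assumed cap

	for attempt := 1; attempt <= 5; attempt++ {
		// While the delay runs, syncs fail fast with ImagePullBackOff
		// instead of retrying the pull.
		fmt.Printf("pull attempt %d failed; ImagePullBackOff for %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}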
Need to start a new one" pod="openstack/horizon-5bd55fb965-sh6wc" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.799330 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-config-data\") pod \"d867582d-bc6b-4693-b2db-1d5a8142f844\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.799383 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-credential-keys\") pod \"d867582d-bc6b-4693-b2db-1d5a8142f844\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.799470 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsg6g\" (UniqueName: \"kubernetes.io/projected/e26e9ed3-4063-4daf-bdfc-84c096ce4569-kube-api-access-tsg6g\") pod \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\" (UID: \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\") " Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.799504 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-fernet-keys\") pod \"d867582d-bc6b-4693-b2db-1d5a8142f844\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.799543 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-scripts\") pod \"d867582d-bc6b-4693-b2db-1d5a8142f844\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.799581 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpxhw\" (UniqueName: \"kubernetes.io/projected/d867582d-bc6b-4693-b2db-1d5a8142f844-kube-api-access-jpxhw\") pod \"d867582d-bc6b-4693-b2db-1d5a8142f844\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.799603 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-combined-ca-bundle\") pod \"d867582d-bc6b-4693-b2db-1d5a8142f844\" (UID: \"d867582d-bc6b-4693-b2db-1d5a8142f844\") " Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.799622 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw77z\" (UniqueName: \"kubernetes.io/projected/bd54784e-4e69-41ae-a216-2adc870ebc63-kube-api-access-vw77z\") pod \"bd54784e-4e69-41ae-a216-2adc870ebc63\" (UID: \"bd54784e-4e69-41ae-a216-2adc870ebc63\") " Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.799697 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd54784e-4e69-41ae-a216-2adc870ebc63-scripts\") pod \"bd54784e-4e69-41ae-a216-2adc870ebc63\" (UID: \"bd54784e-4e69-41ae-a216-2adc870ebc63\") " Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.799727 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd54784e-4e69-41ae-a216-2adc870ebc63-config-data\") pod \"bd54784e-4e69-41ae-a216-2adc870ebc63\" (UID: 
\"bd54784e-4e69-41ae-a216-2adc870ebc63\") " Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.799751 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-config\") pod \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\" (UID: \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\") " Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.799777 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd54784e-4e69-41ae-a216-2adc870ebc63-logs\") pod \"bd54784e-4e69-41ae-a216-2adc870ebc63\" (UID: \"bd54784e-4e69-41ae-a216-2adc870ebc63\") " Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.799798 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-ovsdbserver-nb\") pod \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\" (UID: \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\") " Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.799825 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-ovsdbserver-sb\") pod \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\" (UID: \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\") " Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.799854 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd54784e-4e69-41ae-a216-2adc870ebc63-horizon-secret-key\") pod \"bd54784e-4e69-41ae-a216-2adc870ebc63\" (UID: \"bd54784e-4e69-41ae-a216-2adc870ebc63\") " Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.799876 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-dns-svc\") pod \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\" (UID: \"e26e9ed3-4063-4daf-bdfc-84c096ce4569\") " Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.800477 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd54784e-4e69-41ae-a216-2adc870ebc63-scripts" (OuterVolumeSpecName: "scripts") pod "bd54784e-4e69-41ae-a216-2adc870ebc63" (UID: "bd54784e-4e69-41ae-a216-2adc870ebc63"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.806796 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e26e9ed3-4063-4daf-bdfc-84c096ce4569-kube-api-access-tsg6g" (OuterVolumeSpecName: "kube-api-access-tsg6g") pod "e26e9ed3-4063-4daf-bdfc-84c096ce4569" (UID: "e26e9ed3-4063-4daf-bdfc-84c096ce4569"). InnerVolumeSpecName "kube-api-access-tsg6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.806863 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd54784e-4e69-41ae-a216-2adc870ebc63-kube-api-access-vw77z" (OuterVolumeSpecName: "kube-api-access-vw77z") pod "bd54784e-4e69-41ae-a216-2adc870ebc63" (UID: "bd54784e-4e69-41ae-a216-2adc870ebc63"). InnerVolumeSpecName "kube-api-access-vw77z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.807171 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd54784e-4e69-41ae-a216-2adc870ebc63-logs" (OuterVolumeSpecName: "logs") pod "bd54784e-4e69-41ae-a216-2adc870ebc63" (UID: "bd54784e-4e69-41ae-a216-2adc870ebc63"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.807702 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd54784e-4e69-41ae-a216-2adc870ebc63-config-data" (OuterVolumeSpecName: "config-data") pod "bd54784e-4e69-41ae-a216-2adc870ebc63" (UID: "bd54784e-4e69-41ae-a216-2adc870ebc63"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.811536 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d867582d-bc6b-4693-b2db-1d5a8142f844-kube-api-access-jpxhw" (OuterVolumeSpecName: "kube-api-access-jpxhw") pod "d867582d-bc6b-4693-b2db-1d5a8142f844" (UID: "d867582d-bc6b-4693-b2db-1d5a8142f844"). InnerVolumeSpecName "kube-api-access-jpxhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.820791 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d867582d-bc6b-4693-b2db-1d5a8142f844" (UID: "d867582d-bc6b-4693-b2db-1d5a8142f844"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.820840 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d867582d-bc6b-4693-b2db-1d5a8142f844" (UID: "d867582d-bc6b-4693-b2db-1d5a8142f844"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.821066 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-scripts" (OuterVolumeSpecName: "scripts") pod "d867582d-bc6b-4693-b2db-1d5a8142f844" (UID: "d867582d-bc6b-4693-b2db-1d5a8142f844"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.822433 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd54784e-4e69-41ae-a216-2adc870ebc63-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "bd54784e-4e69-41ae-a216-2adc870ebc63" (UID: "bd54784e-4e69-41ae-a216-2adc870ebc63"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.852233 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-config-data" (OuterVolumeSpecName: "config-data") pod "d867582d-bc6b-4693-b2db-1d5a8142f844" (UID: "d867582d-bc6b-4693-b2db-1d5a8142f844"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.866444 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d867582d-bc6b-4693-b2db-1d5a8142f844" (UID: "d867582d-bc6b-4693-b2db-1d5a8142f844"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.878092 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e26e9ed3-4063-4daf-bdfc-84c096ce4569" (UID: "e26e9ed3-4063-4daf-bdfc-84c096ce4569"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.883071 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-config" (OuterVolumeSpecName: "config") pod "e26e9ed3-4063-4daf-bdfc-84c096ce4569" (UID: "e26e9ed3-4063-4daf-bdfc-84c096ce4569"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.885284 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e26e9ed3-4063-4daf-bdfc-84c096ce4569" (UID: "e26e9ed3-4063-4daf-bdfc-84c096ce4569"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.894281 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e26e9ed3-4063-4daf-bdfc-84c096ce4569" (UID: "e26e9ed3-4063-4daf-bdfc-84c096ce4569"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.901461 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsg6g\" (UniqueName: \"kubernetes.io/projected/e26e9ed3-4063-4daf-bdfc-84c096ce4569-kube-api-access-tsg6g\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.901493 4624 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.901502 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.901512 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpxhw\" (UniqueName: \"kubernetes.io/projected/d867582d-bc6b-4693-b2db-1d5a8142f844-kube-api-access-jpxhw\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.901523 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.901537 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw77z\" (UniqueName: \"kubernetes.io/projected/bd54784e-4e69-41ae-a216-2adc870ebc63-kube-api-access-vw77z\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.901550 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd54784e-4e69-41ae-a216-2adc870ebc63-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.901560 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd54784e-4e69-41ae-a216-2adc870ebc63-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.901569 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.901577 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd54784e-4e69-41ae-a216-2adc870ebc63-logs\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.901584 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.901592 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.901601 4624 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd54784e-4e69-41ae-a216-2adc870ebc63-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:29 crc kubenswrapper[4624]: 
I1008 14:41:29.901608 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e26e9ed3-4063-4daf-bdfc-84c096ce4569-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.901616 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.901623 4624 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d867582d-bc6b-4693-b2db-1d5a8142f844-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:29 crc kubenswrapper[4624]: I1008 14:41:29.992306 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f6cd65c74-7vqb5"] Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.076375 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w2w8t" Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.076393 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w2w8t" event={"ID":"d867582d-bc6b-4693-b2db-1d5a8142f844","Type":"ContainerDied","Data":"e29731c96e836cc693de623a8d4531756ee43d5c08c78ac24eb3a8c15ecc2759"} Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.076839 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e29731c96e836cc693de623a8d4531756ee43d5c08c78ac24eb3a8c15ecc2759" Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.090699 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" event={"ID":"e26e9ed3-4063-4daf-bdfc-84c096ce4569","Type":"ContainerDied","Data":"9f12867599305419b5b6217bb0b1f448ce79b3078cb5dddabe91f4a39cc91937"} Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.090741 4624 scope.go:117] "RemoveContainer" containerID="74c14307dd2c5bea336ff0746f37b2d818f97a2b87a521a88f58f2015a34829a" Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.090868 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.098787 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bd55fb965-sh6wc" event={"ID":"bd54784e-4e69-41ae-a216-2adc870ebc63","Type":"ContainerDied","Data":"20619dec043725e959beb5f139dfd8f868a20d5a99b59a06b08d90db15ee20c8"} Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.099146 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5bd55fb965-sh6wc" Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.128003 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64d796cf9-v2x9k"] Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.138490 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64d796cf9-v2x9k"] Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.189775 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-64d796cf9-v2x9k" podUID="e26e9ed3-4063-4daf-bdfc-84c096ce4569" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: i/o timeout" Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.196657 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5bd55fb965-sh6wc"] Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.208413 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5bd55fb965-sh6wc"] Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.868854 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-w2w8t"] Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.881708 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-w2w8t"] Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.938439 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-tg77p"] Oct 08 14:41:30 crc kubenswrapper[4624]: E1008 14:41:30.938866 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e26e9ed3-4063-4daf-bdfc-84c096ce4569" containerName="dnsmasq-dns" Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.938887 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e26e9ed3-4063-4daf-bdfc-84c096ce4569" containerName="dnsmasq-dns" Oct 08 14:41:30 crc kubenswrapper[4624]: E1008 14:41:30.938906 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d867582d-bc6b-4693-b2db-1d5a8142f844" containerName="keystone-bootstrap" Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.938914 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d867582d-bc6b-4693-b2db-1d5a8142f844" containerName="keystone-bootstrap" Oct 08 14:41:30 crc kubenswrapper[4624]: E1008 14:41:30.938939 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e26e9ed3-4063-4daf-bdfc-84c096ce4569" containerName="init" Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.938946 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e26e9ed3-4063-4daf-bdfc-84c096ce4569" containerName="init" Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.939133 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="d867582d-bc6b-4693-b2db-1d5a8142f844" containerName="keystone-bootstrap" Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.939164 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e26e9ed3-4063-4daf-bdfc-84c096ce4569" containerName="dnsmasq-dns" Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.940591 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tg77p" Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.953565 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.953996 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.954117 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.954291 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r9xgf" Oct 08 14:41:30 crc kubenswrapper[4624]: I1008 14:41:30.959552 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tg77p"] Oct 08 14:41:31 crc kubenswrapper[4624]: I1008 14:41:31.073289 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-combined-ca-bundle\") pod \"keystone-bootstrap-tg77p\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " pod="openstack/keystone-bootstrap-tg77p" Oct 08 14:41:31 crc kubenswrapper[4624]: I1008 14:41:31.073417 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-scripts\") pod \"keystone-bootstrap-tg77p\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " pod="openstack/keystone-bootstrap-tg77p" Oct 08 14:41:31 crc kubenswrapper[4624]: I1008 14:41:31.073568 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4vd4\" (UniqueName: \"kubernetes.io/projected/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-kube-api-access-w4vd4\") pod \"keystone-bootstrap-tg77p\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " pod="openstack/keystone-bootstrap-tg77p" Oct 08 14:41:31 crc kubenswrapper[4624]: I1008 14:41:31.073793 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-config-data\") pod \"keystone-bootstrap-tg77p\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " pod="openstack/keystone-bootstrap-tg77p" Oct 08 14:41:31 crc kubenswrapper[4624]: I1008 14:41:31.073927 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-credential-keys\") pod \"keystone-bootstrap-tg77p\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " pod="openstack/keystone-bootstrap-tg77p" Oct 08 14:41:31 crc kubenswrapper[4624]: I1008 14:41:31.073974 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-fernet-keys\") pod \"keystone-bootstrap-tg77p\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " pod="openstack/keystone-bootstrap-tg77p" Oct 08 14:41:31 crc kubenswrapper[4624]: I1008 14:41:31.175334 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-credential-keys\") pod \"keystone-bootstrap-tg77p\" 
(UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " pod="openstack/keystone-bootstrap-tg77p" Oct 08 14:41:31 crc kubenswrapper[4624]: I1008 14:41:31.175386 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-fernet-keys\") pod \"keystone-bootstrap-tg77p\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " pod="openstack/keystone-bootstrap-tg77p" Oct 08 14:41:31 crc kubenswrapper[4624]: I1008 14:41:31.175430 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-combined-ca-bundle\") pod \"keystone-bootstrap-tg77p\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " pod="openstack/keystone-bootstrap-tg77p" Oct 08 14:41:31 crc kubenswrapper[4624]: I1008 14:41:31.175464 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-scripts\") pod \"keystone-bootstrap-tg77p\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " pod="openstack/keystone-bootstrap-tg77p" Oct 08 14:41:31 crc kubenswrapper[4624]: I1008 14:41:31.175545 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4vd4\" (UniqueName: \"kubernetes.io/projected/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-kube-api-access-w4vd4\") pod \"keystone-bootstrap-tg77p\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " pod="openstack/keystone-bootstrap-tg77p" Oct 08 14:41:31 crc kubenswrapper[4624]: I1008 14:41:31.175613 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-config-data\") pod \"keystone-bootstrap-tg77p\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " pod="openstack/keystone-bootstrap-tg77p" Oct 08 14:41:31 crc kubenswrapper[4624]: I1008 14:41:31.182060 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-scripts\") pod \"keystone-bootstrap-tg77p\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " pod="openstack/keystone-bootstrap-tg77p" Oct 08 14:41:31 crc kubenswrapper[4624]: I1008 14:41:31.182541 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-credential-keys\") pod \"keystone-bootstrap-tg77p\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " pod="openstack/keystone-bootstrap-tg77p" Oct 08 14:41:31 crc kubenswrapper[4624]: I1008 14:41:31.183097 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-fernet-keys\") pod \"keystone-bootstrap-tg77p\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " pod="openstack/keystone-bootstrap-tg77p" Oct 08 14:41:31 crc kubenswrapper[4624]: I1008 14:41:31.188431 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-combined-ca-bundle\") pod \"keystone-bootstrap-tg77p\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " pod="openstack/keystone-bootstrap-tg77p" Oct 08 14:41:31 crc kubenswrapper[4624]: I1008 14:41:31.192399 4624 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-config-data\") pod \"keystone-bootstrap-tg77p\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " pod="openstack/keystone-bootstrap-tg77p" Oct 08 14:41:31 crc kubenswrapper[4624]: I1008 14:41:31.198112 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4vd4\" (UniqueName: \"kubernetes.io/projected/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-kube-api-access-w4vd4\") pod \"keystone-bootstrap-tg77p\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " pod="openstack/keystone-bootstrap-tg77p" Oct 08 14:41:31 crc kubenswrapper[4624]: I1008 14:41:31.298919 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tg77p" Oct 08 14:41:31 crc kubenswrapper[4624]: I1008 14:41:31.477812 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd54784e-4e69-41ae-a216-2adc870ebc63" path="/var/lib/kubelet/pods/bd54784e-4e69-41ae-a216-2adc870ebc63/volumes" Oct 08 14:41:31 crc kubenswrapper[4624]: I1008 14:41:31.478334 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d867582d-bc6b-4693-b2db-1d5a8142f844" path="/var/lib/kubelet/pods/d867582d-bc6b-4693-b2db-1d5a8142f844/volumes" Oct 08 14:41:31 crc kubenswrapper[4624]: I1008 14:41:31.479259 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e26e9ed3-4063-4daf-bdfc-84c096ce4569" path="/var/lib/kubelet/pods/e26e9ed3-4063-4daf-bdfc-84c096ce4569/volumes" Oct 08 14:41:35 crc kubenswrapper[4624]: E1008 14:41:35.387709 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-ceilometer-central:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:41:35 crc kubenswrapper[4624]: E1008 14:41:35.388071 4624 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-ceilometer-central:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:41:35 crc kubenswrapper[4624]: E1008 14:41:35.388458 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:38.102.83.103:5001/podified-antelope-centos9/openstack-ceilometer-central:b78cfc68a577b1553523c8a70a34e297,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7bh557h8h64h64ch5c9hbch594h65fh56bh67hc9h587h97h67hcfhc7h5b5h67ch7bh59ch54fh5d9h7fhf9h54h8dh567h9dhc8hc8h675q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6x59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(4bf057e0-e090-498a-bb8b-32835373555c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 14:41:35 crc kubenswrapper[4624]: E1008 14:41:35.752744 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-heat-engine:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:41:35 crc kubenswrapper[4624]: E1008 14:41:35.753107 4624 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-heat-engine:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:41:35 crc kubenswrapper[4624]: E1008 14:41:35.753239 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:38.102.83.103:5001/podified-antelope-centos9/openstack-heat-engine:b78cfc68a577b1553523c8a70a34e297,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b9bz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-f6sz5_openstack(265c0058-98f4-4bcd-b413-8e3633ab56cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 14:41:35 crc kubenswrapper[4624]: E1008 14:41:35.755004 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-f6sz5" podUID="265c0058-98f4-4bcd-b413-8e3633ab56cd" Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.822492 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b94dcd745-2cb6d" Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.840723 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-755547f977-wlvph" Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.851478 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhv2w\" (UniqueName: \"kubernetes.io/projected/98ca78d1-e9ac-4556-99fa-f012b4461b9a-kube-api-access-qhv2w\") pod \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\" (UID: \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\") " Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.851552 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/98ca78d1-e9ac-4556-99fa-f012b4461b9a-config-data\") pod \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\" (UID: \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\") " Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.851589 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98ca78d1-e9ac-4556-99fa-f012b4461b9a-scripts\") pod \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\" (UID: \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\") " Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.851616 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl9hx\" (UniqueName: \"kubernetes.io/projected/2b7275cd-f4ee-4888-955e-abcce7337089-kube-api-access-pl9hx\") pod \"2b7275cd-f4ee-4888-955e-abcce7337089\" (UID: \"2b7275cd-f4ee-4888-955e-abcce7337089\") " Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.851657 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98ca78d1-e9ac-4556-99fa-f012b4461b9a-logs\") pod \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\" (UID: \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\") " Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.851682 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b7275cd-f4ee-4888-955e-abcce7337089-logs\") pod \"2b7275cd-f4ee-4888-955e-abcce7337089\" (UID: \"2b7275cd-f4ee-4888-955e-abcce7337089\") " Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.851723 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b7275cd-f4ee-4888-955e-abcce7337089-scripts\") pod \"2b7275cd-f4ee-4888-955e-abcce7337089\" (UID: \"2b7275cd-f4ee-4888-955e-abcce7337089\") " Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.851745 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b7275cd-f4ee-4888-955e-abcce7337089-horizon-secret-key\") pod \"2b7275cd-f4ee-4888-955e-abcce7337089\" (UID: \"2b7275cd-f4ee-4888-955e-abcce7337089\") " Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.851784 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b7275cd-f4ee-4888-955e-abcce7337089-config-data\") pod \"2b7275cd-f4ee-4888-955e-abcce7337089\" (UID: \"2b7275cd-f4ee-4888-955e-abcce7337089\") " Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.851807 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/98ca78d1-e9ac-4556-99fa-f012b4461b9a-horizon-secret-key\") pod \"98ca78d1-e9ac-4556-99fa-f012b4461b9a\" (UID: 
\"98ca78d1-e9ac-4556-99fa-f012b4461b9a\") " Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.852056 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98ca78d1-e9ac-4556-99fa-f012b4461b9a-logs" (OuterVolumeSpecName: "logs") pod "98ca78d1-e9ac-4556-99fa-f012b4461b9a" (UID: "98ca78d1-e9ac-4556-99fa-f012b4461b9a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.854039 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b7275cd-f4ee-4888-955e-abcce7337089-logs" (OuterVolumeSpecName: "logs") pod "2b7275cd-f4ee-4888-955e-abcce7337089" (UID: "2b7275cd-f4ee-4888-955e-abcce7337089"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.854287 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b7275cd-f4ee-4888-955e-abcce7337089-scripts" (OuterVolumeSpecName: "scripts") pod "2b7275cd-f4ee-4888-955e-abcce7337089" (UID: "2b7275cd-f4ee-4888-955e-abcce7337089"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.855068 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b7275cd-f4ee-4888-955e-abcce7337089-config-data" (OuterVolumeSpecName: "config-data") pod "2b7275cd-f4ee-4888-955e-abcce7337089" (UID: "2b7275cd-f4ee-4888-955e-abcce7337089"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.855253 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98ca78d1-e9ac-4556-99fa-f012b4461b9a-scripts" (OuterVolumeSpecName: "scripts") pod "98ca78d1-e9ac-4556-99fa-f012b4461b9a" (UID: "98ca78d1-e9ac-4556-99fa-f012b4461b9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.855334 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98ca78d1-e9ac-4556-99fa-f012b4461b9a-config-data" (OuterVolumeSpecName: "config-data") pod "98ca78d1-e9ac-4556-99fa-f012b4461b9a" (UID: "98ca78d1-e9ac-4556-99fa-f012b4461b9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.858689 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b7275cd-f4ee-4888-955e-abcce7337089-kube-api-access-pl9hx" (OuterVolumeSpecName: "kube-api-access-pl9hx") pod "2b7275cd-f4ee-4888-955e-abcce7337089" (UID: "2b7275cd-f4ee-4888-955e-abcce7337089"). InnerVolumeSpecName "kube-api-access-pl9hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.858963 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98ca78d1-e9ac-4556-99fa-f012b4461b9a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "98ca78d1-e9ac-4556-99fa-f012b4461b9a" (UID: "98ca78d1-e9ac-4556-99fa-f012b4461b9a"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.861755 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7275cd-f4ee-4888-955e-abcce7337089-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2b7275cd-f4ee-4888-955e-abcce7337089" (UID: "2b7275cd-f4ee-4888-955e-abcce7337089"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.862319 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98ca78d1-e9ac-4556-99fa-f012b4461b9a-kube-api-access-qhv2w" (OuterVolumeSpecName: "kube-api-access-qhv2w") pod "98ca78d1-e9ac-4556-99fa-f012b4461b9a" (UID: "98ca78d1-e9ac-4556-99fa-f012b4461b9a"). InnerVolumeSpecName "kube-api-access-qhv2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.954057 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhv2w\" (UniqueName: \"kubernetes.io/projected/98ca78d1-e9ac-4556-99fa-f012b4461b9a-kube-api-access-qhv2w\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.954128 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/98ca78d1-e9ac-4556-99fa-f012b4461b9a-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.954146 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98ca78d1-e9ac-4556-99fa-f012b4461b9a-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.954183 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl9hx\" (UniqueName: \"kubernetes.io/projected/2b7275cd-f4ee-4888-955e-abcce7337089-kube-api-access-pl9hx\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.954213 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98ca78d1-e9ac-4556-99fa-f012b4461b9a-logs\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.954224 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b7275cd-f4ee-4888-955e-abcce7337089-logs\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.954234 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b7275cd-f4ee-4888-955e-abcce7337089-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.954244 4624 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b7275cd-f4ee-4888-955e-abcce7337089-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.954252 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b7275cd-f4ee-4888-955e-abcce7337089-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:35 crc kubenswrapper[4624]: I1008 14:41:35.954259 4624 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/98ca78d1-e9ac-4556-99fa-f012b4461b9a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:36 crc kubenswrapper[4624]: I1008 14:41:36.148165 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-755547f977-wlvph" Oct 08 14:41:36 crc kubenswrapper[4624]: I1008 14:41:36.155754 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-755547f977-wlvph" event={"ID":"2b7275cd-f4ee-4888-955e-abcce7337089","Type":"ContainerDied","Data":"87811cf100eca82be5e359e186aab31026d4432a00f82c38513d142b151c184e"} Oct 08 14:41:36 crc kubenswrapper[4624]: I1008 14:41:36.157051 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b94dcd745-2cb6d" Oct 08 14:41:36 crc kubenswrapper[4624]: I1008 14:41:36.157076 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b94dcd745-2cb6d" event={"ID":"98ca78d1-e9ac-4556-99fa-f012b4461b9a","Type":"ContainerDied","Data":"7f73f7cbfd403eb23a515c91dba54168bd39509de2c2b3bd1a8845076f6a8860"} Oct 08 14:41:36 crc kubenswrapper[4624]: E1008 14:41:36.159320 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.103:5001/podified-antelope-centos9/openstack-heat-engine:b78cfc68a577b1553523c8a70a34e297\\\"\"" pod="openstack/heat-db-sync-f6sz5" podUID="265c0058-98f4-4bcd-b413-8e3633ab56cd" Oct 08 14:41:36 crc kubenswrapper[4624]: I1008 14:41:36.258829 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b94dcd745-2cb6d"] Oct 08 14:41:36 crc kubenswrapper[4624]: I1008 14:41:36.276560 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-b94dcd745-2cb6d"] Oct 08 14:41:36 crc kubenswrapper[4624]: I1008 14:41:36.291326 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-755547f977-wlvph"] Oct 08 14:41:36 crc kubenswrapper[4624]: I1008 14:41:36.297449 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-755547f977-wlvph"] Oct 08 14:41:36 crc kubenswrapper[4624]: I1008 14:41:36.987158 4624 scope.go:117] "RemoveContainer" containerID="990909d108f36fa77dd328bbbffcca7a978fc009e2a0fc6a375f5af5079b7911" Oct 08 14:41:37 crc kubenswrapper[4624]: E1008 14:41:37.022687 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-cinder-api:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:41:37 crc kubenswrapper[4624]: E1008 14:41:37.022743 4624 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-cinder-api:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:41:37 crc kubenswrapper[4624]: E1008 14:41:37.022859 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.103:5001/podified-antelope-centos9/openstack-cinder-api:b78cfc68a577b1553523c8a70a34e297,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bqvv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-hv6dc_openstack(cd53e103-7b25-4f61-a0f4-675ace133ab7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 14:41:37 crc kubenswrapper[4624]: E1008 14:41:37.024298 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-hv6dc" podUID="cd53e103-7b25-4f61-a0f4-675ace133ab7" Oct 08 14:41:37 crc kubenswrapper[4624]: I1008 14:41:37.187618 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6cd65c74-7vqb5" event={"ID":"4b203558-1aea-4672-871f-d2dca324a585","Type":"ContainerStarted","Data":"25f94ae97b15469d5ded389dc2d8affffb353cd234649fb67ffa2e9b39041e98"} Oct 08 14:41:37 crc kubenswrapper[4624]: E1008 14:41:37.214699 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.103:5001/podified-antelope-centos9/openstack-cinder-api:b78cfc68a577b1553523c8a70a34e297\\\"\"" pod="openstack/cinder-db-sync-hv6dc" podUID="cd53e103-7b25-4f61-a0f4-675ace133ab7" Oct 08 14:41:37 crc kubenswrapper[4624]: I1008 14:41:37.498668 4624 
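The ceilometer-central-agent, heat-db-sync and cinder-db-sync containers above all fail the same way: the CRI pull ends with `rpc error: code = Canceled desc = copying config: context canceled`, kubelet surfaces `ErrImagePull`, and the retries then appear as `ImagePullBackOff`. A context-canceled pull usually means the transfer was aborted (for example by a deadline or a concurrent cancellation) rather than refused by the registry, which fits the successful pulls from the same 38.102.83.103:5001 registry elsewhere in this window. To tally the affected images and pods, a minimal sketch under the same hypothetical-log-file assumption:

```python
import re
from collections import Counter

# kuberuntime_image.go names the failing ref on "Failed to pull image";
# pod_workers.go names the pod on "Error syncing pod, skipping".
PULL_FAIL = re.compile(r'"Failed to pull image".*image="([^"]+)"')
SYNC_ERR = re.compile(r'"Error syncing pod, skipping".*pod="([^"]+)"')

failed_images, failing_pods = Counter(), Counter()
with open("kubelet.log") as f:  # hypothetical export of this journal
    for line in f:
        if m := PULL_FAIL.search(line):
            failed_images[m.group(1)] += 1
        if m := SYNC_ERR.search(line):
            failing_pods[m.group(1)] += 1

print("pull failures per image:", dict(failed_images))
print("sync errors per pod:", dict(failing_pods))
```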
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b7275cd-f4ee-4888-955e-abcce7337089" path="/var/lib/kubelet/pods/2b7275cd-f4ee-4888-955e-abcce7337089/volumes" Oct 08 14:41:37 crc kubenswrapper[4624]: I1008 14:41:37.502808 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98ca78d1-e9ac-4556-99fa-f012b4461b9a" path="/var/lib/kubelet/pods/98ca78d1-e9ac-4556-99fa-f012b4461b9a/volumes" Oct 08 14:41:37 crc kubenswrapper[4624]: W1008 14:41:37.507823 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod378be2ad_3335_409f_b2eb_60b3997ed4f8.slice/crio-8f3748ac8b31f0880da671539869546875c1cb61f4432f6c52334668a3e3ca63 WatchSource:0}: Error finding container 8f3748ac8b31f0880da671539869546875c1cb61f4432f6c52334668a3e3ca63: Status 404 returned error can't find the container with id 8f3748ac8b31f0880da671539869546875c1cb61f4432f6c52334668a3e3ca63 Oct 08 14:41:37 crc kubenswrapper[4624]: I1008 14:41:37.507934 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67f45f8444-g8bbs"] Oct 08 14:41:37 crc kubenswrapper[4624]: I1008 14:41:37.737600 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tg77p"] Oct 08 14:41:38 crc kubenswrapper[4624]: W1008 14:41:38.117229 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7b93b6b_3915_45f1_9f70_b186b5e7ed31.slice/crio-c754b9d5d35a0485d2caaa304b14a722a2e8f017c36481774ca9917ae1a4d06d WatchSource:0}: Error finding container c754b9d5d35a0485d2caaa304b14a722a2e8f017c36481774ca9917ae1a4d06d: Status 404 returned error can't find the container with id c754b9d5d35a0485d2caaa304b14a722a2e8f017c36481774ca9917ae1a4d06d Oct 08 14:41:38 crc kubenswrapper[4624]: I1008 14:41:38.270189 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tg77p" event={"ID":"c7b93b6b-3915-45f1-9f70-b186b5e7ed31","Type":"ContainerStarted","Data":"c754b9d5d35a0485d2caaa304b14a722a2e8f017c36481774ca9917ae1a4d06d"} Oct 08 14:41:38 crc kubenswrapper[4624]: I1008 14:41:38.273379 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67f45f8444-g8bbs" event={"ID":"378be2ad-3335-409f-b2eb-60b3997ed4f8","Type":"ContainerStarted","Data":"c309778652d0ef5b70dc0ec3ebb17d16dd50e4af9086c25501b0a62efbbd2531"} Oct 08 14:41:38 crc kubenswrapper[4624]: I1008 14:41:38.273427 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67f45f8444-g8bbs" event={"ID":"378be2ad-3335-409f-b2eb-60b3997ed4f8","Type":"ContainerStarted","Data":"8f3748ac8b31f0880da671539869546875c1cb61f4432f6c52334668a3e3ca63"} Oct 08 14:41:38 crc kubenswrapper[4624]: I1008 14:41:38.274804 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-t4www" event={"ID":"7222761d-c17c-485d-a672-75d7921fbb20","Type":"ContainerStarted","Data":"cbeef4aaa873d131e498e5fb3c1570eac0ea1b83b1be4af86ca4727e8e2f03e3"} Oct 08 14:41:38 crc kubenswrapper[4624]: I1008 14:41:38.277029 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lfgwx" event={"ID":"1d554826-ab8b-40f6-9be4-ad2949010968","Type":"ContainerStarted","Data":"9a1f22539bbc52f0ddbbfd0cc3714ee557621e3905749c9563270ac682934197"} Oct 08 14:41:38 crc kubenswrapper[4624]: I1008 14:41:38.284195 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xkc5w" 
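The two cadvisor `Failed to process watch event ... Status 404` warnings above fire when the cgroup watcher notices a new crio-<id> scope before the runtime has finished registering the container. Both IDs (8f3748ac and c754b9d5) show up in ContainerStarted events around this point, so the warnings look like a benign startup race rather than a failure. A sketch that automates that cross-check (same hypothetical log file):

```python
import re

# cadvisor's watch warning carries the cgroup's crio-<id>; PLEG's
# ContainerStarted events carry the same 64-hex id in their Data field.
WATCH_404 = re.compile(r"Failed to process watch event .*?crio-([0-9a-f]{64})")
STARTED = re.compile(r'"ContainerStarted","Data":"([0-9a-f]{64})"')

warned, started = set(), set()
with open("kubelet.log") as f:  # hypothetical export of this journal
    for line in f:
        warned |= set(WATCH_404.findall(line))
        started |= set(STARTED.findall(line))

# An id that was warned about but never started would deserve a closer
# look; in this window both warned ids start shortly afterwards.
print("warned but never started:", sorted(warned - started))
```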
event={"ID":"a0efe065-02ed-472d-b560-6ddfcee095c4","Type":"ContainerStarted","Data":"b74a347620894560b1e82eccb1b9bbc6d9147ead99e26945052b765fd61fe8bb"} Oct 08 14:41:38 crc kubenswrapper[4624]: I1008 14:41:38.307322 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6cd65c74-7vqb5" event={"ID":"4b203558-1aea-4672-871f-d2dca324a585","Type":"ContainerStarted","Data":"6ec33217cf79e738ccd4fb8b5dfdb26af9d5223b52e13ca9693c35de2207761b"} Oct 08 14:41:38 crc kubenswrapper[4624]: I1008 14:41:38.310002 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6cd65c74-7vqb5" event={"ID":"4b203558-1aea-4672-871f-d2dca324a585","Type":"ContainerStarted","Data":"da719f698a783f03a9aeca3f8634dbc83f874d6792863ed431947ce50e36881e"} Oct 08 14:41:38 crc kubenswrapper[4624]: I1008 14:41:38.328468 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-lfgwx" podStartSLOduration=3.518582108 podStartE2EDuration="1m4.328449448s" podCreationTimestamp="2025-10-08 14:40:34 +0000 UTC" firstStartedPulling="2025-10-08 14:40:36.262059276 +0000 UTC m=+1061.412994353" lastFinishedPulling="2025-10-08 14:41:37.071926616 +0000 UTC m=+1122.222861693" observedRunningTime="2025-10-08 14:41:38.32496892 +0000 UTC m=+1123.475903997" watchObservedRunningTime="2025-10-08 14:41:38.328449448 +0000 UTC m=+1123.479384525" Oct 08 14:41:38 crc kubenswrapper[4624]: I1008 14:41:38.329609 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-t4www" podStartSLOduration=3.221374283 podStartE2EDuration="1m9.329604437s" podCreationTimestamp="2025-10-08 14:40:29 +0000 UTC" firstStartedPulling="2025-10-08 14:40:31.029695749 +0000 UTC m=+1056.180630826" lastFinishedPulling="2025-10-08 14:41:37.137925903 +0000 UTC m=+1122.288860980" observedRunningTime="2025-10-08 14:41:38.301445346 +0000 UTC m=+1123.452380423" watchObservedRunningTime="2025-10-08 14:41:38.329604437 +0000 UTC m=+1123.480539514" Oct 08 14:41:38 crc kubenswrapper[4624]: I1008 14:41:38.352209 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-xkc5w" podStartSLOduration=3.799365484 podStartE2EDuration="1m4.352191708s" podCreationTimestamp="2025-10-08 14:40:34 +0000 UTC" firstStartedPulling="2025-10-08 14:40:36.62910541 +0000 UTC m=+1061.780040487" lastFinishedPulling="2025-10-08 14:41:37.181931634 +0000 UTC m=+1122.332866711" observedRunningTime="2025-10-08 14:41:38.348107855 +0000 UTC m=+1123.499042932" watchObservedRunningTime="2025-10-08 14:41:38.352191708 +0000 UTC m=+1123.503126785" Oct 08 14:41:38 crc kubenswrapper[4624]: I1008 14:41:38.384179 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6f6cd65c74-7vqb5" podStartSLOduration=54.087557984 podStartE2EDuration="54.384160025s" podCreationTimestamp="2025-10-08 14:40:44 +0000 UTC" firstStartedPulling="2025-10-08 14:41:36.987386841 +0000 UTC m=+1122.138321918" lastFinishedPulling="2025-10-08 14:41:37.283988882 +0000 UTC m=+1122.434923959" observedRunningTime="2025-10-08 14:41:38.380062682 +0000 UTC m=+1123.530997759" watchObservedRunningTime="2025-10-08 14:41:38.384160025 +0000 UTC m=+1123.535095102" Oct 08 14:41:39 crc kubenswrapper[4624]: I1008 14:41:39.321458 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67f45f8444-g8bbs" 
event={"ID":"378be2ad-3335-409f-b2eb-60b3997ed4f8","Type":"ContainerStarted","Data":"f9f47f6fddd5455b79c7a8df01c2e3793521a20e5f22d936d7fe100acfb88684"} Oct 08 14:41:39 crc kubenswrapper[4624]: I1008 14:41:39.325254 4624 generic.go:334] "Generic (PLEG): container finished" podID="791f20b1-069c-4d9d-b8ed-eb1330a191f8" containerID="33e4a671d890907137a641308f4b91d42becd83533d10f46fe992a0b7073f5be" exitCode=0 Oct 08 14:41:39 crc kubenswrapper[4624]: I1008 14:41:39.325300 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-z5qqs" event={"ID":"791f20b1-069c-4d9d-b8ed-eb1330a191f8","Type":"ContainerDied","Data":"33e4a671d890907137a641308f4b91d42becd83533d10f46fe992a0b7073f5be"} Oct 08 14:41:39 crc kubenswrapper[4624]: I1008 14:41:39.327364 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf057e0-e090-498a-bb8b-32835373555c","Type":"ContainerStarted","Data":"3a2a76d9a38b0933057153192f3355b4d87f0d49f14deab96b6422411aea652b"} Oct 08 14:41:39 crc kubenswrapper[4624]: I1008 14:41:39.330260 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tg77p" event={"ID":"c7b93b6b-3915-45f1-9f70-b186b5e7ed31","Type":"ContainerStarted","Data":"ae360f24965e6b88c9e5bf73133b74fcc49d6e8fa04ad5e3ccea87515c728cdd"} Oct 08 14:41:39 crc kubenswrapper[4624]: I1008 14:41:39.352557 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-67f45f8444-g8bbs" podStartSLOduration=55.352534821 podStartE2EDuration="55.352534821s" podCreationTimestamp="2025-10-08 14:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:41:39.348593061 +0000 UTC m=+1124.499528138" watchObservedRunningTime="2025-10-08 14:41:39.352534821 +0000 UTC m=+1124.503469898" Oct 08 14:41:39 crc kubenswrapper[4624]: I1008 14:41:39.392919 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-tg77p" podStartSLOduration=9.39290334 podStartE2EDuration="9.39290334s" podCreationTimestamp="2025-10-08 14:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:41:39.388351335 +0000 UTC m=+1124.539286432" watchObservedRunningTime="2025-10-08 14:41:39.39290334 +0000 UTC m=+1124.543838417" Oct 08 14:41:40 crc kubenswrapper[4624]: I1008 14:41:40.343259 4624 generic.go:334] "Generic (PLEG): container finished" podID="a0efe065-02ed-472d-b560-6ddfcee095c4" containerID="b74a347620894560b1e82eccb1b9bbc6d9147ead99e26945052b765fd61fe8bb" exitCode=0 Oct 08 14:41:40 crc kubenswrapper[4624]: I1008 14:41:40.343346 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xkc5w" event={"ID":"a0efe065-02ed-472d-b560-6ddfcee095c4","Type":"ContainerDied","Data":"b74a347620894560b1e82eccb1b9bbc6d9147ead99e26945052b765fd61fe8bb"} Oct 08 14:41:40 crc kubenswrapper[4624]: I1008 14:41:40.827059 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-z5qqs" Oct 08 14:41:40 crc kubenswrapper[4624]: I1008 14:41:40.880845 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791f20b1-069c-4d9d-b8ed-eb1330a191f8-combined-ca-bundle\") pod \"791f20b1-069c-4d9d-b8ed-eb1330a191f8\" (UID: \"791f20b1-069c-4d9d-b8ed-eb1330a191f8\") " Oct 08 14:41:40 crc kubenswrapper[4624]: I1008 14:41:40.881074 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/791f20b1-069c-4d9d-b8ed-eb1330a191f8-config\") pod \"791f20b1-069c-4d9d-b8ed-eb1330a191f8\" (UID: \"791f20b1-069c-4d9d-b8ed-eb1330a191f8\") " Oct 08 14:41:40 crc kubenswrapper[4624]: I1008 14:41:40.881183 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdrpx\" (UniqueName: \"kubernetes.io/projected/791f20b1-069c-4d9d-b8ed-eb1330a191f8-kube-api-access-xdrpx\") pod \"791f20b1-069c-4d9d-b8ed-eb1330a191f8\" (UID: \"791f20b1-069c-4d9d-b8ed-eb1330a191f8\") " Oct 08 14:41:40 crc kubenswrapper[4624]: I1008 14:41:40.892881 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/791f20b1-069c-4d9d-b8ed-eb1330a191f8-kube-api-access-xdrpx" (OuterVolumeSpecName: "kube-api-access-xdrpx") pod "791f20b1-069c-4d9d-b8ed-eb1330a191f8" (UID: "791f20b1-069c-4d9d-b8ed-eb1330a191f8"). InnerVolumeSpecName "kube-api-access-xdrpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:41:40 crc kubenswrapper[4624]: I1008 14:41:40.945784 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/791f20b1-069c-4d9d-b8ed-eb1330a191f8-config" (OuterVolumeSpecName: "config") pod "791f20b1-069c-4d9d-b8ed-eb1330a191f8" (UID: "791f20b1-069c-4d9d-b8ed-eb1330a191f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:41:40 crc kubenswrapper[4624]: I1008 14:41:40.946690 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/791f20b1-069c-4d9d-b8ed-eb1330a191f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "791f20b1-069c-4d9d-b8ed-eb1330a191f8" (UID: "791f20b1-069c-4d9d-b8ed-eb1330a191f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:41:40 crc kubenswrapper[4624]: I1008 14:41:40.983151 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791f20b1-069c-4d9d-b8ed-eb1330a191f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:40 crc kubenswrapper[4624]: I1008 14:41:40.983191 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/791f20b1-069c-4d9d-b8ed-eb1330a191f8-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:40 crc kubenswrapper[4624]: I1008 14:41:40.983204 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdrpx\" (UniqueName: \"kubernetes.io/projected/791f20b1-069c-4d9d-b8ed-eb1330a191f8-kube-api-access-xdrpx\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.353117 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-z5qqs" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.353111 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-z5qqs" event={"ID":"791f20b1-069c-4d9d-b8ed-eb1330a191f8","Type":"ContainerDied","Data":"2f1d179a42df412d4312bc8b76b3601cf30c752dc540ce20776bf98bd85733e8"} Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.353174 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f1d179a42df412d4312bc8b76b3601cf30c752dc540ce20776bf98bd85733e8" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.651169 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85bddccb77-p452h"] Oct 08 14:41:41 crc kubenswrapper[4624]: E1008 14:41:41.651523 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="791f20b1-069c-4d9d-b8ed-eb1330a191f8" containerName="neutron-db-sync" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.651539 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="791f20b1-069c-4d9d-b8ed-eb1330a191f8" containerName="neutron-db-sync" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.651704 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="791f20b1-069c-4d9d-b8ed-eb1330a191f8" containerName="neutron-db-sync" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.662811 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85bddccb77-p452h" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.670109 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85bddccb77-p452h"] Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.701693 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-ovsdbserver-nb\") pod \"dnsmasq-dns-85bddccb77-p452h\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " pod="openstack/dnsmasq-dns-85bddccb77-p452h" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.701745 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-dns-swift-storage-0\") pod \"dnsmasq-dns-85bddccb77-p452h\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " pod="openstack/dnsmasq-dns-85bddccb77-p452h" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.702896 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9np4c\" (UniqueName: \"kubernetes.io/projected/622ca141-49de-4415-a26b-8ea7ddd47209-kube-api-access-9np4c\") pod \"dnsmasq-dns-85bddccb77-p452h\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " pod="openstack/dnsmasq-dns-85bddccb77-p452h" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.703016 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-dns-svc\") pod \"dnsmasq-dns-85bddccb77-p452h\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " pod="openstack/dnsmasq-dns-85bddccb77-p452h" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.703068 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-config\") pod \"dnsmasq-dns-85bddccb77-p452h\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " pod="openstack/dnsmasq-dns-85bddccb77-p452h" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.703096 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-ovsdbserver-sb\") pod \"dnsmasq-dns-85bddccb77-p452h\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " pod="openstack/dnsmasq-dns-85bddccb77-p452h" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.724823 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-86d56d5f7b-2jwqz"] Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.726186 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86d56d5f7b-2jwqz" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.733744 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-krwdj" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.734036 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.734292 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.734861 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.755117 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86d56d5f7b-2jwqz"] Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.804933 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6275g\" (UniqueName: \"kubernetes.io/projected/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-kube-api-access-6275g\") pod \"neutron-86d56d5f7b-2jwqz\" (UID: \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\") " pod="openstack/neutron-86d56d5f7b-2jwqz" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.805003 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9np4c\" (UniqueName: \"kubernetes.io/projected/622ca141-49de-4415-a26b-8ea7ddd47209-kube-api-access-9np4c\") pod \"dnsmasq-dns-85bddccb77-p452h\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " pod="openstack/dnsmasq-dns-85bddccb77-p452h" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.805068 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-httpd-config\") pod \"neutron-86d56d5f7b-2jwqz\" (UID: \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\") " pod="openstack/neutron-86d56d5f7b-2jwqz" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.805097 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-dns-svc\") pod \"dnsmasq-dns-85bddccb77-p452h\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " pod="openstack/dnsmasq-dns-85bddccb77-p452h" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.805119 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-config\") pod \"dnsmasq-dns-85bddccb77-p452h\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " pod="openstack/dnsmasq-dns-85bddccb77-p452h" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.805139 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-ovsdbserver-sb\") pod \"dnsmasq-dns-85bddccb77-p452h\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " pod="openstack/dnsmasq-dns-85bddccb77-p452h" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.805181 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-config\") pod \"neutron-86d56d5f7b-2jwqz\" (UID: \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\") " pod="openstack/neutron-86d56d5f7b-2jwqz" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.805206 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-combined-ca-bundle\") pod \"neutron-86d56d5f7b-2jwqz\" (UID: \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\") " pod="openstack/neutron-86d56d5f7b-2jwqz" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.805242 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-ovsdbserver-nb\") pod \"dnsmasq-dns-85bddccb77-p452h\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " pod="openstack/dnsmasq-dns-85bddccb77-p452h" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.805267 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-dns-swift-storage-0\") pod \"dnsmasq-dns-85bddccb77-p452h\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " pod="openstack/dnsmasq-dns-85bddccb77-p452h" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.806253 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-dns-svc\") pod \"dnsmasq-dns-85bddccb77-p452h\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " pod="openstack/dnsmasq-dns-85bddccb77-p452h" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.806780 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-ovsdbserver-sb\") pod \"dnsmasq-dns-85bddccb77-p452h\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " pod="openstack/dnsmasq-dns-85bddccb77-p452h" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.806924 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-config\") pod \"dnsmasq-dns-85bddccb77-p452h\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " pod="openstack/dnsmasq-dns-85bddccb77-p452h" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.807020 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-ovndb-tls-certs\") pod \"neutron-86d56d5f7b-2jwqz\" (UID: \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\") " pod="openstack/neutron-86d56d5f7b-2jwqz" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.807405 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-dns-swift-storage-0\") pod \"dnsmasq-dns-85bddccb77-p452h\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " pod="openstack/dnsmasq-dns-85bddccb77-p452h" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.807658 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-ovsdbserver-nb\") pod \"dnsmasq-dns-85bddccb77-p452h\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " pod="openstack/dnsmasq-dns-85bddccb77-p452h" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.829137 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9np4c\" (UniqueName: \"kubernetes.io/projected/622ca141-49de-4415-a26b-8ea7ddd47209-kube-api-access-9np4c\") pod \"dnsmasq-dns-85bddccb77-p452h\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " pod="openstack/dnsmasq-dns-85bddccb77-p452h" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.911582 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-httpd-config\") pod \"neutron-86d56d5f7b-2jwqz\" (UID: \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\") " pod="openstack/neutron-86d56d5f7b-2jwqz" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.911718 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-config\") pod \"neutron-86d56d5f7b-2jwqz\" (UID: \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\") " pod="openstack/neutron-86d56d5f7b-2jwqz" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.911747 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-combined-ca-bundle\") pod \"neutron-86d56d5f7b-2jwqz\" (UID: \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\") " pod="openstack/neutron-86d56d5f7b-2jwqz" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.911857 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-ovndb-tls-certs\") pod \"neutron-86d56d5f7b-2jwqz\" (UID: \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\") " pod="openstack/neutron-86d56d5f7b-2jwqz" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.911894 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6275g\" (UniqueName: \"kubernetes.io/projected/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-kube-api-access-6275g\") pod \"neutron-86d56d5f7b-2jwqz\" (UID: \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\") " pod="openstack/neutron-86d56d5f7b-2jwqz" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.917588 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-httpd-config\") pod \"neutron-86d56d5f7b-2jwqz\" (UID: 
\"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\") " pod="openstack/neutron-86d56d5f7b-2jwqz" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.918116 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-ovndb-tls-certs\") pod \"neutron-86d56d5f7b-2jwqz\" (UID: \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\") " pod="openstack/neutron-86d56d5f7b-2jwqz" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.923601 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-combined-ca-bundle\") pod \"neutron-86d56d5f7b-2jwqz\" (UID: \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\") " pod="openstack/neutron-86d56d5f7b-2jwqz" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.925148 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-config\") pod \"neutron-86d56d5f7b-2jwqz\" (UID: \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\") " pod="openstack/neutron-86d56d5f7b-2jwqz" Oct 08 14:41:41 crc kubenswrapper[4624]: I1008 14:41:41.940249 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6275g\" (UniqueName: \"kubernetes.io/projected/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-kube-api-access-6275g\") pod \"neutron-86d56d5f7b-2jwqz\" (UID: \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\") " pod="openstack/neutron-86d56d5f7b-2jwqz" Oct 08 14:41:42 crc kubenswrapper[4624]: I1008 14:41:42.004709 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85bddccb77-p452h" Oct 08 14:41:42 crc kubenswrapper[4624]: I1008 14:41:42.058064 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86d56d5f7b-2jwqz" Oct 08 14:41:42 crc kubenswrapper[4624]: I1008 14:41:42.377395 4624 generic.go:334] "Generic (PLEG): container finished" podID="1d554826-ab8b-40f6-9be4-ad2949010968" containerID="9a1f22539bbc52f0ddbbfd0cc3714ee557621e3905749c9563270ac682934197" exitCode=0 Oct 08 14:41:42 crc kubenswrapper[4624]: I1008 14:41:42.377450 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lfgwx" event={"ID":"1d554826-ab8b-40f6-9be4-ad2949010968","Type":"ContainerDied","Data":"9a1f22539bbc52f0ddbbfd0cc3714ee557621e3905749c9563270ac682934197"} Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.153748 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lfgwx" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.164273 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-xkc5w" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.260182 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d554826-ab8b-40f6-9be4-ad2949010968-combined-ca-bundle\") pod \"1d554826-ab8b-40f6-9be4-ad2949010968\" (UID: \"1d554826-ab8b-40f6-9be4-ad2949010968\") " Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.260288 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0efe065-02ed-472d-b560-6ddfcee095c4-combined-ca-bundle\") pod \"a0efe065-02ed-472d-b560-6ddfcee095c4\" (UID: \"a0efe065-02ed-472d-b560-6ddfcee095c4\") " Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.260337 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k22j\" (UniqueName: \"kubernetes.io/projected/1d554826-ab8b-40f6-9be4-ad2949010968-kube-api-access-8k22j\") pod \"1d554826-ab8b-40f6-9be4-ad2949010968\" (UID: \"1d554826-ab8b-40f6-9be4-ad2949010968\") " Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.260416 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0efe065-02ed-472d-b560-6ddfcee095c4-logs\") pod \"a0efe065-02ed-472d-b560-6ddfcee095c4\" (UID: \"a0efe065-02ed-472d-b560-6ddfcee095c4\") " Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.260453 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1d554826-ab8b-40f6-9be4-ad2949010968-db-sync-config-data\") pod \"1d554826-ab8b-40f6-9be4-ad2949010968\" (UID: \"1d554826-ab8b-40f6-9be4-ad2949010968\") " Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.260475 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm49k\" (UniqueName: \"kubernetes.io/projected/a0efe065-02ed-472d-b560-6ddfcee095c4-kube-api-access-cm49k\") pod \"a0efe065-02ed-472d-b560-6ddfcee095c4\" (UID: \"a0efe065-02ed-472d-b560-6ddfcee095c4\") " Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.260530 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0efe065-02ed-472d-b560-6ddfcee095c4-config-data\") pod \"a0efe065-02ed-472d-b560-6ddfcee095c4\" (UID: \"a0efe065-02ed-472d-b560-6ddfcee095c4\") " Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.260583 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0efe065-02ed-472d-b560-6ddfcee095c4-scripts\") pod \"a0efe065-02ed-472d-b560-6ddfcee095c4\" (UID: \"a0efe065-02ed-472d-b560-6ddfcee095c4\") " Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.267210 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0efe065-02ed-472d-b560-6ddfcee095c4-scripts" (OuterVolumeSpecName: "scripts") pod "a0efe065-02ed-472d-b560-6ddfcee095c4" (UID: "a0efe065-02ed-472d-b560-6ddfcee095c4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.270352 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0efe065-02ed-472d-b560-6ddfcee095c4-kube-api-access-cm49k" (OuterVolumeSpecName: "kube-api-access-cm49k") pod "a0efe065-02ed-472d-b560-6ddfcee095c4" (UID: "a0efe065-02ed-472d-b560-6ddfcee095c4"). InnerVolumeSpecName "kube-api-access-cm49k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.273427 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0efe065-02ed-472d-b560-6ddfcee095c4-logs" (OuterVolumeSpecName: "logs") pod "a0efe065-02ed-472d-b560-6ddfcee095c4" (UID: "a0efe065-02ed-472d-b560-6ddfcee095c4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.273466 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d554826-ab8b-40f6-9be4-ad2949010968-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1d554826-ab8b-40f6-9be4-ad2949010968" (UID: "1d554826-ab8b-40f6-9be4-ad2949010968"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.300145 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d554826-ab8b-40f6-9be4-ad2949010968-kube-api-access-8k22j" (OuterVolumeSpecName: "kube-api-access-8k22j") pod "1d554826-ab8b-40f6-9be4-ad2949010968" (UID: "1d554826-ab8b-40f6-9be4-ad2949010968"). InnerVolumeSpecName "kube-api-access-8k22j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.315991 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0efe065-02ed-472d-b560-6ddfcee095c4-config-data" (OuterVolumeSpecName: "config-data") pod "a0efe065-02ed-472d-b560-6ddfcee095c4" (UID: "a0efe065-02ed-472d-b560-6ddfcee095c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.318220 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0efe065-02ed-472d-b560-6ddfcee095c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0efe065-02ed-472d-b560-6ddfcee095c4" (UID: "a0efe065-02ed-472d-b560-6ddfcee095c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.334401 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d554826-ab8b-40f6-9be4-ad2949010968-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d554826-ab8b-40f6-9be4-ad2949010968" (UID: "1d554826-ab8b-40f6-9be4-ad2949010968"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.363625 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d554826-ab8b-40f6-9be4-ad2949010968-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.363673 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0efe065-02ed-472d-b560-6ddfcee095c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.363683 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k22j\" (UniqueName: \"kubernetes.io/projected/1d554826-ab8b-40f6-9be4-ad2949010968-kube-api-access-8k22j\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.363694 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0efe065-02ed-472d-b560-6ddfcee095c4-logs\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.363704 4624 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1d554826-ab8b-40f6-9be4-ad2949010968-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.363713 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm49k\" (UniqueName: \"kubernetes.io/projected/a0efe065-02ed-472d-b560-6ddfcee095c4-kube-api-access-cm49k\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.363721 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0efe065-02ed-472d-b560-6ddfcee095c4-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.363729 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0efe065-02ed-472d-b560-6ddfcee095c4-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.403882 4624 generic.go:334] "Generic (PLEG): container finished" podID="c7b93b6b-3915-45f1-9f70-b186b5e7ed31" containerID="ae360f24965e6b88c9e5bf73133b74fcc49d6e8fa04ad5e3ccea87515c728cdd" exitCode=0 Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.403944 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tg77p" event={"ID":"c7b93b6b-3915-45f1-9f70-b186b5e7ed31","Type":"ContainerDied","Data":"ae360f24965e6b88c9e5bf73133b74fcc49d6e8fa04ad5e3ccea87515c728cdd"} Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.409988 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lfgwx" event={"ID":"1d554826-ab8b-40f6-9be4-ad2949010968","Type":"ContainerDied","Data":"6f168b7e240d7c4eefff2cc2365586bb1346e5d577a2c6d28c3b0081fb59d47e"} Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.410026 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f168b7e240d7c4eefff2cc2365586bb1346e5d577a2c6d28c3b0081fb59d47e" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.410111 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-lfgwx" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.427339 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xkc5w" event={"ID":"a0efe065-02ed-472d-b560-6ddfcee095c4","Type":"ContainerDied","Data":"f9f30e7db3be58da8a3f6e995935abbea6ed9c2fd5bdba25cf40d9842f2897f7"} Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.427563 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9f30e7db3be58da8a3f6e995935abbea6ed9c2fd5bdba25cf40d9842f2897f7" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.427858 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xkc5w" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.611620 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.611728 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.647069 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-769b9cd88f-r425v"] Oct 08 14:41:44 crc kubenswrapper[4624]: E1008 14:41:44.647441 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d554826-ab8b-40f6-9be4-ad2949010968" containerName="barbican-db-sync" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.647458 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d554826-ab8b-40f6-9be4-ad2949010968" containerName="barbican-db-sync" Oct 08 14:41:44 crc kubenswrapper[4624]: E1008 14:41:44.647482 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0efe065-02ed-472d-b560-6ddfcee095c4" containerName="placement-db-sync" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.647489 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0efe065-02ed-472d-b560-6ddfcee095c4" containerName="placement-db-sync" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.647661 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0efe065-02ed-472d-b560-6ddfcee095c4" containerName="placement-db-sync" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.647683 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d554826-ab8b-40f6-9be4-ad2949010968" containerName="barbican-db-sync" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.648572 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-769b9cd88f-r425v" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.651999 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.653457 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-kx29b" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.656225 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.681559 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-769b9cd88f-r425v"] Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.719571 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.720109 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.768696 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-db88bb74-6vl6r"] Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.770153 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-db88bb74-6vl6r" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.772490 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2051dd96-3f4a-42b1-9802-602bd9693aec-config-data\") pod \"barbican-worker-769b9cd88f-r425v\" (UID: \"2051dd96-3f4a-42b1-9802-602bd9693aec\") " pod="openstack/barbican-worker-769b9cd88f-r425v" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.775460 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2051dd96-3f4a-42b1-9802-602bd9693aec-config-data-custom\") pod \"barbican-worker-769b9cd88f-r425v\" (UID: \"2051dd96-3f4a-42b1-9802-602bd9693aec\") " pod="openstack/barbican-worker-769b9cd88f-r425v" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.775585 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2051dd96-3f4a-42b1-9802-602bd9693aec-combined-ca-bundle\") pod \"barbican-worker-769b9cd88f-r425v\" (UID: \"2051dd96-3f4a-42b1-9802-602bd9693aec\") " pod="openstack/barbican-worker-769b9cd88f-r425v" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.775789 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zdbh\" (UniqueName: \"kubernetes.io/projected/2051dd96-3f4a-42b1-9802-602bd9693aec-kube-api-access-4zdbh\") pod \"barbican-worker-769b9cd88f-r425v\" (UID: \"2051dd96-3f4a-42b1-9802-602bd9693aec\") " pod="openstack/barbican-worker-769b9cd88f-r425v" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.775811 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2051dd96-3f4a-42b1-9802-602bd9693aec-logs\") pod \"barbican-worker-769b9cd88f-r425v\" (UID: \"2051dd96-3f4a-42b1-9802-602bd9693aec\") " 
pod="openstack/barbican-worker-769b9cd88f-r425v" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.781977 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.806776 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-db88bb74-6vl6r"] Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.864859 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85bddccb77-p452h"] Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.878144 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/255b203d-1921-40ab-8c4f-7f582a647651-config-data\") pod \"barbican-keystone-listener-db88bb74-6vl6r\" (UID: \"255b203d-1921-40ab-8c4f-7f582a647651\") " pod="openstack/barbican-keystone-listener-db88bb74-6vl6r" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.878246 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zdbh\" (UniqueName: \"kubernetes.io/projected/2051dd96-3f4a-42b1-9802-602bd9693aec-kube-api-access-4zdbh\") pod \"barbican-worker-769b9cd88f-r425v\" (UID: \"2051dd96-3f4a-42b1-9802-602bd9693aec\") " pod="openstack/barbican-worker-769b9cd88f-r425v" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.878270 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nkf5\" (UniqueName: \"kubernetes.io/projected/255b203d-1921-40ab-8c4f-7f582a647651-kube-api-access-5nkf5\") pod \"barbican-keystone-listener-db88bb74-6vl6r\" (UID: \"255b203d-1921-40ab-8c4f-7f582a647651\") " pod="openstack/barbican-keystone-listener-db88bb74-6vl6r" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.878291 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2051dd96-3f4a-42b1-9802-602bd9693aec-logs\") pod \"barbican-worker-769b9cd88f-r425v\" (UID: \"2051dd96-3f4a-42b1-9802-602bd9693aec\") " pod="openstack/barbican-worker-769b9cd88f-r425v" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.878361 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/255b203d-1921-40ab-8c4f-7f582a647651-config-data-custom\") pod \"barbican-keystone-listener-db88bb74-6vl6r\" (UID: \"255b203d-1921-40ab-8c4f-7f582a647651\") " pod="openstack/barbican-keystone-listener-db88bb74-6vl6r" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.878412 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2051dd96-3f4a-42b1-9802-602bd9693aec-config-data\") pod \"barbican-worker-769b9cd88f-r425v\" (UID: \"2051dd96-3f4a-42b1-9802-602bd9693aec\") " pod="openstack/barbican-worker-769b9cd88f-r425v" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.878488 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2051dd96-3f4a-42b1-9802-602bd9693aec-config-data-custom\") pod \"barbican-worker-769b9cd88f-r425v\" (UID: \"2051dd96-3f4a-42b1-9802-602bd9693aec\") " pod="openstack/barbican-worker-769b9cd88f-r425v" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.878522 4624 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/255b203d-1921-40ab-8c4f-7f582a647651-logs\") pod \"barbican-keystone-listener-db88bb74-6vl6r\" (UID: \"255b203d-1921-40ab-8c4f-7f582a647651\") " pod="openstack/barbican-keystone-listener-db88bb74-6vl6r" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.878563 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2051dd96-3f4a-42b1-9802-602bd9693aec-combined-ca-bundle\") pod \"barbican-worker-769b9cd88f-r425v\" (UID: \"2051dd96-3f4a-42b1-9802-602bd9693aec\") " pod="openstack/barbican-worker-769b9cd88f-r425v" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.878619 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255b203d-1921-40ab-8c4f-7f582a647651-combined-ca-bundle\") pod \"barbican-keystone-listener-db88bb74-6vl6r\" (UID: \"255b203d-1921-40ab-8c4f-7f582a647651\") " pod="openstack/barbican-keystone-listener-db88bb74-6vl6r" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.880333 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2051dd96-3f4a-42b1-9802-602bd9693aec-logs\") pod \"barbican-worker-769b9cd88f-r425v\" (UID: \"2051dd96-3f4a-42b1-9802-602bd9693aec\") " pod="openstack/barbican-worker-769b9cd88f-r425v" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.887730 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2051dd96-3f4a-42b1-9802-602bd9693aec-config-data-custom\") pod \"barbican-worker-769b9cd88f-r425v\" (UID: \"2051dd96-3f4a-42b1-9802-602bd9693aec\") " pod="openstack/barbican-worker-769b9cd88f-r425v" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.888172 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2051dd96-3f4a-42b1-9802-602bd9693aec-config-data\") pod \"barbican-worker-769b9cd88f-r425v\" (UID: \"2051dd96-3f4a-42b1-9802-602bd9693aec\") " pod="openstack/barbican-worker-769b9cd88f-r425v" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.888809 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2051dd96-3f4a-42b1-9802-602bd9693aec-combined-ca-bundle\") pod \"barbican-worker-769b9cd88f-r425v\" (UID: \"2051dd96-3f4a-42b1-9802-602bd9693aec\") " pod="openstack/barbican-worker-769b9cd88f-r425v" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.908167 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-856dcb88f7-rh7ns"] Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.909726 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.928581 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zdbh\" (UniqueName: \"kubernetes.io/projected/2051dd96-3f4a-42b1-9802-602bd9693aec-kube-api-access-4zdbh\") pod \"barbican-worker-769b9cd88f-r425v\" (UID: \"2051dd96-3f4a-42b1-9802-602bd9693aec\") " pod="openstack/barbican-worker-769b9cd88f-r425v" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.960862 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7bd5b645c6-bpx66"] Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.962615 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.967525 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.981190 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/255b203d-1921-40ab-8c4f-7f582a647651-config-data-custom\") pod \"barbican-keystone-listener-db88bb74-6vl6r\" (UID: \"255b203d-1921-40ab-8c4f-7f582a647651\") " pod="openstack/barbican-keystone-listener-db88bb74-6vl6r" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.981270 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/255b203d-1921-40ab-8c4f-7f582a647651-logs\") pod \"barbican-keystone-listener-db88bb74-6vl6r\" (UID: \"255b203d-1921-40ab-8c4f-7f582a647651\") " pod="openstack/barbican-keystone-listener-db88bb74-6vl6r" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.981320 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255b203d-1921-40ab-8c4f-7f582a647651-combined-ca-bundle\") pod \"barbican-keystone-listener-db88bb74-6vl6r\" (UID: \"255b203d-1921-40ab-8c4f-7f582a647651\") " pod="openstack/barbican-keystone-listener-db88bb74-6vl6r" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.981348 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/255b203d-1921-40ab-8c4f-7f582a647651-config-data\") pod \"barbican-keystone-listener-db88bb74-6vl6r\" (UID: \"255b203d-1921-40ab-8c4f-7f582a647651\") " pod="openstack/barbican-keystone-listener-db88bb74-6vl6r" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.981376 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nkf5\" (UniqueName: \"kubernetes.io/projected/255b203d-1921-40ab-8c4f-7f582a647651-kube-api-access-5nkf5\") pod \"barbican-keystone-listener-db88bb74-6vl6r\" (UID: \"255b203d-1921-40ab-8c4f-7f582a647651\") " pod="openstack/barbican-keystone-listener-db88bb74-6vl6r" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.985556 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/255b203d-1921-40ab-8c4f-7f582a647651-config-data-custom\") pod \"barbican-keystone-listener-db88bb74-6vl6r\" (UID: \"255b203d-1921-40ab-8c4f-7f582a647651\") " pod="openstack/barbican-keystone-listener-db88bb74-6vl6r" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.985812 4624 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/255b203d-1921-40ab-8c4f-7f582a647651-logs\") pod \"barbican-keystone-listener-db88bb74-6vl6r\" (UID: \"255b203d-1921-40ab-8c4f-7f582a647651\") " pod="openstack/barbican-keystone-listener-db88bb74-6vl6r" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.990440 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255b203d-1921-40ab-8c4f-7f582a647651-combined-ca-bundle\") pod \"barbican-keystone-listener-db88bb74-6vl6r\" (UID: \"255b203d-1921-40ab-8c4f-7f582a647651\") " pod="openstack/barbican-keystone-listener-db88bb74-6vl6r" Oct 08 14:41:44 crc kubenswrapper[4624]: I1008 14:41:44.999694 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/255b203d-1921-40ab-8c4f-7f582a647651-config-data\") pod \"barbican-keystone-listener-db88bb74-6vl6r\" (UID: \"255b203d-1921-40ab-8c4f-7f582a647651\") " pod="openstack/barbican-keystone-listener-db88bb74-6vl6r" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.011284 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nkf5\" (UniqueName: \"kubernetes.io/projected/255b203d-1921-40ab-8c4f-7f582a647651-kube-api-access-5nkf5\") pod \"barbican-keystone-listener-db88bb74-6vl6r\" (UID: \"255b203d-1921-40ab-8c4f-7f582a647651\") " pod="openstack/barbican-keystone-listener-db88bb74-6vl6r" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.012962 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-856dcb88f7-rh7ns"] Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.016760 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-769b9cd88f-r425v" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.045994 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7bd5b645c6-bpx66"] Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.082456 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-ovsdbserver-sb\") pod \"dnsmasq-dns-856dcb88f7-rh7ns\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") " pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.082516 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a6e279-4f8e-4293-868f-018f75fbba17-combined-ca-bundle\") pod \"barbican-api-7bd5b645c6-bpx66\" (UID: \"c5a6e279-4f8e-4293-868f-018f75fbba17\") " pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.082573 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxzgc\" (UniqueName: \"kubernetes.io/projected/c5a6e279-4f8e-4293-868f-018f75fbba17-kube-api-access-fxzgc\") pod \"barbican-api-7bd5b645c6-bpx66\" (UID: \"c5a6e279-4f8e-4293-868f-018f75fbba17\") " pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.082595 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-ovsdbserver-nb\") pod \"dnsmasq-dns-856dcb88f7-rh7ns\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") " pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.082618 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-config\") pod \"dnsmasq-dns-856dcb88f7-rh7ns\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") " pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.082657 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhxk6\" (UniqueName: \"kubernetes.io/projected/9d94066f-cfff-4398-8d02-b47b7ed819ac-kube-api-access-dhxk6\") pod \"dnsmasq-dns-856dcb88f7-rh7ns\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") " pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.082728 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-dns-svc\") pod \"dnsmasq-dns-856dcb88f7-rh7ns\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") " pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.082755 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5a6e279-4f8e-4293-868f-018f75fbba17-logs\") pod \"barbican-api-7bd5b645c6-bpx66\" (UID: \"c5a6e279-4f8e-4293-868f-018f75fbba17\") " pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.082781 
4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a6e279-4f8e-4293-868f-018f75fbba17-config-data\") pod \"barbican-api-7bd5b645c6-bpx66\" (UID: \"c5a6e279-4f8e-4293-868f-018f75fbba17\") " pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.082803 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-dns-swift-storage-0\") pod \"dnsmasq-dns-856dcb88f7-rh7ns\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") " pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.082833 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5a6e279-4f8e-4293-868f-018f75fbba17-config-data-custom\") pod \"barbican-api-7bd5b645c6-bpx66\" (UID: \"c5a6e279-4f8e-4293-868f-018f75fbba17\") " pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.108399 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-db88bb74-6vl6r" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.184700 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxzgc\" (UniqueName: \"kubernetes.io/projected/c5a6e279-4f8e-4293-868f-018f75fbba17-kube-api-access-fxzgc\") pod \"barbican-api-7bd5b645c6-bpx66\" (UID: \"c5a6e279-4f8e-4293-868f-018f75fbba17\") " pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.185956 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-ovsdbserver-nb\") pod \"dnsmasq-dns-856dcb88f7-rh7ns\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") " pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.186104 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-config\") pod \"dnsmasq-dns-856dcb88f7-rh7ns\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") " pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.186216 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhxk6\" (UniqueName: \"kubernetes.io/projected/9d94066f-cfff-4398-8d02-b47b7ed819ac-kube-api-access-dhxk6\") pod \"dnsmasq-dns-856dcb88f7-rh7ns\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") " pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.186378 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-dns-svc\") pod \"dnsmasq-dns-856dcb88f7-rh7ns\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") " pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.186486 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5a6e279-4f8e-4293-868f-018f75fbba17-logs\") pod 
\"barbican-api-7bd5b645c6-bpx66\" (UID: \"c5a6e279-4f8e-4293-868f-018f75fbba17\") " pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.186599 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a6e279-4f8e-4293-868f-018f75fbba17-config-data\") pod \"barbican-api-7bd5b645c6-bpx66\" (UID: \"c5a6e279-4f8e-4293-868f-018f75fbba17\") " pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.186755 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-dns-swift-storage-0\") pod \"dnsmasq-dns-856dcb88f7-rh7ns\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") " pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.186911 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5a6e279-4f8e-4293-868f-018f75fbba17-config-data-custom\") pod \"barbican-api-7bd5b645c6-bpx66\" (UID: \"c5a6e279-4f8e-4293-868f-018f75fbba17\") " pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.187027 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-ovsdbserver-sb\") pod \"dnsmasq-dns-856dcb88f7-rh7ns\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") " pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.187115 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-ovsdbserver-nb\") pod \"dnsmasq-dns-856dcb88f7-rh7ns\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") " pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.187220 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a6e279-4f8e-4293-868f-018f75fbba17-combined-ca-bundle\") pod \"barbican-api-7bd5b645c6-bpx66\" (UID: \"c5a6e279-4f8e-4293-868f-018f75fbba17\") " pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.187459 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5a6e279-4f8e-4293-868f-018f75fbba17-logs\") pod \"barbican-api-7bd5b645c6-bpx66\" (UID: \"c5a6e279-4f8e-4293-868f-018f75fbba17\") " pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.188431 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-dns-swift-storage-0\") pod \"dnsmasq-dns-856dcb88f7-rh7ns\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") " pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.188444 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-dns-svc\") pod \"dnsmasq-dns-856dcb88f7-rh7ns\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") " 
pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.188831 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-ovsdbserver-sb\") pod \"dnsmasq-dns-856dcb88f7-rh7ns\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") " pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.189457 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-config\") pod \"dnsmasq-dns-856dcb88f7-rh7ns\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") " pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.193543 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a6e279-4f8e-4293-868f-018f75fbba17-combined-ca-bundle\") pod \"barbican-api-7bd5b645c6-bpx66\" (UID: \"c5a6e279-4f8e-4293-868f-018f75fbba17\") " pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.196119 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a6e279-4f8e-4293-868f-018f75fbba17-config-data\") pod \"barbican-api-7bd5b645c6-bpx66\" (UID: \"c5a6e279-4f8e-4293-868f-018f75fbba17\") " pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.201442 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5a6e279-4f8e-4293-868f-018f75fbba17-config-data-custom\") pod \"barbican-api-7bd5b645c6-bpx66\" (UID: \"c5a6e279-4f8e-4293-868f-018f75fbba17\") " pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.216234 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxzgc\" (UniqueName: \"kubernetes.io/projected/c5a6e279-4f8e-4293-868f-018f75fbba17-kube-api-access-fxzgc\") pod \"barbican-api-7bd5b645c6-bpx66\" (UID: \"c5a6e279-4f8e-4293-868f-018f75fbba17\") " pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.232161 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhxk6\" (UniqueName: \"kubernetes.io/projected/9d94066f-cfff-4398-8d02-b47b7ed819ac-kube-api-access-dhxk6\") pod \"dnsmasq-dns-856dcb88f7-rh7ns\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") " pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.253598 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.321152 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.363515 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7f4595c6f6-t4nns"] Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.364879 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.366616 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.366882 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.367797 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.368113 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.368517 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-5g2km" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.393479 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7f4595c6f6-t4nns"] Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.494410 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfd45b9a-e3a8-4f53-92bb-4e4dc0580365-public-tls-certs\") pod \"placement-7f4595c6f6-t4nns\" (UID: \"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365\") " pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.494495 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfd45b9a-e3a8-4f53-92bb-4e4dc0580365-internal-tls-certs\") pod \"placement-7f4595c6f6-t4nns\" (UID: \"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365\") " pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.494523 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7txb\" (UniqueName: \"kubernetes.io/projected/cfd45b9a-e3a8-4f53-92bb-4e4dc0580365-kube-api-access-m7txb\") pod \"placement-7f4595c6f6-t4nns\" (UID: \"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365\") " pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.494556 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfd45b9a-e3a8-4f53-92bb-4e4dc0580365-config-data\") pod \"placement-7f4595c6f6-t4nns\" (UID: \"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365\") " pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.494581 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfd45b9a-e3a8-4f53-92bb-4e4dc0580365-scripts\") pod \"placement-7f4595c6f6-t4nns\" (UID: \"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365\") " pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.494600 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfd45b9a-e3a8-4f53-92bb-4e4dc0580365-combined-ca-bundle\") pod \"placement-7f4595c6f6-t4nns\" (UID: \"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365\") " pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 
14:41:45.494620 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfd45b9a-e3a8-4f53-92bb-4e4dc0580365-logs\") pod \"placement-7f4595c6f6-t4nns\" (UID: \"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365\") " pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.504169 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7bb5f9bf4f-nll8n"] Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.505432 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.507906 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.512808 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.543117 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7bb5f9bf4f-nll8n"] Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.606184 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-internal-tls-certs\") pod \"neutron-7bb5f9bf4f-nll8n\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.606291 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-httpd-config\") pod \"neutron-7bb5f9bf4f-nll8n\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.606321 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfd45b9a-e3a8-4f53-92bb-4e4dc0580365-public-tls-certs\") pod \"placement-7f4595c6f6-t4nns\" (UID: \"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365\") " pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.606343 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-ovndb-tls-certs\") pod \"neutron-7bb5f9bf4f-nll8n\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.606492 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtw8c\" (UniqueName: \"kubernetes.io/projected/a8614612-c6f9-452f-a9b7-a47bce32ac81-kube-api-access-mtw8c\") pod \"neutron-7bb5f9bf4f-nll8n\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.606568 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfd45b9a-e3a8-4f53-92bb-4e4dc0580365-internal-tls-certs\") pod \"placement-7f4595c6f6-t4nns\" (UID: \"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365\") " 
pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.606662 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-config\") pod \"neutron-7bb5f9bf4f-nll8n\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.606693 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7txb\" (UniqueName: \"kubernetes.io/projected/cfd45b9a-e3a8-4f53-92bb-4e4dc0580365-kube-api-access-m7txb\") pod \"placement-7f4595c6f6-t4nns\" (UID: \"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365\") " pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.606770 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfd45b9a-e3a8-4f53-92bb-4e4dc0580365-config-data\") pod \"placement-7f4595c6f6-t4nns\" (UID: \"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365\") " pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.606842 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfd45b9a-e3a8-4f53-92bb-4e4dc0580365-scripts\") pod \"placement-7f4595c6f6-t4nns\" (UID: \"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365\") " pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.606875 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-public-tls-certs\") pod \"neutron-7bb5f9bf4f-nll8n\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.606922 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfd45b9a-e3a8-4f53-92bb-4e4dc0580365-combined-ca-bundle\") pod \"placement-7f4595c6f6-t4nns\" (UID: \"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365\") " pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.606949 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfd45b9a-e3a8-4f53-92bb-4e4dc0580365-logs\") pod \"placement-7f4595c6f6-t4nns\" (UID: \"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365\") " pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.607052 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-combined-ca-bundle\") pod \"neutron-7bb5f9bf4f-nll8n\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.617990 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfd45b9a-e3a8-4f53-92bb-4e4dc0580365-logs\") pod \"placement-7f4595c6f6-t4nns\" (UID: \"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365\") " pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.632490 4624 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfd45b9a-e3a8-4f53-92bb-4e4dc0580365-internal-tls-certs\") pod \"placement-7f4595c6f6-t4nns\" (UID: \"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365\") " pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.636188 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfd45b9a-e3a8-4f53-92bb-4e4dc0580365-scripts\") pod \"placement-7f4595c6f6-t4nns\" (UID: \"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365\") " pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.636775 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfd45b9a-e3a8-4f53-92bb-4e4dc0580365-public-tls-certs\") pod \"placement-7f4595c6f6-t4nns\" (UID: \"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365\") " pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.640838 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfd45b9a-e3a8-4f53-92bb-4e4dc0580365-config-data\") pod \"placement-7f4595c6f6-t4nns\" (UID: \"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365\") " pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.646263 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfd45b9a-e3a8-4f53-92bb-4e4dc0580365-combined-ca-bundle\") pod \"placement-7f4595c6f6-t4nns\" (UID: \"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365\") " pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.679516 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7txb\" (UniqueName: \"kubernetes.io/projected/cfd45b9a-e3a8-4f53-92bb-4e4dc0580365-kube-api-access-m7txb\") pod \"placement-7f4595c6f6-t4nns\" (UID: \"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365\") " pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.688874 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.708318 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-combined-ca-bundle\") pod \"neutron-7bb5f9bf4f-nll8n\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.708384 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-internal-tls-certs\") pod \"neutron-7bb5f9bf4f-nll8n\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.708414 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-httpd-config\") pod \"neutron-7bb5f9bf4f-nll8n\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.708433 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-ovndb-tls-certs\") pod \"neutron-7bb5f9bf4f-nll8n\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.708484 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtw8c\" (UniqueName: \"kubernetes.io/projected/a8614612-c6f9-452f-a9b7-a47bce32ac81-kube-api-access-mtw8c\") pod \"neutron-7bb5f9bf4f-nll8n\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.708512 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-config\") pod \"neutron-7bb5f9bf4f-nll8n\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.708548 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-public-tls-certs\") pod \"neutron-7bb5f9bf4f-nll8n\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.732544 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-combined-ca-bundle\") pod \"neutron-7bb5f9bf4f-nll8n\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.732546 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-ovndb-tls-certs\") pod \"neutron-7bb5f9bf4f-nll8n\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.735989 4624 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-public-tls-certs\") pod \"neutron-7bb5f9bf4f-nll8n\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.740278 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-httpd-config\") pod \"neutron-7bb5f9bf4f-nll8n\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.742821 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-internal-tls-certs\") pod \"neutron-7bb5f9bf4f-nll8n\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.745438 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-config\") pod \"neutron-7bb5f9bf4f-nll8n\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.765612 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtw8c\" (UniqueName: \"kubernetes.io/projected/a8614612-c6f9-452f-a9b7-a47bce32ac81-kube-api-access-mtw8c\") pod \"neutron-7bb5f9bf4f-nll8n\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:45 crc kubenswrapper[4624]: I1008 14:41:45.836130 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:48 crc kubenswrapper[4624]: I1008 14:41:48.683475 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tg77p" Oct 08 14:41:48 crc kubenswrapper[4624]: I1008 14:41:48.771938 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4vd4\" (UniqueName: \"kubernetes.io/projected/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-kube-api-access-w4vd4\") pod \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " Oct 08 14:41:48 crc kubenswrapper[4624]: I1008 14:41:48.772013 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-fernet-keys\") pod \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " Oct 08 14:41:48 crc kubenswrapper[4624]: I1008 14:41:48.772101 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-scripts\") pod \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " Oct 08 14:41:48 crc kubenswrapper[4624]: I1008 14:41:48.772161 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-credential-keys\") pod \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " Oct 08 14:41:48 crc kubenswrapper[4624]: I1008 14:41:48.772223 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-config-data\") pod \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " Oct 08 14:41:48 crc kubenswrapper[4624]: I1008 14:41:48.772314 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-combined-ca-bundle\") pod \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\" (UID: \"c7b93b6b-3915-45f1-9f70-b186b5e7ed31\") " Oct 08 14:41:48 crc kubenswrapper[4624]: I1008 14:41:48.780981 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-scripts" (OuterVolumeSpecName: "scripts") pod "c7b93b6b-3915-45f1-9f70-b186b5e7ed31" (UID: "c7b93b6b-3915-45f1-9f70-b186b5e7ed31"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:41:48 crc kubenswrapper[4624]: I1008 14:41:48.786841 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c7b93b6b-3915-45f1-9f70-b186b5e7ed31" (UID: "c7b93b6b-3915-45f1-9f70-b186b5e7ed31"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:41:48 crc kubenswrapper[4624]: I1008 14:41:48.786858 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-kube-api-access-w4vd4" (OuterVolumeSpecName: "kube-api-access-w4vd4") pod "c7b93b6b-3915-45f1-9f70-b186b5e7ed31" (UID: "c7b93b6b-3915-45f1-9f70-b186b5e7ed31"). InnerVolumeSpecName "kube-api-access-w4vd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:41:48 crc kubenswrapper[4624]: I1008 14:41:48.797701 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c7b93b6b-3915-45f1-9f70-b186b5e7ed31" (UID: "c7b93b6b-3915-45f1-9f70-b186b5e7ed31"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:41:48 crc kubenswrapper[4624]: I1008 14:41:48.827768 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-config-data" (OuterVolumeSpecName: "config-data") pod "c7b93b6b-3915-45f1-9f70-b186b5e7ed31" (UID: "c7b93b6b-3915-45f1-9f70-b186b5e7ed31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:41:48 crc kubenswrapper[4624]: I1008 14:41:48.840223 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7b93b6b-3915-45f1-9f70-b186b5e7ed31" (UID: "c7b93b6b-3915-45f1-9f70-b186b5e7ed31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:41:48 crc kubenswrapper[4624]: I1008 14:41:48.875980 4624 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:48 crc kubenswrapper[4624]: I1008 14:41:48.876012 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:48 crc kubenswrapper[4624]: I1008 14:41:48.876022 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:48 crc kubenswrapper[4624]: I1008 14:41:48.876041 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4vd4\" (UniqueName: \"kubernetes.io/projected/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-kube-api-access-w4vd4\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:48 crc kubenswrapper[4624]: I1008 14:41:48.876051 4624 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:48 crc kubenswrapper[4624]: I1008 14:41:48.876060 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7b93b6b-3915-45f1-9f70-b186b5e7ed31-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:49 crc kubenswrapper[4624]: I1008 14:41:49.434287 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85bddccb77-p452h"] Oct 08 14:41:49 crc kubenswrapper[4624]: W1008 14:41:49.466990 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod622ca141_49de_4415_a26b_8ea7ddd47209.slice/crio-03d7a23cadab62cdf73353e33c2f6f20884b4b926faab940845bd05ae618b0e0 WatchSource:0}: Error finding container 03d7a23cadab62cdf73353e33c2f6f20884b4b926faab940845bd05ae618b0e0: Status 404 
returned error can't find the container with id 03d7a23cadab62cdf73353e33c2f6f20884b4b926faab940845bd05ae618b0e0 Oct 08 14:41:49 crc kubenswrapper[4624]: I1008 14:41:49.625220 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85bddccb77-p452h" event={"ID":"622ca141-49de-4415-a26b-8ea7ddd47209","Type":"ContainerStarted","Data":"03d7a23cadab62cdf73353e33c2f6f20884b4b926faab940845bd05ae618b0e0"} Oct 08 14:41:49 crc kubenswrapper[4624]: I1008 14:41:49.727165 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tg77p" event={"ID":"c7b93b6b-3915-45f1-9f70-b186b5e7ed31","Type":"ContainerDied","Data":"c754b9d5d35a0485d2caaa304b14a722a2e8f017c36481774ca9917ae1a4d06d"} Oct 08 14:41:49 crc kubenswrapper[4624]: I1008 14:41:49.727218 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c754b9d5d35a0485d2caaa304b14a722a2e8f017c36481774ca9917ae1a4d06d" Oct 08 14:41:49 crc kubenswrapper[4624]: I1008 14:41:49.727336 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tg77p" Oct 08 14:41:49 crc kubenswrapper[4624]: I1008 14:41:49.843821 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7bd5b645c6-bpx66"] Oct 08 14:41:49 crc kubenswrapper[4624]: I1008 14:41:49.896907 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-66c67d8-rjnw7"] Oct 08 14:41:49 crc kubenswrapper[4624]: E1008 14:41:49.898219 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b93b6b-3915-45f1-9f70-b186b5e7ed31" containerName="keystone-bootstrap" Oct 08 14:41:49 crc kubenswrapper[4624]: I1008 14:41:49.898242 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b93b6b-3915-45f1-9f70-b186b5e7ed31" containerName="keystone-bootstrap" Oct 08 14:41:49 crc kubenswrapper[4624]: I1008 14:41:49.899919 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b93b6b-3915-45f1-9f70-b186b5e7ed31" containerName="keystone-bootstrap" Oct 08 14:41:49 crc kubenswrapper[4624]: I1008 14:41:49.921303 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:49 crc kubenswrapper[4624]: I1008 14:41:49.936083 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Oct 08 14:41:49 crc kubenswrapper[4624]: I1008 14:41:49.936258 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Oct 08 14:41:49 crc kubenswrapper[4624]: I1008 14:41:49.961999 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-66c67d8-rjnw7"] Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.003733 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-65f89d8d74-ng4cv"] Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.004989 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.012700 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.012967 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.013113 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.013352 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.013919 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r9xgf" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.014305 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.027471 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtv4m\" (UniqueName: \"kubernetes.io/projected/12adc423-1b55-4b56-85a3-32e2aabbc82d-kube-api-access-qtv4m\") pod \"barbican-api-66c67d8-rjnw7\" (UID: \"12adc423-1b55-4b56-85a3-32e2aabbc82d\") " pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.027520 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12adc423-1b55-4b56-85a3-32e2aabbc82d-config-data\") pod \"barbican-api-66c67d8-rjnw7\" (UID: \"12adc423-1b55-4b56-85a3-32e2aabbc82d\") " pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.027558 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12adc423-1b55-4b56-85a3-32e2aabbc82d-config-data-custom\") pod \"barbican-api-66c67d8-rjnw7\" (UID: \"12adc423-1b55-4b56-85a3-32e2aabbc82d\") " pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.027616 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12adc423-1b55-4b56-85a3-32e2aabbc82d-logs\") pod \"barbican-api-66c67d8-rjnw7\" (UID: \"12adc423-1b55-4b56-85a3-32e2aabbc82d\") " pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.027738 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12adc423-1b55-4b56-85a3-32e2aabbc82d-public-tls-certs\") pod \"barbican-api-66c67d8-rjnw7\" (UID: \"12adc423-1b55-4b56-85a3-32e2aabbc82d\") " pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.027784 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12adc423-1b55-4b56-85a3-32e2aabbc82d-internal-tls-certs\") pod \"barbican-api-66c67d8-rjnw7\" (UID: \"12adc423-1b55-4b56-85a3-32e2aabbc82d\") " pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.027914 4624 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12adc423-1b55-4b56-85a3-32e2aabbc82d-combined-ca-bundle\") pod \"barbican-api-66c67d8-rjnw7\" (UID: \"12adc423-1b55-4b56-85a3-32e2aabbc82d\") " pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.046474 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-65f89d8d74-ng4cv"] Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.113616 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-856dcb88f7-rh7ns"] Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.130096 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8c703134-b38a-414e-8c09-5702aa32a638-fernet-keys\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.130169 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c703134-b38a-414e-8c09-5702aa32a638-internal-tls-certs\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.130216 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12adc423-1b55-4b56-85a3-32e2aabbc82d-public-tls-certs\") pod \"barbican-api-66c67d8-rjnw7\" (UID: \"12adc423-1b55-4b56-85a3-32e2aabbc82d\") " pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.130256 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12adc423-1b55-4b56-85a3-32e2aabbc82d-internal-tls-certs\") pod \"barbican-api-66c67d8-rjnw7\" (UID: \"12adc423-1b55-4b56-85a3-32e2aabbc82d\") " pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.130280 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c703134-b38a-414e-8c09-5702aa32a638-combined-ca-bundle\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.130306 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8c703134-b38a-414e-8c09-5702aa32a638-credential-keys\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.130352 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c703134-b38a-414e-8c09-5702aa32a638-public-tls-certs\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.130380 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12adc423-1b55-4b56-85a3-32e2aabbc82d-combined-ca-bundle\") pod \"barbican-api-66c67d8-rjnw7\" (UID: \"12adc423-1b55-4b56-85a3-32e2aabbc82d\") " pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.130463 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c703134-b38a-414e-8c09-5702aa32a638-config-data\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.130492 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtv4m\" (UniqueName: \"kubernetes.io/projected/12adc423-1b55-4b56-85a3-32e2aabbc82d-kube-api-access-qtv4m\") pod \"barbican-api-66c67d8-rjnw7\" (UID: \"12adc423-1b55-4b56-85a3-32e2aabbc82d\") " pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.130519 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12adc423-1b55-4b56-85a3-32e2aabbc82d-config-data\") pod \"barbican-api-66c67d8-rjnw7\" (UID: \"12adc423-1b55-4b56-85a3-32e2aabbc82d\") " pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.130550 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plwrs\" (UniqueName: \"kubernetes.io/projected/8c703134-b38a-414e-8c09-5702aa32a638-kube-api-access-plwrs\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.130575 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12adc423-1b55-4b56-85a3-32e2aabbc82d-config-data-custom\") pod \"barbican-api-66c67d8-rjnw7\" (UID: \"12adc423-1b55-4b56-85a3-32e2aabbc82d\") " pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.130607 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c703134-b38a-414e-8c09-5702aa32a638-scripts\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.130669 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12adc423-1b55-4b56-85a3-32e2aabbc82d-logs\") pod \"barbican-api-66c67d8-rjnw7\" (UID: \"12adc423-1b55-4b56-85a3-32e2aabbc82d\") " pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.131174 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12adc423-1b55-4b56-85a3-32e2aabbc82d-logs\") pod \"barbican-api-66c67d8-rjnw7\" (UID: \"12adc423-1b55-4b56-85a3-32e2aabbc82d\") " pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.189035 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/12adc423-1b55-4b56-85a3-32e2aabbc82d-config-data-custom\") pod \"barbican-api-66c67d8-rjnw7\" (UID: \"12adc423-1b55-4b56-85a3-32e2aabbc82d\") " pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.227858 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12adc423-1b55-4b56-85a3-32e2aabbc82d-public-tls-certs\") pod \"barbican-api-66c67d8-rjnw7\" (UID: \"12adc423-1b55-4b56-85a3-32e2aabbc82d\") " pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.228552 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12adc423-1b55-4b56-85a3-32e2aabbc82d-config-data\") pod \"barbican-api-66c67d8-rjnw7\" (UID: \"12adc423-1b55-4b56-85a3-32e2aabbc82d\") " pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.229961 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12adc423-1b55-4b56-85a3-32e2aabbc82d-combined-ca-bundle\") pod \"barbican-api-66c67d8-rjnw7\" (UID: \"12adc423-1b55-4b56-85a3-32e2aabbc82d\") " pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.231391 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12adc423-1b55-4b56-85a3-32e2aabbc82d-internal-tls-certs\") pod \"barbican-api-66c67d8-rjnw7\" (UID: \"12adc423-1b55-4b56-85a3-32e2aabbc82d\") " pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.232230 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c703134-b38a-414e-8c09-5702aa32a638-combined-ca-bundle\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.232352 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8c703134-b38a-414e-8c09-5702aa32a638-credential-keys\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.232392 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c703134-b38a-414e-8c09-5702aa32a638-public-tls-certs\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.232456 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c703134-b38a-414e-8c09-5702aa32a638-config-data\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.232491 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plwrs\" (UniqueName: \"kubernetes.io/projected/8c703134-b38a-414e-8c09-5702aa32a638-kube-api-access-plwrs\") pod \"keystone-65f89d8d74-ng4cv\" 
(UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.232516 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c703134-b38a-414e-8c09-5702aa32a638-scripts\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.232554 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8c703134-b38a-414e-8c09-5702aa32a638-fernet-keys\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.232585 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c703134-b38a-414e-8c09-5702aa32a638-internal-tls-certs\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.238286 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtv4m\" (UniqueName: \"kubernetes.io/projected/12adc423-1b55-4b56-85a3-32e2aabbc82d-kube-api-access-qtv4m\") pod \"barbican-api-66c67d8-rjnw7\" (UID: \"12adc423-1b55-4b56-85a3-32e2aabbc82d\") " pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.247175 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c703134-b38a-414e-8c09-5702aa32a638-internal-tls-certs\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.253821 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c703134-b38a-414e-8c09-5702aa32a638-config-data\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.259047 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c703134-b38a-414e-8c09-5702aa32a638-public-tls-certs\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.270018 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8c703134-b38a-414e-8c09-5702aa32a638-fernet-keys\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.273736 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8c703134-b38a-414e-8c09-5702aa32a638-credential-keys\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.274005 4624 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.282007 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c703134-b38a-414e-8c09-5702aa32a638-scripts\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.284365 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c703134-b38a-414e-8c09-5702aa32a638-combined-ca-bundle\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.284886 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plwrs\" (UniqueName: \"kubernetes.io/projected/8c703134-b38a-414e-8c09-5702aa32a638-kube-api-access-plwrs\") pod \"keystone-65f89d8d74-ng4cv\" (UID: \"8c703134-b38a-414e-8c09-5702aa32a638\") " pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.325690 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.328130 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-db88bb74-6vl6r"] Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.390355 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7bb5f9bf4f-nll8n"] Oct 08 14:41:50 crc kubenswrapper[4624]: W1008 14:41:50.542480 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8614612_c6f9_452f_a9b7_a47bce32ac81.slice/crio-74481dbc2dc69fb1c27a81597ea1faf84dc0d0a987874115e56172cc7348077d WatchSource:0}: Error finding container 74481dbc2dc69fb1c27a81597ea1faf84dc0d0a987874115e56172cc7348077d: Status 404 returned error can't find the container with id 74481dbc2dc69fb1c27a81597ea1faf84dc0d0a987874115e56172cc7348077d Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.566606 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7f4595c6f6-t4nns"] Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.674826 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-769b9cd88f-r425v"] Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.786834 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f4595c6f6-t4nns" event={"ID":"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365","Type":"ContainerStarted","Data":"3ec01a9c6fb3e84dc440acd45fefbc46878779e766e850daa8c4f0e57f4aa255"} Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.792145 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86d56d5f7b-2jwqz"] Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.837426 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf057e0-e090-498a-bb8b-32835373555c","Type":"ContainerStarted","Data":"fbbfffa5d6c5d9242bd7677594dabbe544f2224ed6ed82d41105fdb2e5e97485"} Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.850268 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5b645c6-bpx66" 
event={"ID":"c5a6e279-4f8e-4293-868f-018f75fbba17","Type":"ContainerStarted","Data":"b2b03e4ca5636731594d53ed44c58c2b6c3409d33665f3b50996850375e5cc53"} Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.913374 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bb5f9bf4f-nll8n" event={"ID":"a8614612-c6f9-452f-a9b7-a47bce32ac81","Type":"ContainerStarted","Data":"74481dbc2dc69fb1c27a81597ea1faf84dc0d0a987874115e56172cc7348077d"} Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.938337 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-db88bb74-6vl6r" event={"ID":"255b203d-1921-40ab-8c4f-7f582a647651","Type":"ContainerStarted","Data":"71572fee28e2bdbb310fa645303d476e12a92a83aa373ea3abde76530ba4b2ac"} Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.974279 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" event={"ID":"9d94066f-cfff-4398-8d02-b47b7ed819ac","Type":"ContainerStarted","Data":"9d70c422ea87ba6319d814b9ae489fa69611e57aa36710d3811114e2b2806fbe"} Oct 08 14:41:50 crc kubenswrapper[4624]: I1008 14:41:50.975728 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-f6sz5" event={"ID":"265c0058-98f4-4bcd-b413-8e3633ab56cd","Type":"ContainerStarted","Data":"c8dc6fa2e321c8d676e138cd68e8b4c3dc94c62c7e7783228df72b3f3ed310bf"} Oct 08 14:41:51 crc kubenswrapper[4624]: I1008 14:41:51.006989 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-f6sz5" podStartSLOduration=3.874980722 podStartE2EDuration="1m17.006970895s" podCreationTimestamp="2025-10-08 14:40:34 +0000 UTC" firstStartedPulling="2025-10-08 14:40:35.935397009 +0000 UTC m=+1061.086332086" lastFinishedPulling="2025-10-08 14:41:49.067387182 +0000 UTC m=+1134.218322259" observedRunningTime="2025-10-08 14:41:50.996363057 +0000 UTC m=+1136.147298134" watchObservedRunningTime="2025-10-08 14:41:51.006970895 +0000 UTC m=+1136.157905972" Oct 08 14:41:51 crc kubenswrapper[4624]: E1008 14:41:51.582451 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod622ca141_49de_4415_a26b_8ea7ddd47209.slice/crio-ee320ab72d0a6e10d78d25d305e1b85e868ecfd9a757dc22f1a1989c9470c779.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod622ca141_49de_4415_a26b_8ea7ddd47209.slice/crio-conmon-ee320ab72d0a6e10d78d25d305e1b85e868ecfd9a757dc22f1a1989c9470c779.scope\": RecentStats: unable to find data in memory cache]" Oct 08 14:41:51 crc kubenswrapper[4624]: I1008 14:41:51.692007 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-66c67d8-rjnw7"] Oct 08 14:41:51 crc kubenswrapper[4624]: W1008 14:41:51.771592 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12adc423_1b55_4b56_85a3_32e2aabbc82d.slice/crio-72a6b473c6d151a58d7ea4afe0f2566eb8cba7e5bae47eef83185dc580f1c2f3 WatchSource:0}: Error finding container 72a6b473c6d151a58d7ea4afe0f2566eb8cba7e5bae47eef83185dc580f1c2f3: Status 404 returned error can't find the container with id 72a6b473c6d151a58d7ea4afe0f2566eb8cba7e5bae47eef83185dc580f1c2f3 Oct 08 14:41:51 crc kubenswrapper[4624]: I1008 14:41:51.837591 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-65f89d8d74-ng4cv"] Oct 08 14:41:52 crc kubenswrapper[4624]: I1008 14:41:52.034998 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bb5f9bf4f-nll8n" event={"ID":"a8614612-c6f9-452f-a9b7-a47bce32ac81","Type":"ContainerStarted","Data":"578bf5df14c17d5ac2f6597c76c5f129fc5c7e23bee9b01130e478e8bfa328c3"} Oct 08 14:41:52 crc kubenswrapper[4624]: I1008 14:41:52.037262 4624 generic.go:334] "Generic (PLEG): container finished" podID="9d94066f-cfff-4398-8d02-b47b7ed819ac" containerID="71833c0660401888dfb263ff736e76c19ab57f55fcedf4a8b775d102ca10df75" exitCode=0 Oct 08 14:41:52 crc kubenswrapper[4624]: I1008 14:41:52.037327 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" event={"ID":"9d94066f-cfff-4398-8d02-b47b7ed819ac","Type":"ContainerDied","Data":"71833c0660401888dfb263ff736e76c19ab57f55fcedf4a8b775d102ca10df75"} Oct 08 14:41:52 crc kubenswrapper[4624]: I1008 14:41:52.105236 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66c67d8-rjnw7" event={"ID":"12adc423-1b55-4b56-85a3-32e2aabbc82d","Type":"ContainerStarted","Data":"72a6b473c6d151a58d7ea4afe0f2566eb8cba7e5bae47eef83185dc580f1c2f3"} Oct 08 14:41:52 crc kubenswrapper[4624]: I1008 14:41:52.120055 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f4595c6f6-t4nns" event={"ID":"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365","Type":"ContainerStarted","Data":"24449b9ca688d88411cdb37c111a1675e2ff5562c45464687baf24617d3ef9a1"} Oct 08 14:41:52 crc kubenswrapper[4624]: I1008 14:41:52.132151 4624 generic.go:334] "Generic (PLEG): container finished" podID="622ca141-49de-4415-a26b-8ea7ddd47209" containerID="ee320ab72d0a6e10d78d25d305e1b85e868ecfd9a757dc22f1a1989c9470c779" exitCode=0 Oct 08 14:41:52 crc kubenswrapper[4624]: I1008 14:41:52.132217 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85bddccb77-p452h" event={"ID":"622ca141-49de-4415-a26b-8ea7ddd47209","Type":"ContainerDied","Data":"ee320ab72d0a6e10d78d25d305e1b85e868ecfd9a757dc22f1a1989c9470c779"} Oct 08 14:41:52 crc kubenswrapper[4624]: I1008 14:41:52.154673 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-769b9cd88f-r425v" event={"ID":"2051dd96-3f4a-42b1-9802-602bd9693aec","Type":"ContainerStarted","Data":"588e9747756e6dd818562d14b7995c7ebfabf32565e75254abd4ab882b2a2c2b"} Oct 08 14:41:52 crc kubenswrapper[4624]: I1008 14:41:52.204860 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86d56d5f7b-2jwqz" event={"ID":"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76","Type":"ContainerStarted","Data":"53706d2bf9975b256367a481ba8656ad7a17b93df91d9501be276ee69712fb95"} Oct 08 14:41:52 crc kubenswrapper[4624]: I1008 14:41:52.205446 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86d56d5f7b-2jwqz" event={"ID":"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76","Type":"ContainerStarted","Data":"c10b50a50f4c642604eedcae7cebe23d333f20ab10d2d6c5c5bdcd9ffb9fc3a4"} Oct 08 14:41:52 crc kubenswrapper[4624]: I1008 14:41:52.229766 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-65f89d8d74-ng4cv" event={"ID":"8c703134-b38a-414e-8c09-5702aa32a638","Type":"ContainerStarted","Data":"368ba13e72eb7d12f4a1d8ebdcaa211132697eeb4be04a48cc92caa9be8f5a4c"} Oct 08 14:41:52 crc kubenswrapper[4624]: I1008 14:41:52.254683 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5b645c6-bpx66" 
event={"ID":"c5a6e279-4f8e-4293-868f-018f75fbba17","Type":"ContainerStarted","Data":"9b77812b6fda45db0ff78525807b76ab969945db63ffc6443e7190549589bf18"} Oct 08 14:41:52 crc kubenswrapper[4624]: I1008 14:41:52.254846 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:41:52 crc kubenswrapper[4624]: I1008 14:41:52.254862 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5b645c6-bpx66" event={"ID":"c5a6e279-4f8e-4293-868f-018f75fbba17","Type":"ContainerStarted","Data":"d8647e0577baccb8fe28220110e4b0e8182ead94221e07abed0ed82362e36795"} Oct 08 14:41:52 crc kubenswrapper[4624]: I1008 14:41:52.254875 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:41:52 crc kubenswrapper[4624]: I1008 14:41:52.284085 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7bd5b645c6-bpx66" podStartSLOduration=8.284069256 podStartE2EDuration="8.284069256s" podCreationTimestamp="2025-10-08 14:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:41:52.283092652 +0000 UTC m=+1137.434027729" watchObservedRunningTime="2025-10-08 14:41:52.284069256 +0000 UTC m=+1137.435004333" Oct 08 14:41:52 crc kubenswrapper[4624]: I1008 14:41:52.297988 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hv6dc" event={"ID":"cd53e103-7b25-4f61-a0f4-675ace133ab7","Type":"ContainerStarted","Data":"bfa5c132440552840d9c1f0caba4afb6694790cb6c8ae77b3891993bf5eaadd1"} Oct 08 14:41:52 crc kubenswrapper[4624]: I1008 14:41:52.325015 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-hv6dc" podStartSLOduration=5.183064636 podStartE2EDuration="1m18.32499314s" podCreationTimestamp="2025-10-08 14:40:34 +0000 UTC" firstStartedPulling="2025-10-08 14:40:35.935803539 +0000 UTC m=+1061.086738616" lastFinishedPulling="2025-10-08 14:41:49.077732043 +0000 UTC m=+1134.228667120" observedRunningTime="2025-10-08 14:41:52.323607505 +0000 UTC m=+1137.474542582" watchObservedRunningTime="2025-10-08 14:41:52.32499314 +0000 UTC m=+1137.475928217" Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.169406 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85bddccb77-p452h" Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.250553 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-config\") pod \"622ca141-49de-4415-a26b-8ea7ddd47209\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.256013 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-ovsdbserver-sb\") pod \"622ca141-49de-4415-a26b-8ea7ddd47209\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.257811 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-ovsdbserver-nb\") pod \"622ca141-49de-4415-a26b-8ea7ddd47209\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.257993 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9np4c\" (UniqueName: \"kubernetes.io/projected/622ca141-49de-4415-a26b-8ea7ddd47209-kube-api-access-9np4c\") pod \"622ca141-49de-4415-a26b-8ea7ddd47209\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.258036 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-dns-swift-storage-0\") pod \"622ca141-49de-4415-a26b-8ea7ddd47209\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.258060 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-dns-svc\") pod \"622ca141-49de-4415-a26b-8ea7ddd47209\" (UID: \"622ca141-49de-4415-a26b-8ea7ddd47209\") " Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.298868 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/622ca141-49de-4415-a26b-8ea7ddd47209-kube-api-access-9np4c" (OuterVolumeSpecName: "kube-api-access-9np4c") pod "622ca141-49de-4415-a26b-8ea7ddd47209" (UID: "622ca141-49de-4415-a26b-8ea7ddd47209"). InnerVolumeSpecName "kube-api-access-9np4c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.322807 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bb5f9bf4f-nll8n" event={"ID":"a8614612-c6f9-452f-a9b7-a47bce32ac81","Type":"ContainerStarted","Data":"a8ffbbec137d44b9295950520a3d3047cea34ef6e3443a42d6f4e26ad25f8de4"} Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.325182 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.335320 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66c67d8-rjnw7" event={"ID":"12adc423-1b55-4b56-85a3-32e2aabbc82d","Type":"ContainerStarted","Data":"c9085cb8d8f02768dc30828eabbb8dbd551fe55c70de26cd3c08994c4f6dde59"} Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.364664 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-config" (OuterVolumeSpecName: "config") pod "622ca141-49de-4415-a26b-8ea7ddd47209" (UID: "622ca141-49de-4415-a26b-8ea7ddd47209"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.365274 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9np4c\" (UniqueName: \"kubernetes.io/projected/622ca141-49de-4415-a26b-8ea7ddd47209-kube-api-access-9np4c\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.365303 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.378067 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "622ca141-49de-4415-a26b-8ea7ddd47209" (UID: "622ca141-49de-4415-a26b-8ea7ddd47209"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.385990 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7bb5f9bf4f-nll8n" podStartSLOduration=8.385962574 podStartE2EDuration="8.385962574s" podCreationTimestamp="2025-10-08 14:41:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:41:53.364105522 +0000 UTC m=+1138.515040609" watchObservedRunningTime="2025-10-08 14:41:53.385962574 +0000 UTC m=+1138.536897651" Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.389367 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "622ca141-49de-4415-a26b-8ea7ddd47209" (UID: "622ca141-49de-4415-a26b-8ea7ddd47209"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.409707 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85bddccb77-p452h" Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.410702 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85bddccb77-p452h" event={"ID":"622ca141-49de-4415-a26b-8ea7ddd47209","Type":"ContainerDied","Data":"03d7a23cadab62cdf73353e33c2f6f20884b4b926faab940845bd05ae618b0e0"} Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.410929 4624 scope.go:117] "RemoveContainer" containerID="ee320ab72d0a6e10d78d25d305e1b85e868ecfd9a757dc22f1a1989c9470c779" Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.420196 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "622ca141-49de-4415-a26b-8ea7ddd47209" (UID: "622ca141-49de-4415-a26b-8ea7ddd47209"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.466508 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "622ca141-49de-4415-a26b-8ea7ddd47209" (UID: "622ca141-49de-4415-a26b-8ea7ddd47209"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.468243 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.468343 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.468437 4624 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.468512 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/622ca141-49de-4415-a26b-8ea7ddd47209-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.771364 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85bddccb77-p452h"] Oct 08 14:41:53 crc kubenswrapper[4624]: I1008 14:41:53.791963 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85bddccb77-p452h"] Oct 08 14:41:54 crc kubenswrapper[4624]: I1008 14:41:54.443597 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" event={"ID":"9d94066f-cfff-4398-8d02-b47b7ed819ac","Type":"ContainerStarted","Data":"0c08e038f41782aa832f47830a6b0fe71b5e22e9408df381413d4ee178390906"} Oct 08 14:41:54 crc kubenswrapper[4624]: I1008 14:41:54.445003 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" Oct 08 14:41:54 crc kubenswrapper[4624]: I1008 14:41:54.453436 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86d56d5f7b-2jwqz" 
event={"ID":"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76","Type":"ContainerStarted","Data":"7c06635aefdafdc9274928ca3a51b5bc87729e28b5f2493d0376603b999ca4fb"} Oct 08 14:41:54 crc kubenswrapper[4624]: I1008 14:41:54.453787 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-86d56d5f7b-2jwqz" Oct 08 14:41:54 crc kubenswrapper[4624]: I1008 14:41:54.474579 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66c67d8-rjnw7" event={"ID":"12adc423-1b55-4b56-85a3-32e2aabbc82d","Type":"ContainerStarted","Data":"ae0a7846c8f6888214fca98cf26d5ab4ab5d5c4d46888a225ebc818cd464ab41"} Oct 08 14:41:54 crc kubenswrapper[4624]: I1008 14:41:54.475223 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:54 crc kubenswrapper[4624]: I1008 14:41:54.475339 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:41:54 crc kubenswrapper[4624]: I1008 14:41:54.484169 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" podStartSLOduration=10.484151929 podStartE2EDuration="10.484151929s" podCreationTimestamp="2025-10-08 14:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:41:54.474954536 +0000 UTC m=+1139.625889633" watchObservedRunningTime="2025-10-08 14:41:54.484151929 +0000 UTC m=+1139.635087006" Oct 08 14:41:54 crc kubenswrapper[4624]: I1008 14:41:54.489911 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f4595c6f6-t4nns" event={"ID":"cfd45b9a-e3a8-4f53-92bb-4e4dc0580365","Type":"ContainerStarted","Data":"99936fc1b6c6d6055eedf20aba7f710c4328ccf57633bba52f85beac94edf632"} Oct 08 14:41:54 crc kubenswrapper[4624]: I1008 14:41:54.490944 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:54 crc kubenswrapper[4624]: I1008 14:41:54.490985 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:41:54 crc kubenswrapper[4624]: I1008 14:41:54.509940 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-65f89d8d74-ng4cv" event={"ID":"8c703134-b38a-414e-8c09-5702aa32a638","Type":"ContainerStarted","Data":"c3ae7ac2942aedb377278616ed886f523bb072a1a4a6d5d20dd66b740de6f5e5"} Oct 08 14:41:54 crc kubenswrapper[4624]: I1008 14:41:54.514745 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-86d56d5f7b-2jwqz" podStartSLOduration=13.514615228 podStartE2EDuration="13.514615228s" podCreationTimestamp="2025-10-08 14:41:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:41:54.514136656 +0000 UTC m=+1139.665071733" watchObservedRunningTime="2025-10-08 14:41:54.514615228 +0000 UTC m=+1139.665550315" Oct 08 14:41:54 crc kubenswrapper[4624]: I1008 14:41:54.546777 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-66c67d8-rjnw7" podStartSLOduration=5.54675928 podStartE2EDuration="5.54675928s" podCreationTimestamp="2025-10-08 14:41:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:41:54.538787468 
+0000 UTC m=+1139.689722565" watchObservedRunningTime="2025-10-08 14:41:54.54675928 +0000 UTC m=+1139.697694357" Oct 08 14:41:54 crc kubenswrapper[4624]: I1008 14:41:54.614378 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f6cd65c74-7vqb5" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Oct 08 14:41:54 crc kubenswrapper[4624]: I1008 14:41:54.615367 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7f4595c6f6-t4nns" podStartSLOduration=9.615351541999999 podStartE2EDuration="9.615351542s" podCreationTimestamp="2025-10-08 14:41:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:41:54.6006267 +0000 UTC m=+1139.751561777" watchObservedRunningTime="2025-10-08 14:41:54.615351542 +0000 UTC m=+1139.766286619" Oct 08 14:41:54 crc kubenswrapper[4624]: I1008 14:41:54.721783 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67f45f8444-g8bbs" podUID="378be2ad-3335-409f-b2eb-60b3997ed4f8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Oct 08 14:41:55 crc kubenswrapper[4624]: I1008 14:41:55.481994 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="622ca141-49de-4415-a26b-8ea7ddd47209" path="/var/lib/kubelet/pods/622ca141-49de-4415-a26b-8ea7ddd47209/volumes" Oct 08 14:41:56 crc kubenswrapper[4624]: I1008 14:41:56.540525 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-7f4595c6f6-t4nns" podUID="cfd45b9a-e3a8-4f53-92bb-4e4dc0580365" containerName="placement-log" probeResult="failure" output="Get \"https://10.217.0.158:8778/\": dial tcp 10.217.0.158:8778: connect: connection refused" Oct 08 14:41:57 crc kubenswrapper[4624]: I1008 14:41:57.547297 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-65f89d8d74-ng4cv" Oct 08 14:41:57 crc kubenswrapper[4624]: I1008 14:41:57.570076 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-65f89d8d74-ng4cv" podStartSLOduration=8.570060441 podStartE2EDuration="8.570060441s" podCreationTimestamp="2025-10-08 14:41:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:41:57.569077056 +0000 UTC m=+1142.720012133" watchObservedRunningTime="2025-10-08 14:41:57.570060441 +0000 UTC m=+1142.720995518" Oct 08 14:42:00 crc kubenswrapper[4624]: I1008 14:42:00.076395 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:42:00 crc kubenswrapper[4624]: I1008 14:42:00.076459 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Oct 08 14:42:00 crc kubenswrapper[4624]: I1008 14:42:00.257845 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" Oct 08 14:42:00 crc kubenswrapper[4624]: I1008 14:42:00.367734 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d44547b9c-2v5ck"] Oct 08 14:42:00 crc kubenswrapper[4624]: I1008 14:42:00.368105 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d44547b9c-2v5ck" podUID="7c18acac-a79e-4dae-97d2-f81c60c2570b" containerName="dnsmasq-dns" containerID="cri-o://ca3363930d4599ac260994fc3b029fcac769028e69bdbdbda0ad3a1a9a4104ff" gracePeriod=10 Oct 08 14:42:00 crc kubenswrapper[4624]: I1008 14:42:00.378818 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bd5b645c6-bpx66" podUID="c5a6e279-4f8e-4293-868f-018f75fbba17" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 08 14:42:00 crc kubenswrapper[4624]: I1008 14:42:00.580180 4624 generic.go:334] "Generic (PLEG): container finished" podID="7c18acac-a79e-4dae-97d2-f81c60c2570b" containerID="ca3363930d4599ac260994fc3b029fcac769028e69bdbdbda0ad3a1a9a4104ff" exitCode=0 Oct 08 14:42:00 crc kubenswrapper[4624]: I1008 14:42:00.580487 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d44547b9c-2v5ck" event={"ID":"7c18acac-a79e-4dae-97d2-f81c60c2570b","Type":"ContainerDied","Data":"ca3363930d4599ac260994fc3b029fcac769028e69bdbdbda0ad3a1a9a4104ff"} Oct 08 14:42:02 crc kubenswrapper[4624]: I1008 14:42:02.362996 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7bd5b645c6-bpx66" podUID="c5a6e279-4f8e-4293-868f-018f75fbba17" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 08 14:42:02 crc kubenswrapper[4624]: I1008 14:42:02.603488 4624 generic.go:334] "Generic (PLEG): container finished" podID="265c0058-98f4-4bcd-b413-8e3633ab56cd" containerID="c8dc6fa2e321c8d676e138cd68e8b4c3dc94c62c7e7783228df72b3f3ed310bf" exitCode=0 Oct 08 14:42:02 crc kubenswrapper[4624]: I1008 14:42:02.603548 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-f6sz5" event={"ID":"265c0058-98f4-4bcd-b413-8e3633ab56cd","Type":"ContainerDied","Data":"c8dc6fa2e321c8d676e138cd68e8b4c3dc94c62c7e7783228df72b3f3ed310bf"} Oct 08 14:42:03 crc kubenswrapper[4624]: I1008 14:42:03.660594 4624 generic.go:334] "Generic (PLEG): container finished" podID="7222761d-c17c-485d-a672-75d7921fbb20" containerID="cbeef4aaa873d131e498e5fb3c1570eac0ea1b83b1be4af86ca4727e8e2f03e3" exitCode=0 Oct 08 14:42:03 crc kubenswrapper[4624]: I1008 14:42:03.660885 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-t4www" event={"ID":"7222761d-c17c-485d-a672-75d7921fbb20","Type":"ContainerDied","Data":"cbeef4aaa873d131e498e5fb3c1570eac0ea1b83b1be4af86ca4727e8e2f03e3"} Oct 08 14:42:04 crc kubenswrapper[4624]: I1008 14:42:04.228280 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:42:04 crc kubenswrapper[4624]: I1008 14:42:04.611913 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f6cd65c74-7vqb5" 
podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Oct 08 14:42:04 crc kubenswrapper[4624]: I1008 14:42:04.641992 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:42:04 crc kubenswrapper[4624]: I1008 14:42:04.722292 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67f45f8444-g8bbs" podUID="378be2ad-3335-409f-b2eb-60b3997ed4f8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Oct 08 14:42:04 crc kubenswrapper[4624]: I1008 14:42:04.920665 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:42:04 crc kubenswrapper[4624]: I1008 14:42:04.931741 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-66c67d8-rjnw7" Oct 08 14:42:05 crc kubenswrapper[4624]: I1008 14:42:05.062694 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7bd5b645c6-bpx66"] Oct 08 14:42:05 crc kubenswrapper[4624]: I1008 14:42:05.241658 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-d44547b9c-2v5ck" podUID="7c18acac-a79e-4dae-97d2-f81c60c2570b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: connect: connection refused" Oct 08 14:42:05 crc kubenswrapper[4624]: I1008 14:42:05.688871 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7bd5b645c6-bpx66" podUID="c5a6e279-4f8e-4293-868f-018f75fbba17" containerName="barbican-api-log" containerID="cri-o://9b77812b6fda45db0ff78525807b76ab969945db63ffc6443e7190549589bf18" gracePeriod=30 Oct 08 14:42:05 crc kubenswrapper[4624]: I1008 14:42:05.688983 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7bd5b645c6-bpx66" podUID="c5a6e279-4f8e-4293-868f-018f75fbba17" containerName="barbican-api" containerID="cri-o://d8647e0577baccb8fe28220110e4b0e8182ead94221e07abed0ed82362e36795" gracePeriod=30 Oct 08 14:42:05 crc kubenswrapper[4624]: I1008 14:42:05.728539 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7bd5b645c6-bpx66" podUID="c5a6e279-4f8e-4293-868f-018f75fbba17" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": EOF" Oct 08 14:42:05 crc kubenswrapper[4624]: I1008 14:42:05.728775 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7bd5b645c6-bpx66" podUID="c5a6e279-4f8e-4293-868f-018f75fbba17" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": EOF" Oct 08 14:42:06 crc kubenswrapper[4624]: I1008 14:42:06.704327 4624 generic.go:334] "Generic (PLEG): container finished" podID="c5a6e279-4f8e-4293-868f-018f75fbba17" containerID="9b77812b6fda45db0ff78525807b76ab969945db63ffc6443e7190549589bf18" exitCode=143 Oct 08 14:42:06 crc kubenswrapper[4624]: I1008 14:42:06.704651 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5b645c6-bpx66" 
event={"ID":"c5a6e279-4f8e-4293-868f-018f75fbba17","Type":"ContainerDied","Data":"9b77812b6fda45db0ff78525807b76ab969945db63ffc6443e7190549589bf18"} Oct 08 14:42:07 crc kubenswrapper[4624]: I1008 14:42:07.745553 4624 generic.go:334] "Generic (PLEG): container finished" podID="cd53e103-7b25-4f61-a0f4-675ace133ab7" containerID="bfa5c132440552840d9c1f0caba4afb6694790cb6c8ae77b3891993bf5eaadd1" exitCode=0 Oct 08 14:42:07 crc kubenswrapper[4624]: I1008 14:42:07.745614 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hv6dc" event={"ID":"cd53e103-7b25-4f61-a0f4-675ace133ab7","Type":"ContainerDied","Data":"bfa5c132440552840d9c1f0caba4afb6694790cb6c8ae77b3891993bf5eaadd1"} Oct 08 14:42:08 crc kubenswrapper[4624]: I1008 14:42:08.219924 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-66c67d8-rjnw7" podUID="12adc423-1b55-4b56-85a3-32e2aabbc82d" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.108195 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-f6sz5" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.149050 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-t4www" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.151876 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-hv6dc" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.236217 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9bz8\" (UniqueName: \"kubernetes.io/projected/265c0058-98f4-4bcd-b413-8e3633ab56cd-kube-api-access-b9bz8\") pod \"265c0058-98f4-4bcd-b413-8e3633ab56cd\" (UID: \"265c0058-98f4-4bcd-b413-8e3633ab56cd\") " Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.236563 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdrdw\" (UniqueName: \"kubernetes.io/projected/7222761d-c17c-485d-a672-75d7921fbb20-kube-api-access-vdrdw\") pod \"7222761d-c17c-485d-a672-75d7921fbb20\" (UID: \"7222761d-c17c-485d-a672-75d7921fbb20\") " Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.236646 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7222761d-c17c-485d-a672-75d7921fbb20-db-sync-config-data\") pod \"7222761d-c17c-485d-a672-75d7921fbb20\" (UID: \"7222761d-c17c-485d-a672-75d7921fbb20\") " Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.236715 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-config-data\") pod \"cd53e103-7b25-4f61-a0f4-675ace133ab7\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.236746 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7222761d-c17c-485d-a672-75d7921fbb20-config-data\") pod \"7222761d-c17c-485d-a672-75d7921fbb20\" (UID: \"7222761d-c17c-485d-a672-75d7921fbb20\") " Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.236775 4624 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/265c0058-98f4-4bcd-b413-8e3633ab56cd-config-data\") pod \"265c0058-98f4-4bcd-b413-8e3633ab56cd\" (UID: \"265c0058-98f4-4bcd-b413-8e3633ab56cd\") " Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.236800 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7222761d-c17c-485d-a672-75d7921fbb20-combined-ca-bundle\") pod \"7222761d-c17c-485d-a672-75d7921fbb20\" (UID: \"7222761d-c17c-485d-a672-75d7921fbb20\") " Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.236880 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-db-sync-config-data\") pod \"cd53e103-7b25-4f61-a0f4-675ace133ab7\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.236904 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd53e103-7b25-4f61-a0f4-675ace133ab7-etc-machine-id\") pod \"cd53e103-7b25-4f61-a0f4-675ace133ab7\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.236934 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-scripts\") pod \"cd53e103-7b25-4f61-a0f4-675ace133ab7\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.236968 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-combined-ca-bundle\") pod \"cd53e103-7b25-4f61-a0f4-675ace133ab7\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.236999 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/265c0058-98f4-4bcd-b413-8e3633ab56cd-combined-ca-bundle\") pod \"265c0058-98f4-4bcd-b413-8e3633ab56cd\" (UID: \"265c0058-98f4-4bcd-b413-8e3633ab56cd\") " Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.237058 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bqvv\" (UniqueName: \"kubernetes.io/projected/cd53e103-7b25-4f61-a0f4-675ace133ab7-kube-api-access-7bqvv\") pod \"cd53e103-7b25-4f61-a0f4-675ace133ab7\" (UID: \"cd53e103-7b25-4f61-a0f4-675ace133ab7\") " Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.238469 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd53e103-7b25-4f61-a0f4-675ace133ab7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cd53e103-7b25-4f61-a0f4-675ace133ab7" (UID: "cd53e103-7b25-4f61-a0f4-675ace133ab7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.274189 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-scripts" (OuterVolumeSpecName: "scripts") pod "cd53e103-7b25-4f61-a0f4-675ace133ab7" (UID: "cd53e103-7b25-4f61-a0f4-675ace133ab7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.274717 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cd53e103-7b25-4f61-a0f4-675ace133ab7" (UID: "cd53e103-7b25-4f61-a0f4-675ace133ab7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.281745 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7222761d-c17c-485d-a672-75d7921fbb20-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7222761d-c17c-485d-a672-75d7921fbb20" (UID: "7222761d-c17c-485d-a672-75d7921fbb20"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.281873 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd53e103-7b25-4f61-a0f4-675ace133ab7-kube-api-access-7bqvv" (OuterVolumeSpecName: "kube-api-access-7bqvv") pod "cd53e103-7b25-4f61-a0f4-675ace133ab7" (UID: "cd53e103-7b25-4f61-a0f4-675ace133ab7"). InnerVolumeSpecName "kube-api-access-7bqvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.284546 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/265c0058-98f4-4bcd-b413-8e3633ab56cd-kube-api-access-b9bz8" (OuterVolumeSpecName: "kube-api-access-b9bz8") pod "265c0058-98f4-4bcd-b413-8e3633ab56cd" (UID: "265c0058-98f4-4bcd-b413-8e3633ab56cd"). InnerVolumeSpecName "kube-api-access-b9bz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.291188 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7222761d-c17c-485d-a672-75d7921fbb20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7222761d-c17c-485d-a672-75d7921fbb20" (UID: "7222761d-c17c-485d-a672-75d7921fbb20"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.291919 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7222761d-c17c-485d-a672-75d7921fbb20-kube-api-access-vdrdw" (OuterVolumeSpecName: "kube-api-access-vdrdw") pod "7222761d-c17c-485d-a672-75d7921fbb20" (UID: "7222761d-c17c-485d-a672-75d7921fbb20"). InnerVolumeSpecName "kube-api-access-vdrdw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.340116 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bqvv\" (UniqueName: \"kubernetes.io/projected/cd53e103-7b25-4f61-a0f4-675ace133ab7-kube-api-access-7bqvv\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.340146 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9bz8\" (UniqueName: \"kubernetes.io/projected/265c0058-98f4-4bcd-b413-8e3633ab56cd-kube-api-access-b9bz8\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.340170 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdrdw\" (UniqueName: \"kubernetes.io/projected/7222761d-c17c-485d-a672-75d7921fbb20-kube-api-access-vdrdw\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.340183 4624 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7222761d-c17c-485d-a672-75d7921fbb20-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.340191 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7222761d-c17c-485d-a672-75d7921fbb20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.340200 4624 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.340208 4624 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd53e103-7b25-4f61-a0f4-675ace133ab7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.340216 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.346548 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd53e103-7b25-4f61-a0f4-675ace133ab7" (UID: "cd53e103-7b25-4f61-a0f4-675ace133ab7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.360403 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/265c0058-98f4-4bcd-b413-8e3633ab56cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "265c0058-98f4-4bcd-b413-8e3633ab56cd" (UID: "265c0058-98f4-4bcd-b413-8e3633ab56cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.371528 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7222761d-c17c-485d-a672-75d7921fbb20-config-data" (OuterVolumeSpecName: "config-data") pod "7222761d-c17c-485d-a672-75d7921fbb20" (UID: "7222761d-c17c-485d-a672-75d7921fbb20"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.384905 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-config-data" (OuterVolumeSpecName: "config-data") pod "cd53e103-7b25-4f61-a0f4-675ace133ab7" (UID: "cd53e103-7b25-4f61-a0f4-675ace133ab7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.392614 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/265c0058-98f4-4bcd-b413-8e3633ab56cd-config-data" (OuterVolumeSpecName: "config-data") pod "265c0058-98f4-4bcd-b413-8e3633ab56cd" (UID: "265c0058-98f4-4bcd-b413-8e3633ab56cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.442266 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.442308 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/265c0058-98f4-4bcd-b413-8e3633ab56cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.442319 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd53e103-7b25-4f61-a0f4-675ace133ab7-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.442327 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7222761d-c17c-485d-a672-75d7921fbb20-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.442336 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/265c0058-98f4-4bcd-b413-8e3633ab56cd-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.771370 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hv6dc" event={"ID":"cd53e103-7b25-4f61-a0f4-675ace133ab7","Type":"ContainerDied","Data":"2ae41170a3b74cf731ef31bcbd52a78309f0cfb23eee8e36f59752e0f4914529"} Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.771417 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ae41170a3b74cf731ef31bcbd52a78309f0cfb23eee8e36f59752e0f4914529" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.771489 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-hv6dc" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.783917 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-t4www" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.783907 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-t4www" event={"ID":"7222761d-c17c-485d-a672-75d7921fbb20","Type":"ContainerDied","Data":"59709d529eaba342635c2dd541195964316bd7d826ba3b19f11dbfa654de9953"} Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.784098 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59709d529eaba342635c2dd541195964316bd7d826ba3b19f11dbfa654de9953" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.787737 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-f6sz5" event={"ID":"265c0058-98f4-4bcd-b413-8e3633ab56cd","Type":"ContainerDied","Data":"2c7ec8b394d394172b1920c780714c871d854655826995719f23c86628a91c05"} Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.787781 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c7ec8b394d394172b1920c780714c871d854655826995719f23c86628a91c05" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.787842 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-f6sz5" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.823678 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d44547b9c-2v5ck" Oct 08 14:42:10 crc kubenswrapper[4624]: E1008 14:42:10.826573 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Oct 08 14:42:10 crc kubenswrapper[4624]: E1008 14:42:10.826773 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6x59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(4bf057e0-e090-498a-bb8b-32835373555c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 08 14:42:10 crc kubenswrapper[4624]: E1008 14:42:10.828426 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="4bf057e0-e090-498a-bb8b-32835373555c" Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.960348 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-dns-svc\") pod \"7c18acac-a79e-4dae-97d2-f81c60c2570b\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.960931 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh2sx\" (UniqueName: \"kubernetes.io/projected/7c18acac-a79e-4dae-97d2-f81c60c2570b-kube-api-access-mh2sx\") pod \"7c18acac-a79e-4dae-97d2-f81c60c2570b\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.961052 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-dns-swift-storage-0\") pod \"7c18acac-a79e-4dae-97d2-f81c60c2570b\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.961214 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-config\") pod \"7c18acac-a79e-4dae-97d2-f81c60c2570b\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.961344 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-ovsdbserver-nb\") pod 
\"7c18acac-a79e-4dae-97d2-f81c60c2570b\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.961618 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-ovsdbserver-sb\") pod \"7c18acac-a79e-4dae-97d2-f81c60c2570b\" (UID: \"7c18acac-a79e-4dae-97d2-f81c60c2570b\") " Oct 08 14:42:10 crc kubenswrapper[4624]: I1008 14:42:10.970863 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c18acac-a79e-4dae-97d2-f81c60c2570b-kube-api-access-mh2sx" (OuterVolumeSpecName: "kube-api-access-mh2sx") pod "7c18acac-a79e-4dae-97d2-f81c60c2570b" (UID: "7c18acac-a79e-4dae-97d2-f81c60c2570b"). InnerVolumeSpecName "kube-api-access-mh2sx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.069115 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh2sx\" (UniqueName: \"kubernetes.io/projected/7c18acac-a79e-4dae-97d2-f81c60c2570b-kube-api-access-mh2sx\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.092507 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-config" (OuterVolumeSpecName: "config") pod "7c18acac-a79e-4dae-97d2-f81c60c2570b" (UID: "7c18acac-a79e-4dae-97d2-f81c60c2570b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.102459 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7c18acac-a79e-4dae-97d2-f81c60c2570b" (UID: "7c18acac-a79e-4dae-97d2-f81c60c2570b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.117724 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7c18acac-a79e-4dae-97d2-f81c60c2570b" (UID: "7c18acac-a79e-4dae-97d2-f81c60c2570b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.127913 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7c18acac-a79e-4dae-97d2-f81c60c2570b" (UID: "7c18acac-a79e-4dae-97d2-f81c60c2570b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.141527 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7c18acac-a79e-4dae-97d2-f81c60c2570b" (UID: "7c18acac-a79e-4dae-97d2-f81c60c2570b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.170973 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.170999 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.171008 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.171018 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.171026 4624 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c18acac-a79e-4dae-97d2-f81c60c2570b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.702768 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 14:42:11 crc kubenswrapper[4624]: E1008 14:42:11.703403 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c18acac-a79e-4dae-97d2-f81c60c2570b" containerName="init" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.703419 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c18acac-a79e-4dae-97d2-f81c60c2570b" containerName="init" Oct 08 14:42:11 crc kubenswrapper[4624]: E1008 14:42:11.703427 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="265c0058-98f4-4bcd-b413-8e3633ab56cd" containerName="heat-db-sync" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.703434 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="265c0058-98f4-4bcd-b413-8e3633ab56cd" containerName="heat-db-sync" Oct 08 14:42:11 crc kubenswrapper[4624]: E1008 14:42:11.703442 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd53e103-7b25-4f61-a0f4-675ace133ab7" containerName="cinder-db-sync" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.703448 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd53e103-7b25-4f61-a0f4-675ace133ab7" containerName="cinder-db-sync" Oct 08 14:42:11 crc kubenswrapper[4624]: E1008 14:42:11.703479 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c18acac-a79e-4dae-97d2-f81c60c2570b" containerName="dnsmasq-dns" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.703485 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c18acac-a79e-4dae-97d2-f81c60c2570b" containerName="dnsmasq-dns" Oct 08 14:42:11 crc kubenswrapper[4624]: E1008 14:42:11.703495 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7222761d-c17c-485d-a672-75d7921fbb20" containerName="glance-db-sync" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.703502 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7222761d-c17c-485d-a672-75d7921fbb20" containerName="glance-db-sync" Oct 08 14:42:11 crc kubenswrapper[4624]: E1008 14:42:11.703522 4624 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="622ca141-49de-4415-a26b-8ea7ddd47209" containerName="init" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.703527 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="622ca141-49de-4415-a26b-8ea7ddd47209" containerName="init" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.703693 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd53e103-7b25-4f61-a0f4-675ace133ab7" containerName="cinder-db-sync" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.703707 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="265c0058-98f4-4bcd-b413-8e3633ab56cd" containerName="heat-db-sync" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.703721 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="622ca141-49de-4415-a26b-8ea7ddd47209" containerName="init" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.703733 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="7222761d-c17c-485d-a672-75d7921fbb20" containerName="glance-db-sync" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.703743 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c18acac-a79e-4dae-97d2-f81c60c2570b" containerName="dnsmasq-dns" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.704704 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.711117 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.717057 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.717243 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.717257 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ts69p" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.747277 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.774396 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b7d955dc-c9svl"] Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.780241 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c09025d6-b523-4828-ba54-217da2a5bfb3\") " pod="openstack/cinder-scheduler-0" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.780327 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfhsc\" (UniqueName: \"kubernetes.io/projected/c09025d6-b523-4828-ba54-217da2a5bfb3-kube-api-access-cfhsc\") pod \"cinder-scheduler-0\" (UID: \"c09025d6-b523-4828-ba54-217da2a5bfb3\") " pod="openstack/cinder-scheduler-0" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.780358 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c09025d6-b523-4828-ba54-217da2a5bfb3-etc-machine-id\") pod 
\"cinder-scheduler-0\" (UID: \"c09025d6-b523-4828-ba54-217da2a5bfb3\") " pod="openstack/cinder-scheduler-0" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.780379 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c09025d6-b523-4828-ba54-217da2a5bfb3\") " pod="openstack/cinder-scheduler-0" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.780434 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-config-data\") pod \"cinder-scheduler-0\" (UID: \"c09025d6-b523-4828-ba54-217da2a5bfb3\") " pod="openstack/cinder-scheduler-0" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.780461 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-scripts\") pod \"cinder-scheduler-0\" (UID: \"c09025d6-b523-4828-ba54-217da2a5bfb3\") " pod="openstack/cinder-scheduler-0" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.792750 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b7d955dc-c9svl" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.816962 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-db88bb74-6vl6r" event={"ID":"255b203d-1921-40ab-8c4f-7f582a647651","Type":"ContainerStarted","Data":"d19e80d4eaf372cc6157efbc286fc8976c12e6bd1d4d43da426d01faf0ac1d33"} Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.830295 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bf057e0-e090-498a-bb8b-32835373555c" containerName="ceilometer-notification-agent" containerID="cri-o://3a2a76d9a38b0933057153192f3355b4d87f0d49f14deab96b6422411aea652b" gracePeriod=30 Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.830780 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d44547b9c-2v5ck" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.831761 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d44547b9c-2v5ck" event={"ID":"7c18acac-a79e-4dae-97d2-f81c60c2570b","Type":"ContainerDied","Data":"e5493dabe7055eaaff17ec8c196fd8f2e26b803d0b15c33e7fcd5b08102028a3"} Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.831826 4624 scope.go:117] "RemoveContainer" containerID="ca3363930d4599ac260994fc3b029fcac769028e69bdbdbda0ad3a1a9a4104ff" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.831966 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bf057e0-e090-498a-bb8b-32835373555c" containerName="sg-core" containerID="cri-o://fbbfffa5d6c5d9242bd7677594dabbe544f2224ed6ed82d41105fdb2e5e97485" gracePeriod=30 Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.881986 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-ovsdbserver-nb\") pod \"dnsmasq-dns-b7d955dc-c9svl\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " pod="openstack/dnsmasq-dns-b7d955dc-c9svl" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.882054 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-ovsdbserver-sb\") pod \"dnsmasq-dns-b7d955dc-c9svl\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " pod="openstack/dnsmasq-dns-b7d955dc-c9svl" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.882085 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfhsc\" (UniqueName: \"kubernetes.io/projected/c09025d6-b523-4828-ba54-217da2a5bfb3-kube-api-access-cfhsc\") pod \"cinder-scheduler-0\" (UID: \"c09025d6-b523-4828-ba54-217da2a5bfb3\") " pod="openstack/cinder-scheduler-0" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.882117 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-dns-swift-storage-0\") pod \"dnsmasq-dns-b7d955dc-c9svl\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " pod="openstack/dnsmasq-dns-b7d955dc-c9svl" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.882148 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c09025d6-b523-4828-ba54-217da2a5bfb3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c09025d6-b523-4828-ba54-217da2a5bfb3\") " pod="openstack/cinder-scheduler-0" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.882172 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c09025d6-b523-4828-ba54-217da2a5bfb3\") " pod="openstack/cinder-scheduler-0" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.882249 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"c09025d6-b523-4828-ba54-217da2a5bfb3\") " pod="openstack/cinder-scheduler-0" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.882283 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-dns-svc\") pod \"dnsmasq-dns-b7d955dc-c9svl\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " pod="openstack/dnsmasq-dns-b7d955dc-c9svl" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.882312 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-scripts\") pod \"cinder-scheduler-0\" (UID: \"c09025d6-b523-4828-ba54-217da2a5bfb3\") " pod="openstack/cinder-scheduler-0" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.882330 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg9dr\" (UniqueName: \"kubernetes.io/projected/e96343ec-94ad-405b-a4b6-5756ec03a482-kube-api-access-kg9dr\") pod \"dnsmasq-dns-b7d955dc-c9svl\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " pod="openstack/dnsmasq-dns-b7d955dc-c9svl" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.882376 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-config\") pod \"dnsmasq-dns-b7d955dc-c9svl\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " pod="openstack/dnsmasq-dns-b7d955dc-c9svl" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.882405 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c09025d6-b523-4828-ba54-217da2a5bfb3\") " pod="openstack/cinder-scheduler-0" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.887552 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c09025d6-b523-4828-ba54-217da2a5bfb3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c09025d6-b523-4828-ba54-217da2a5bfb3\") " pod="openstack/cinder-scheduler-0" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.894797 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b7d955dc-c9svl"] Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.901619 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c09025d6-b523-4828-ba54-217da2a5bfb3\") " pod="openstack/cinder-scheduler-0" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.907849 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c09025d6-b523-4828-ba54-217da2a5bfb3\") " pod="openstack/cinder-scheduler-0" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.911115 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"c09025d6-b523-4828-ba54-217da2a5bfb3\") " pod="openstack/cinder-scheduler-0" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.912011 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-config-data\") pod \"cinder-scheduler-0\" (UID: \"c09025d6-b523-4828-ba54-217da2a5bfb3\") " pod="openstack/cinder-scheduler-0" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.914334 4624 scope.go:117] "RemoveContainer" containerID="9e4bb72326e26b2f3f8ac9531db77410c6912d4afb6dba0521132b94343c9678" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.938689 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfhsc\" (UniqueName: \"kubernetes.io/projected/c09025d6-b523-4828-ba54-217da2a5bfb3-kube-api-access-cfhsc\") pod \"cinder-scheduler-0\" (UID: \"c09025d6-b523-4828-ba54-217da2a5bfb3\") " pod="openstack/cinder-scheduler-0" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.985478 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-dns-svc\") pod \"dnsmasq-dns-b7d955dc-c9svl\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " pod="openstack/dnsmasq-dns-b7d955dc-c9svl" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.985521 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg9dr\" (UniqueName: \"kubernetes.io/projected/e96343ec-94ad-405b-a4b6-5756ec03a482-kube-api-access-kg9dr\") pod \"dnsmasq-dns-b7d955dc-c9svl\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " pod="openstack/dnsmasq-dns-b7d955dc-c9svl" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.985548 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-config\") pod \"dnsmasq-dns-b7d955dc-c9svl\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " pod="openstack/dnsmasq-dns-b7d955dc-c9svl" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.985605 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-ovsdbserver-nb\") pod \"dnsmasq-dns-b7d955dc-c9svl\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " pod="openstack/dnsmasq-dns-b7d955dc-c9svl" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.985648 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-ovsdbserver-sb\") pod \"dnsmasq-dns-b7d955dc-c9svl\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " pod="openstack/dnsmasq-dns-b7d955dc-c9svl" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.985670 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-dns-swift-storage-0\") pod \"dnsmasq-dns-b7d955dc-c9svl\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " pod="openstack/dnsmasq-dns-b7d955dc-c9svl" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.986474 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-dns-swift-storage-0\") pod 
\"dnsmasq-dns-b7d955dc-c9svl\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " pod="openstack/dnsmasq-dns-b7d955dc-c9svl" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.987132 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-dns-svc\") pod \"dnsmasq-dns-b7d955dc-c9svl\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " pod="openstack/dnsmasq-dns-b7d955dc-c9svl" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.987899 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-config\") pod \"dnsmasq-dns-b7d955dc-c9svl\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " pod="openstack/dnsmasq-dns-b7d955dc-c9svl" Oct 08 14:42:11 crc kubenswrapper[4624]: I1008 14:42:11.988381 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-ovsdbserver-nb\") pod \"dnsmasq-dns-b7d955dc-c9svl\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " pod="openstack/dnsmasq-dns-b7d955dc-c9svl" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.010902 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-ovsdbserver-sb\") pod \"dnsmasq-dns-b7d955dc-c9svl\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " pod="openstack/dnsmasq-dns-b7d955dc-c9svl" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.034037 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.099923 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d44547b9c-2v5ck"] Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.116602 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg9dr\" (UniqueName: \"kubernetes.io/projected/e96343ec-94ad-405b-a4b6-5756ec03a482-kube-api-access-kg9dr\") pod \"dnsmasq-dns-b7d955dc-c9svl\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " pod="openstack/dnsmasq-dns-b7d955dc-c9svl" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.134002 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b7d955dc-c9svl" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.145286 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d44547b9c-2v5ck"] Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.183903 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b7d955dc-c9svl"] Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.285605 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78cd4749fc-jpfh9"] Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.310506 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.335381 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-86d56d5f7b-2jwqz" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.379764 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cd4749fc-jpfh9"] Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.402658 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bn7d\" (UniqueName: \"kubernetes.io/projected/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-kube-api-access-6bn7d\") pod \"dnsmasq-dns-78cd4749fc-jpfh9\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.402711 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd4749fc-jpfh9\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.402737 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-config\") pod \"dnsmasq-dns-78cd4749fc-jpfh9\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.402784 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd4749fc-jpfh9\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.402810 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-dns-svc\") pod \"dnsmasq-dns-78cd4749fc-jpfh9\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.402836 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd4749fc-jpfh9\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.506121 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bn7d\" (UniqueName: \"kubernetes.io/projected/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-kube-api-access-6bn7d\") pod \"dnsmasq-dns-78cd4749fc-jpfh9\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.506479 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-dns-swift-storage-0\") pod 
\"dnsmasq-dns-78cd4749fc-jpfh9\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.506520 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-config\") pod \"dnsmasq-dns-78cd4749fc-jpfh9\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.506614 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd4749fc-jpfh9\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.507709 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-dns-svc\") pod \"dnsmasq-dns-78cd4749fc-jpfh9\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.507771 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd4749fc-jpfh9\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.509384 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-config\") pod \"dnsmasq-dns-78cd4749fc-jpfh9\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.510316 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd4749fc-jpfh9\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.511463 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-dns-svc\") pod \"dnsmasq-dns-78cd4749fc-jpfh9\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.511705 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd4749fc-jpfh9\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.520131 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd4749fc-jpfh9\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:42:12 
crc kubenswrapper[4624]: I1008 14:42:12.588205 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bn7d\" (UniqueName: \"kubernetes.io/projected/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-kube-api-access-6bn7d\") pod \"dnsmasq-dns-78cd4749fc-jpfh9\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.622151 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.626753 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.631060 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.640161 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.712314 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jnn4\" (UniqueName: \"kubernetes.io/projected/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-kube-api-access-6jnn4\") pod \"cinder-api-0\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " pod="openstack/cinder-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.712619 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " pod="openstack/cinder-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.712707 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " pod="openstack/cinder-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.712735 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-config-data\") pod \"cinder-api-0\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " pod="openstack/cinder-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.712766 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-config-data-custom\") pod \"cinder-api-0\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " pod="openstack/cinder-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.712800 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-scripts\") pod \"cinder-api-0\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " pod="openstack/cinder-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.712816 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-logs\") pod \"cinder-api-0\" (UID: 
\"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " pod="openstack/cinder-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.712981 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.746671 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.750662 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.760228 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.761759 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.762156 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-9j69m" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.771467 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bd5b645c6-bpx66" podUID="c5a6e279-4f8e-4293-868f-018f75fbba17" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:59184->10.217.0.157:9311: read: connection reset by peer" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.772209 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bd5b645c6-bpx66" podUID="c5a6e279-4f8e-4293-868f-018f75fbba17" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:59188->10.217.0.157:9311: read: connection reset by peer" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.819114 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-config-data\") pod \"cinder-api-0\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " pod="openstack/cinder-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.819178 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-config-data-custom\") pod \"cinder-api-0\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " pod="openstack/cinder-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.819222 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-scripts\") pod \"cinder-api-0\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " pod="openstack/cinder-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.819240 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-logs\") pod \"cinder-api-0\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " pod="openstack/cinder-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.819308 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jnn4\" (UniqueName: 
\"kubernetes.io/projected/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-kube-api-access-6jnn4\") pod \"cinder-api-0\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " pod="openstack/cinder-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.819333 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " pod="openstack/cinder-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.819396 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " pod="openstack/cinder-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.822081 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " pod="openstack/cinder-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.822320 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-logs\") pod \"cinder-api-0\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " pod="openstack/cinder-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.832521 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " pod="openstack/cinder-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.832585 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.865024 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-config-data\") pod \"cinder-api-0\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " pod="openstack/cinder-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.865588 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-config-data-custom\") pod \"cinder-api-0\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " pod="openstack/cinder-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.871245 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jnn4\" (UniqueName: \"kubernetes.io/projected/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-kube-api-access-6jnn4\") pod \"cinder-api-0\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " pod="openstack/cinder-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.873289 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-scripts\") pod \"cinder-api-0\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " pod="openstack/cinder-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 
14:42:12.921215 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg89z\" (UniqueName: \"kubernetes.io/projected/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-kube-api-access-gg89z\") pod \"glance-default-external-api-0\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.921278 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-logs\") pod \"glance-default-external-api-0\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.921345 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-config-data\") pod \"glance-default-external-api-0\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.921406 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.921440 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-scripts\") pod \"glance-default-external-api-0\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.921461 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.921514 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.923923 4624 generic.go:334] "Generic (PLEG): container finished" podID="4bf057e0-e090-498a-bb8b-32835373555c" containerID="fbbfffa5d6c5d9242bd7677594dabbe544f2224ed6ed82d41105fdb2e5e97485" exitCode=2 Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.924021 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf057e0-e090-498a-bb8b-32835373555c","Type":"ContainerDied","Data":"fbbfffa5d6c5d9242bd7677594dabbe544f2224ed6ed82d41105fdb2e5e97485"} Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.934750 4624 generic.go:334] "Generic (PLEG): container finished" podID="c5a6e279-4f8e-4293-868f-018f75fbba17" 
containerID="d8647e0577baccb8fe28220110e4b0e8182ead94221e07abed0ed82362e36795" exitCode=0 Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.934818 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5b645c6-bpx66" event={"ID":"c5a6e279-4f8e-4293-868f-018f75fbba17","Type":"ContainerDied","Data":"d8647e0577baccb8fe28220110e4b0e8182ead94221e07abed0ed82362e36795"} Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.954477 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-769b9cd88f-r425v" event={"ID":"2051dd96-3f4a-42b1-9802-602bd9693aec","Type":"ContainerStarted","Data":"3462768b53b729aa33fee68bb1575536e1656f69cd9ff3fec1a5939be0cc5d00"} Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.954535 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-769b9cd88f-r425v" event={"ID":"2051dd96-3f4a-42b1-9802-602bd9693aec","Type":"ContainerStarted","Data":"3eec7fbd3e6312edd7d112b7e1d8d70d0391652fe8b503433fa6508715f43fe9"} Oct 08 14:42:12 crc kubenswrapper[4624]: I1008 14:42:12.976696 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-db88bb74-6vl6r" event={"ID":"255b203d-1921-40ab-8c4f-7f582a647651","Type":"ContainerStarted","Data":"e4ab2d053fa2cd892297b0212f52bbe176dd57d0bf668edf9dfcc6089b54096a"} Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:12.984290 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:12.986304 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-769b9cd88f-r425v" podStartSLOduration=9.018081594 podStartE2EDuration="28.986286945s" podCreationTimestamp="2025-10-08 14:41:44 +0000 UTC" firstStartedPulling="2025-10-08 14:41:50.83296465 +0000 UTC m=+1135.983899727" lastFinishedPulling="2025-10-08 14:42:10.801170001 +0000 UTC m=+1155.952105078" observedRunningTime="2025-10-08 14:42:12.984848648 +0000 UTC m=+1158.135783725" watchObservedRunningTime="2025-10-08 14:42:12.986286945 +0000 UTC m=+1158.137222022" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.039146 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.039233 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-scripts\") pod \"glance-default-external-api-0\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.039267 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.039381 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.039542 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg89z\" (UniqueName: \"kubernetes.io/projected/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-kube-api-access-gg89z\") pod \"glance-default-external-api-0\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.039598 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-logs\") pod \"glance-default-external-api-0\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.039734 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-config-data\") pod \"glance-default-external-api-0\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.041963 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.042820 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.055905 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-logs\") pod \"glance-default-external-api-0\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.073227 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.078361 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-scripts\") pod \"glance-default-external-api-0\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.116429 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-config-data\") pod \"glance-default-external-api-0\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " 
pod="openstack/glance-default-external-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.135697 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg89z\" (UniqueName: \"kubernetes.io/projected/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-kube-api-access-gg89z\") pod \"glance-default-external-api-0\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.157683 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-db88bb74-6vl6r" podStartSLOduration=8.741221023 podStartE2EDuration="29.157659983s" podCreationTimestamp="2025-10-08 14:41:44 +0000 UTC" firstStartedPulling="2025-10-08 14:41:50.37994894 +0000 UTC m=+1135.530884017" lastFinishedPulling="2025-10-08 14:42:10.79638789 +0000 UTC m=+1155.947322977" observedRunningTime="2025-10-08 14:42:13.037401326 +0000 UTC m=+1158.188336413" watchObservedRunningTime="2025-10-08 14:42:13.157659983 +0000 UTC m=+1158.308595060" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.165808 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.210573 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.270228 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.500102 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c18acac-a79e-4dae-97d2-f81c60c2570b" path="/var/lib/kubelet/pods/7c18acac-a79e-4dae-97d2-f81c60c2570b/volumes" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.512398 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b7d955dc-c9svl"] Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.512446 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.523206 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.523311 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.526893 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.584540 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c18ac300-a93d-48e2-84fa-9cc77d32e296-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.584766 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c18ac300-a93d-48e2-84fa-9cc77d32e296-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.584959 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.585010 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c18ac300-a93d-48e2-84fa-9cc77d32e296-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.585041 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqqpb\" (UniqueName: \"kubernetes.io/projected/c18ac300-a93d-48e2-84fa-9cc77d32e296-kube-api-access-cqqpb\") pod \"glance-default-internal-api-0\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.585066 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c18ac300-a93d-48e2-84fa-9cc77d32e296-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.585109 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c18ac300-a93d-48e2-84fa-9cc77d32e296-logs\") pod \"glance-default-internal-api-0\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.688650 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c18ac300-a93d-48e2-84fa-9cc77d32e296-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.689028 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.689055 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c18ac300-a93d-48e2-84fa-9cc77d32e296-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.689074 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqqpb\" (UniqueName: \"kubernetes.io/projected/c18ac300-a93d-48e2-84fa-9cc77d32e296-kube-api-access-cqqpb\") pod \"glance-default-internal-api-0\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.689089 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c18ac300-a93d-48e2-84fa-9cc77d32e296-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.689118 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c18ac300-a93d-48e2-84fa-9cc77d32e296-logs\") pod \"glance-default-internal-api-0\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.689158 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c18ac300-a93d-48e2-84fa-9cc77d32e296-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.693049 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.695722 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c18ac300-a93d-48e2-84fa-9cc77d32e296-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.696205 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c18ac300-a93d-48e2-84fa-9cc77d32e296-logs\") pod \"glance-default-internal-api-0\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.721714 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c18ac300-a93d-48e2-84fa-9cc77d32e296-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.723330 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c18ac300-a93d-48e2-84fa-9cc77d32e296-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.724730 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c18ac300-a93d-48e2-84fa-9cc77d32e296-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.740090 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqqpb\" (UniqueName: \"kubernetes.io/projected/c18ac300-a93d-48e2-84fa-9cc77d32e296-kube-api-access-cqqpb\") pod \"glance-default-internal-api-0\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.794924 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.828776 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.857522 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.890727 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cd4749fc-jpfh9"] Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.892828 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a6e279-4f8e-4293-868f-018f75fbba17-combined-ca-bundle\") pod \"c5a6e279-4f8e-4293-868f-018f75fbba17\" (UID: \"c5a6e279-4f8e-4293-868f-018f75fbba17\") " Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.892906 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5a6e279-4f8e-4293-868f-018f75fbba17-logs\") pod \"c5a6e279-4f8e-4293-868f-018f75fbba17\" (UID: \"c5a6e279-4f8e-4293-868f-018f75fbba17\") " Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.892965 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5a6e279-4f8e-4293-868f-018f75fbba17-config-data-custom\") pod \"c5a6e279-4f8e-4293-868f-018f75fbba17\" (UID: \"c5a6e279-4f8e-4293-868f-018f75fbba17\") " Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.893026 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxzgc\" (UniqueName: \"kubernetes.io/projected/c5a6e279-4f8e-4293-868f-018f75fbba17-kube-api-access-fxzgc\") pod \"c5a6e279-4f8e-4293-868f-018f75fbba17\" (UID: \"c5a6e279-4f8e-4293-868f-018f75fbba17\") " Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.893169 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a6e279-4f8e-4293-868f-018f75fbba17-config-data\") pod \"c5a6e279-4f8e-4293-868f-018f75fbba17\" (UID: \"c5a6e279-4f8e-4293-868f-018f75fbba17\") " Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.895030 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5a6e279-4f8e-4293-868f-018f75fbba17-logs" (OuterVolumeSpecName: "logs") pod "c5a6e279-4f8e-4293-868f-018f75fbba17" (UID: "c5a6e279-4f8e-4293-868f-018f75fbba17"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.913806 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a6e279-4f8e-4293-868f-018f75fbba17-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c5a6e279-4f8e-4293-868f-018f75fbba17" (UID: "c5a6e279-4f8e-4293-868f-018f75fbba17"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:13 crc kubenswrapper[4624]: I1008 14:42:13.913882 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a6e279-4f8e-4293-868f-018f75fbba17-kube-api-access-fxzgc" (OuterVolumeSpecName: "kube-api-access-fxzgc") pod "c5a6e279-4f8e-4293-868f-018f75fbba17" (UID: "c5a6e279-4f8e-4293-868f-018f75fbba17"). InnerVolumeSpecName "kube-api-access-fxzgc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.001298 4624 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5a6e279-4f8e-4293-868f-018f75fbba17-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.001328 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxzgc\" (UniqueName: \"kubernetes.io/projected/c5a6e279-4f8e-4293-868f-018f75fbba17-kube-api-access-fxzgc\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.001338 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5a6e279-4f8e-4293-868f-018f75fbba17-logs\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.092310 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.121674 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a6e279-4f8e-4293-868f-018f75fbba17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5a6e279-4f8e-4293-868f-018f75fbba17" (UID: "c5a6e279-4f8e-4293-868f-018f75fbba17"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.162097 4624 generic.go:334] "Generic (PLEG): container finished" podID="4bf057e0-e090-498a-bb8b-32835373555c" containerID="3a2a76d9a38b0933057153192f3355b4d87f0d49f14deab96b6422411aea652b" exitCode=0 Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.162979 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf057e0-e090-498a-bb8b-32835373555c","Type":"ContainerDied","Data":"3a2a76d9a38b0933057153192f3355b4d87f0d49f14deab96b6422411aea652b"} Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.188894 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5b645c6-bpx66" event={"ID":"c5a6e279-4f8e-4293-868f-018f75fbba17","Type":"ContainerDied","Data":"b2b03e4ca5636731594d53ed44c58c2b6c3409d33665f3b50996850375e5cc53"} Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.188945 4624 scope.go:117] "RemoveContainer" containerID="d8647e0577baccb8fe28220110e4b0e8182ead94221e07abed0ed82362e36795" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.189109 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7bd5b645c6-bpx66" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.227262 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a6e279-4f8e-4293-868f-018f75fbba17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.230370 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b7d955dc-c9svl" event={"ID":"e96343ec-94ad-405b-a4b6-5756ec03a482","Type":"ContainerStarted","Data":"039ea6da60216c82ca6e79cd6908903f9472ce3ce461ebfa88341a51626e9d01"} Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.259604 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c09025d6-b523-4828-ba54-217da2a5bfb3","Type":"ContainerStarted","Data":"269fc5fb413022bf3a19c98794af1d5a5bf36d6aec361f4403ac08104bba85c5"} Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.263756 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a6e279-4f8e-4293-868f-018f75fbba17-config-data" (OuterVolumeSpecName: "config-data") pod "c5a6e279-4f8e-4293-868f-018f75fbba17" (UID: "c5a6e279-4f8e-4293-868f-018f75fbba17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.272099 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" event={"ID":"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4","Type":"ContainerStarted","Data":"9ad57c2a1dce0967540a2db5b79bff65c22ddfeea43e268438202fd963f9c0da"} Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.345702 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a6e279-4f8e-4293-868f-018f75fbba17-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.520624 4624 scope.go:117] "RemoveContainer" containerID="9b77812b6fda45db0ff78525807b76ab969945db63ffc6443e7190549589bf18" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.567726 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7bd5b645c6-bpx66"] Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.612037 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7bd5b645c6-bpx66"] Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.612743 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f6cd65c74-7vqb5" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.612809 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.613537 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"6ec33217cf79e738ccd4fb8b5dfdb26af9d5223b52e13ca9693c35de2207761b"} pod="openstack/horizon-6f6cd65c74-7vqb5" containerMessage="Container horizon failed startup probe, will be restarted" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.613572 4624 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/horizon-6f6cd65c74-7vqb5" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" containerID="cri-o://6ec33217cf79e738ccd4fb8b5dfdb26af9d5223b52e13ca9693c35de2207761b" gracePeriod=30 Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.649884 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.722791 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67f45f8444-g8bbs" podUID="378be2ad-3335-409f-b2eb-60b3997ed4f8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.722883 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.723936 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"f9f47f6fddd5455b79c7a8df01c2e3793521a20e5f22d936d7fe100acfb88684"} pod="openstack/horizon-67f45f8444-g8bbs" containerMessage="Container horizon failed startup probe, will be restarted" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.723975 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-67f45f8444-g8bbs" podUID="378be2ad-3335-409f-b2eb-60b3997ed4f8" containerName="horizon" containerID="cri-o://f9f47f6fddd5455b79c7a8df01c2e3793521a20e5f22d936d7fe100acfb88684" gracePeriod=30 Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.764159 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-sg-core-conf-yaml\") pod \"4bf057e0-e090-498a-bb8b-32835373555c\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.764290 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-combined-ca-bundle\") pod \"4bf057e0-e090-498a-bb8b-32835373555c\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.764332 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-scripts\") pod \"4bf057e0-e090-498a-bb8b-32835373555c\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.764357 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-config-data\") pod \"4bf057e0-e090-498a-bb8b-32835373555c\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.764381 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6x59\" (UniqueName: \"kubernetes.io/projected/4bf057e0-e090-498a-bb8b-32835373555c-kube-api-access-z6x59\") pod \"4bf057e0-e090-498a-bb8b-32835373555c\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.764456 4624 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf057e0-e090-498a-bb8b-32835373555c-run-httpd\") pod \"4bf057e0-e090-498a-bb8b-32835373555c\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.764493 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf057e0-e090-498a-bb8b-32835373555c-log-httpd\") pod \"4bf057e0-e090-498a-bb8b-32835373555c\" (UID: \"4bf057e0-e090-498a-bb8b-32835373555c\") " Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.766027 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf057e0-e090-498a-bb8b-32835373555c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4bf057e0-e090-498a-bb8b-32835373555c" (UID: "4bf057e0-e090-498a-bb8b-32835373555c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.768505 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf057e0-e090-498a-bb8b-32835373555c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4bf057e0-e090-498a-bb8b-32835373555c" (UID: "4bf057e0-e090-498a-bb8b-32835373555c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.824964 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf057e0-e090-498a-bb8b-32835373555c-kube-api-access-z6x59" (OuterVolumeSpecName: "kube-api-access-z6x59") pod "4bf057e0-e090-498a-bb8b-32835373555c" (UID: "4bf057e0-e090-498a-bb8b-32835373555c"). InnerVolumeSpecName "kube-api-access-z6x59". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.829190 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-scripts" (OuterVolumeSpecName: "scripts") pod "4bf057e0-e090-498a-bb8b-32835373555c" (UID: "4bf057e0-e090-498a-bb8b-32835373555c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.869467 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.869997 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6x59\" (UniqueName: \"kubernetes.io/projected/4bf057e0-e090-498a-bb8b-32835373555c-kube-api-access-z6x59\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.870136 4624 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf057e0-e090-498a-bb8b-32835373555c-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.870237 4624 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf057e0-e090-498a-bb8b-32835373555c-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.893601 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.969884 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bf057e0-e090-498a-bb8b-32835373555c" (UID: "4bf057e0-e090-498a-bb8b-32835373555c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:14 crc kubenswrapper[4624]: I1008 14:42:14.971822 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.004831 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-config-data" (OuterVolumeSpecName: "config-data") pod "4bf057e0-e090-498a-bb8b-32835373555c" (UID: "4bf057e0-e090-498a-bb8b-32835373555c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.047381 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4bf057e0-e090-498a-bb8b-32835373555c" (UID: "4bf057e0-e090-498a-bb8b-32835373555c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.074397 4624 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.074429 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf057e0-e090-498a-bb8b-32835373555c-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.260479 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-d44547b9c-2v5ck" podUID="7c18acac-a79e-4dae-97d2-f81c60c2570b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: i/o timeout" Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.393649 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.523175 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5a6e279-4f8e-4293-868f-018f75fbba17" path="/var/lib/kubelet/pods/c5a6e279-4f8e-4293-868f-018f75fbba17/volumes" Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.534475 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b7d955dc-c9svl" podUID="e96343ec-94ad-405b-a4b6-5756ec03a482" containerName="init" containerID="cri-o://ab3360f642bb913fdd43b284baff4f97abdb80db123b7d1578d0fb653b96dc9c" gracePeriod=10 Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.539721 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b7d955dc-c9svl" event={"ID":"e96343ec-94ad-405b-a4b6-5756ec03a482","Type":"ContainerStarted","Data":"ab3360f642bb913fdd43b284baff4f97abdb80db123b7d1578d0fb653b96dc9c"} Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.588888 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d","Type":"ContainerStarted","Data":"b8bfcd12f7cf69a412ac99e894f064079bb89185934be51167413a380e4e1882"} Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.635691 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d","Type":"ContainerStarted","Data":"c1745799f332517908777217327053eb50e3ef734c6254f3fcff707cd709da47"} Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.639254 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf057e0-e090-498a-bb8b-32835373555c","Type":"ContainerDied","Data":"3f722bc87a97d43428d1b70c02307aea5c320fd85a197adb2cccbbfc6e0d343e"} Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.639299 4624 scope.go:117] "RemoveContainer" containerID="fbbfffa5d6c5d9242bd7677594dabbe544f2224ed6ed82d41105fdb2e5e97485" Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.639436 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.850948 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.874407 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.888905 4624 scope.go:117] "RemoveContainer" containerID="3a2a76d9a38b0933057153192f3355b4d87f0d49f14deab96b6422411aea652b" Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.918582 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:42:15 crc kubenswrapper[4624]: E1008 14:42:15.919042 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a6e279-4f8e-4293-868f-018f75fbba17" containerName="barbican-api-log" Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.919059 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a6e279-4f8e-4293-868f-018f75fbba17" containerName="barbican-api-log" Oct 08 14:42:15 crc kubenswrapper[4624]: E1008 14:42:15.919083 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf057e0-e090-498a-bb8b-32835373555c" containerName="sg-core" Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.919094 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf057e0-e090-498a-bb8b-32835373555c" containerName="sg-core" Oct 08 14:42:15 crc kubenswrapper[4624]: E1008 14:42:15.919115 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a6e279-4f8e-4293-868f-018f75fbba17" containerName="barbican-api" Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.919123 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a6e279-4f8e-4293-868f-018f75fbba17" containerName="barbican-api" Oct 08 14:42:15 crc kubenswrapper[4624]: E1008 14:42:15.919137 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf057e0-e090-498a-bb8b-32835373555c" containerName="ceilometer-notification-agent" Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.919144 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf057e0-e090-498a-bb8b-32835373555c" containerName="ceilometer-notification-agent" Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.919383 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a6e279-4f8e-4293-868f-018f75fbba17" containerName="barbican-api-log" Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.919405 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf057e0-e090-498a-bb8b-32835373555c" containerName="ceilometer-notification-agent" Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.919426 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf057e0-e090-498a-bb8b-32835373555c" containerName="sg-core" Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.919439 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a6e279-4f8e-4293-868f-018f75fbba17" containerName="barbican-api" Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.927198 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.932092 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.932580 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 08 14:42:15 crc kubenswrapper[4624]: I1008 14:42:15.943163 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.037726 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/551042f0-9ae9-49b7-ab92-e1b775c7e742-run-httpd\") pod \"ceilometer-0\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " pod="openstack/ceilometer-0" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.037885 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-config-data\") pod \"ceilometer-0\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " pod="openstack/ceilometer-0" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.037908 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " pod="openstack/ceilometer-0" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.037952 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27bn4\" (UniqueName: \"kubernetes.io/projected/551042f0-9ae9-49b7-ab92-e1b775c7e742-kube-api-access-27bn4\") pod \"ceilometer-0\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " pod="openstack/ceilometer-0" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.038040 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-scripts\") pod \"ceilometer-0\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " pod="openstack/ceilometer-0" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.038082 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/551042f0-9ae9-49b7-ab92-e1b775c7e742-log-httpd\") pod \"ceilometer-0\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " pod="openstack/ceilometer-0" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.038104 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " pod="openstack/ceilometer-0" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.147939 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/551042f0-9ae9-49b7-ab92-e1b775c7e742-log-httpd\") pod \"ceilometer-0\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " pod="openstack/ceilometer-0" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.148299 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " pod="openstack/ceilometer-0" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.148438 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/551042f0-9ae9-49b7-ab92-e1b775c7e742-run-httpd\") pod \"ceilometer-0\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " pod="openstack/ceilometer-0" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.148498 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-config-data\") pod \"ceilometer-0\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " pod="openstack/ceilometer-0" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.148528 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " pod="openstack/ceilometer-0" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.148584 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27bn4\" (UniqueName: \"kubernetes.io/projected/551042f0-9ae9-49b7-ab92-e1b775c7e742-kube-api-access-27bn4\") pod \"ceilometer-0\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " pod="openstack/ceilometer-0" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.148616 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-scripts\") pod \"ceilometer-0\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " pod="openstack/ceilometer-0" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.148499 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/551042f0-9ae9-49b7-ab92-e1b775c7e742-log-httpd\") pod \"ceilometer-0\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " pod="openstack/ceilometer-0" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.150165 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/551042f0-9ae9-49b7-ab92-e1b775c7e742-run-httpd\") pod \"ceilometer-0\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " pod="openstack/ceilometer-0" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.154388 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-config-data\") pod \"ceilometer-0\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " pod="openstack/ceilometer-0" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.162919 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-scripts\") pod \"ceilometer-0\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " pod="openstack/ceilometer-0" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.163355 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " pod="openstack/ceilometer-0" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.171167 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " pod="openstack/ceilometer-0" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.171198 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27bn4\" (UniqueName: \"kubernetes.io/projected/551042f0-9ae9-49b7-ab92-e1b775c7e742-kube-api-access-27bn4\") pod \"ceilometer-0\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " pod="openstack/ceilometer-0" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.305917 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.827153 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b7d955dc-c9svl" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.832582 4624 generic.go:334] "Generic (PLEG): container finished" podID="e96343ec-94ad-405b-a4b6-5756ec03a482" containerID="ab3360f642bb913fdd43b284baff4f97abdb80db123b7d1578d0fb653b96dc9c" exitCode=0 Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.832663 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b7d955dc-c9svl" event={"ID":"e96343ec-94ad-405b-a4b6-5756ec03a482","Type":"ContainerDied","Data":"ab3360f642bb913fdd43b284baff4f97abdb80db123b7d1578d0fb653b96dc9c"} Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.832707 4624 scope.go:117] "RemoveContainer" containerID="ab3360f642bb913fdd43b284baff4f97abdb80db123b7d1578d0fb653b96dc9c" Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.859553 4624 generic.go:334] "Generic (PLEG): container finished" podID="c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4" containerID="6a359690dd7c0febd3d3529f0dfa2e7753110b2f2ea52fdb96dcf172b3324128" exitCode=0 Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.859613 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" event={"ID":"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4","Type":"ContainerDied","Data":"6a359690dd7c0febd3d3529f0dfa2e7753110b2f2ea52fdb96dcf172b3324128"} Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.956875 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d","Type":"ContainerStarted","Data":"6d378d32eaebd51119d466c0eeabcd4c13202c9c9bf1f8463d828cc2c701e0db"} Oct 08 14:42:16 crc kubenswrapper[4624]: I1008 14:42:16.962906 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c18ac300-a93d-48e2-84fa-9cc77d32e296","Type":"ContainerStarted","Data":"9ed91ed067d36eed67af0786025319e47a86728ff0b3f1545d1a9dcf4d1f568c"} Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.022271 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.034978 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-ovsdbserver-sb\") pod \"e96343ec-94ad-405b-a4b6-5756ec03a482\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.035051 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-dns-svc\") pod \"e96343ec-94ad-405b-a4b6-5756ec03a482\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.035124 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-ovsdbserver-nb\") pod \"e96343ec-94ad-405b-a4b6-5756ec03a482\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.035182 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg9dr\" (UniqueName: \"kubernetes.io/projected/e96343ec-94ad-405b-a4b6-5756ec03a482-kube-api-access-kg9dr\") pod \"e96343ec-94ad-405b-a4b6-5756ec03a482\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.035283 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-dns-swift-storage-0\") pod \"e96343ec-94ad-405b-a4b6-5756ec03a482\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.035350 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-config\") pod \"e96343ec-94ad-405b-a4b6-5756ec03a482\" (UID: \"e96343ec-94ad-405b-a4b6-5756ec03a482\") " Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.068265 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e96343ec-94ad-405b-a4b6-5756ec03a482-kube-api-access-kg9dr" (OuterVolumeSpecName: "kube-api-access-kg9dr") pod "e96343ec-94ad-405b-a4b6-5756ec03a482" (UID: "e96343ec-94ad-405b-a4b6-5756ec03a482"). InnerVolumeSpecName "kube-api-access-kg9dr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.109491 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e96343ec-94ad-405b-a4b6-5756ec03a482" (UID: "e96343ec-94ad-405b-a4b6-5756ec03a482"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.137438 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kg9dr\" (UniqueName: \"kubernetes.io/projected/e96343ec-94ad-405b-a4b6-5756ec03a482-kube-api-access-kg9dr\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.137476 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.271154 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e96343ec-94ad-405b-a4b6-5756ec03a482" (UID: "e96343ec-94ad-405b-a4b6-5756ec03a482"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.299705 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e96343ec-94ad-405b-a4b6-5756ec03a482" (UID: "e96343ec-94ad-405b-a4b6-5756ec03a482"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.309212 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-config" (OuterVolumeSpecName: "config") pod "e96343ec-94ad-405b-a4b6-5756ec03a482" (UID: "e96343ec-94ad-405b-a4b6-5756ec03a482"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.343169 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.343203 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.343212 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.346767 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e96343ec-94ad-405b-a4b6-5756ec03a482" (UID: "e96343ec-94ad-405b-a4b6-5756ec03a482"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.445159 4624 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e96343ec-94ad-405b-a4b6-5756ec03a482-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.544935 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf057e0-e090-498a-bb8b-32835373555c" path="/var/lib/kubelet/pods/4bf057e0-e090-498a-bb8b-32835373555c/volumes" Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.784083 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.906452 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.983854 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b7d955dc-c9svl" event={"ID":"e96343ec-94ad-405b-a4b6-5756ec03a482","Type":"ContainerDied","Data":"039ea6da60216c82ca6e79cd6908903f9472ce3ce461ebfa88341a51626e9d01"} Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.983938 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b7d955dc-c9svl" Oct 08 14:42:17 crc kubenswrapper[4624]: I1008 14:42:17.995745 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c09025d6-b523-4828-ba54-217da2a5bfb3","Type":"ContainerStarted","Data":"57788224cfc376e852cc6304d54fbbb2e770efd9c1e7f9ffeabf9113a74fa988"} Oct 08 14:42:18 crc kubenswrapper[4624]: I1008 14:42:18.009238 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d","Type":"ContainerStarted","Data":"9e26b50caf9ea88e07982961b71650fbb5827fae8d3867c0446fd7f3b2291b28"} Oct 08 14:42:18 crc kubenswrapper[4624]: I1008 14:42:18.012362 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"551042f0-9ae9-49b7-ab92-e1b775c7e742","Type":"ContainerStarted","Data":"2e00ae1f7df00362e2e5a2b0eb102635dc3c3ab72837429777876eb1bc74e929"} Oct 08 14:42:18 crc kubenswrapper[4624]: I1008 14:42:18.042391 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 08 14:42:18 crc kubenswrapper[4624]: I1008 14:42:18.075707 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b7d955dc-c9svl"] Oct 08 14:42:18 crc kubenswrapper[4624]: I1008 14:42:18.079958 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b7d955dc-c9svl"] Oct 08 14:42:18 crc kubenswrapper[4624]: I1008 14:42:18.458968 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 14:42:18 crc kubenswrapper[4624]: I1008 14:42:18.602431 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-86d56d5f7b-2jwqz"] Oct 08 14:42:18 crc kubenswrapper[4624]: I1008 14:42:18.602668 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-86d56d5f7b-2jwqz" podUID="a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76" containerName="neutron-api" containerID="cri-o://53706d2bf9975b256367a481ba8656ad7a17b93df91d9501be276ee69712fb95" gracePeriod=30 Oct 08 14:42:18 crc kubenswrapper[4624]: I1008 
14:42:18.603092 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-86d56d5f7b-2jwqz" podUID="a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76" containerName="neutron-httpd" containerID="cri-o://7c06635aefdafdc9274928ca3a51b5bc87729e28b5f2493d0376603b999ca4fb" gracePeriod=30 Oct 08 14:42:19 crc kubenswrapper[4624]: I1008 14:42:19.024614 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"551042f0-9ae9-49b7-ab92-e1b775c7e742","Type":"ContainerStarted","Data":"e1c4508a574487d5b1c5fe0490ed9d056141b31f3a9ac9e87ea1f331de030a3f"} Oct 08 14:42:19 crc kubenswrapper[4624]: I1008 14:42:19.030166 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" event={"ID":"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4","Type":"ContainerStarted","Data":"0f262eb10d189b691295cc4557a570209d66fc8288bcf014ffb360c98c24d295"} Oct 08 14:42:19 crc kubenswrapper[4624]: I1008 14:42:19.031147 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:42:19 crc kubenswrapper[4624]: I1008 14:42:19.038118 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d","Type":"ContainerStarted","Data":"8e3a945f793264fbcd85317c350babfb08fe8206c9f06cef37a46ebd63f886df"} Oct 08 14:42:19 crc kubenswrapper[4624]: I1008 14:42:19.038457 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ecf4809a-cb57-4f14-aa27-f8e3f442ce3d" containerName="glance-log" containerID="cri-o://9e26b50caf9ea88e07982961b71650fbb5827fae8d3867c0446fd7f3b2291b28" gracePeriod=30 Oct 08 14:42:19 crc kubenswrapper[4624]: I1008 14:42:19.038687 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ecf4809a-cb57-4f14-aa27-f8e3f442ce3d" containerName="glance-httpd" containerID="cri-o://8e3a945f793264fbcd85317c350babfb08fe8206c9f06cef37a46ebd63f886df" gracePeriod=30 Oct 08 14:42:19 crc kubenswrapper[4624]: I1008 14:42:19.060334 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" podStartSLOduration=7.060311539 podStartE2EDuration="7.060311539s" podCreationTimestamp="2025-10-08 14:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:42:19.057382195 +0000 UTC m=+1164.208317272" watchObservedRunningTime="2025-10-08 14:42:19.060311539 +0000 UTC m=+1164.211246616" Oct 08 14:42:19 crc kubenswrapper[4624]: I1008 14:42:19.100386 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.10036691 podStartE2EDuration="8.10036691s" podCreationTimestamp="2025-10-08 14:42:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:42:19.098240547 +0000 UTC m=+1164.249175624" watchObservedRunningTime="2025-10-08 14:42:19.10036691 +0000 UTC m=+1164.251301977" Oct 08 14:42:19 crc kubenswrapper[4624]: I1008 14:42:19.485332 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e96343ec-94ad-405b-a4b6-5756ec03a482" path="/var/lib/kubelet/pods/e96343ec-94ad-405b-a4b6-5756ec03a482/volumes" Oct 08 14:42:19 crc 
kubenswrapper[4624]: I1008 14:42:19.802404 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 14:42:19 crc kubenswrapper[4624]: I1008 14:42:19.920271 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg89z\" (UniqueName: \"kubernetes.io/projected/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-kube-api-access-gg89z\") pod \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " Oct 08 14:42:19 crc kubenswrapper[4624]: I1008 14:42:19.933594 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-config-data\") pod \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " Oct 08 14:42:19 crc kubenswrapper[4624]: I1008 14:42:19.933847 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-httpd-run\") pod \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " Oct 08 14:42:19 crc kubenswrapper[4624]: I1008 14:42:19.933920 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-scripts\") pod \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " Oct 08 14:42:19 crc kubenswrapper[4624]: I1008 14:42:19.933986 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " Oct 08 14:42:19 crc kubenswrapper[4624]: I1008 14:42:19.934089 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-logs\") pod \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " Oct 08 14:42:19 crc kubenswrapper[4624]: I1008 14:42:19.934192 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-combined-ca-bundle\") pod \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " Oct 08 14:42:19 crc kubenswrapper[4624]: I1008 14:42:19.944841 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-kube-api-access-gg89z" (OuterVolumeSpecName: "kube-api-access-gg89z") pod "ecf4809a-cb57-4f14-aa27-f8e3f442ce3d" (UID: "ecf4809a-cb57-4f14-aa27-f8e3f442ce3d"). InnerVolumeSpecName "kube-api-access-gg89z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:42:19 crc kubenswrapper[4624]: I1008 14:42:19.946426 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ecf4809a-cb57-4f14-aa27-f8e3f442ce3d" (UID: "ecf4809a-cb57-4f14-aa27-f8e3f442ce3d"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:42:19 crc kubenswrapper[4624]: I1008 14:42:19.952777 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-scripts" (OuterVolumeSpecName: "scripts") pod "ecf4809a-cb57-4f14-aa27-f8e3f442ce3d" (UID: "ecf4809a-cb57-4f14-aa27-f8e3f442ce3d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:19 crc kubenswrapper[4624]: I1008 14:42:19.953091 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-logs" (OuterVolumeSpecName: "logs") pod "ecf4809a-cb57-4f14-aa27-f8e3f442ce3d" (UID: "ecf4809a-cb57-4f14-aa27-f8e3f442ce3d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:42:19 crc kubenswrapper[4624]: I1008 14:42:19.963370 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "ecf4809a-cb57-4f14-aa27-f8e3f442ce3d" (UID: "ecf4809a-cb57-4f14-aa27-f8e3f442ce3d"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.002017 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecf4809a-cb57-4f14-aa27-f8e3f442ce3d" (UID: "ecf4809a-cb57-4f14-aa27-f8e3f442ce3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.039151 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-config-data" (OuterVolumeSpecName: "config-data") pod "ecf4809a-cb57-4f14-aa27-f8e3f442ce3d" (UID: "ecf4809a-cb57-4f14-aa27-f8e3f442ce3d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.039434 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-config-data\") pod \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\" (UID: \"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d\") " Oct 08 14:42:20 crc kubenswrapper[4624]: W1008 14:42:20.039573 4624 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d/volumes/kubernetes.io~secret/config-data Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.039588 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-config-data" (OuterVolumeSpecName: "config-data") pod "ecf4809a-cb57-4f14-aa27-f8e3f442ce3d" (UID: "ecf4809a-cb57-4f14-aa27-f8e3f442ce3d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.040087 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg89z\" (UniqueName: \"kubernetes.io/projected/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-kube-api-access-gg89z\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.040109 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.040120 4624 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.040130 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.040162 4624 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.040174 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-logs\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.040184 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.075741 4624 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.084999 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c09025d6-b523-4828-ba54-217da2a5bfb3","Type":"ContainerStarted","Data":"7ece144ab0d4935be29267e22c484c9b28ee1f11f24c3af44bf004228fea9832"} Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.103108 4624 generic.go:334] "Generic (PLEG): container finished" podID="ecf4809a-cb57-4f14-aa27-f8e3f442ce3d" containerID="8e3a945f793264fbcd85317c350babfb08fe8206c9f06cef37a46ebd63f886df" exitCode=143 Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.103172 4624 generic.go:334] "Generic (PLEG): container finished" podID="ecf4809a-cb57-4f14-aa27-f8e3f442ce3d" containerID="9e26b50caf9ea88e07982961b71650fbb5827fae8d3867c0446fd7f3b2291b28" exitCode=143 Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.103247 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d","Type":"ContainerDied","Data":"8e3a945f793264fbcd85317c350babfb08fe8206c9f06cef37a46ebd63f886df"} Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.103275 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d","Type":"ContainerDied","Data":"9e26b50caf9ea88e07982961b71650fbb5827fae8d3867c0446fd7f3b2291b28"} 
Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.103300 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ecf4809a-cb57-4f14-aa27-f8e3f442ce3d","Type":"ContainerDied","Data":"b8bfcd12f7cf69a412ac99e894f064079bb89185934be51167413a380e4e1882"} Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.103318 4624 scope.go:117] "RemoveContainer" containerID="8e3a945f793264fbcd85317c350babfb08fe8206c9f06cef37a46ebd63f886df" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.103495 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.126327 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d","Type":"ContainerStarted","Data":"1698c7377fa9ccb4f05e7e2864f195fac7bcc5ba7a110668653085a4c25ffb5b"} Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.126519 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a809cee7-9bc9-4fd6-a20d-3e1561a1df5d" containerName="cinder-api-log" containerID="cri-o://6d378d32eaebd51119d466c0eeabcd4c13202c9c9bf1f8463d828cc2c701e0db" gracePeriod=30 Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.126584 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a809cee7-9bc9-4fd6-a20d-3e1561a1df5d" containerName="cinder-api" containerID="cri-o://1698c7377fa9ccb4f05e7e2864f195fac7bcc5ba7a110668653085a4c25ffb5b" gracePeriod=30 Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.126691 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.143959 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c18ac300-a93d-48e2-84fa-9cc77d32e296","Type":"ContainerStarted","Data":"88974608d65121cb8aa49351add086d39b082281b03bf51e5fba5ce8f32d23d0"} Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.144841 4624 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.150199 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=8.527459206 podStartE2EDuration="9.150173363s" podCreationTimestamp="2025-10-08 14:42:11 +0000 UTC" firstStartedPulling="2025-10-08 14:42:13.250065425 +0000 UTC m=+1158.401000502" lastFinishedPulling="2025-10-08 14:42:13.872779582 +0000 UTC m=+1159.023714659" observedRunningTime="2025-10-08 14:42:20.112226504 +0000 UTC m=+1165.263161581" watchObservedRunningTime="2025-10-08 14:42:20.150173363 +0000 UTC m=+1165.301108440" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.174389 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"551042f0-9ae9-49b7-ab92-e1b775c7e742","Type":"ContainerStarted","Data":"790808c62c0ac26e00901261625803bad40fd27dbf3fe9381129a5ef33c87813"} Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.189389 4624 scope.go:117] "RemoveContainer" containerID="9e26b50caf9ea88e07982961b71650fbb5827fae8d3867c0446fd7f3b2291b28" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.196706 4624 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.221122 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.240334 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 14:42:20 crc kubenswrapper[4624]: E1008 14:42:20.240901 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf4809a-cb57-4f14-aa27-f8e3f442ce3d" containerName="glance-log" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.240925 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf4809a-cb57-4f14-aa27-f8e3f442ce3d" containerName="glance-log" Oct 08 14:42:20 crc kubenswrapper[4624]: E1008 14:42:20.240949 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e96343ec-94ad-405b-a4b6-5756ec03a482" containerName="init" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.240958 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e96343ec-94ad-405b-a4b6-5756ec03a482" containerName="init" Oct 08 14:42:20 crc kubenswrapper[4624]: E1008 14:42:20.240991 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf4809a-cb57-4f14-aa27-f8e3f442ce3d" containerName="glance-httpd" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.241000 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf4809a-cb57-4f14-aa27-f8e3f442ce3d" containerName="glance-httpd" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.241230 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e96343ec-94ad-405b-a4b6-5756ec03a482" containerName="init" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.241247 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecf4809a-cb57-4f14-aa27-f8e3f442ce3d" containerName="glance-httpd" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.241262 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecf4809a-cb57-4f14-aa27-f8e3f442ce3d" containerName="glance-log" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.242408 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.246163 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.246427 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.305806 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.326302 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=8.32627632 podStartE2EDuration="8.32627632s" podCreationTimestamp="2025-10-08 14:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:42:20.266811318 +0000 UTC m=+1165.417746395" watchObservedRunningTime="2025-10-08 14:42:20.32627632 +0000 UTC m=+1165.477211397" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.349596 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.349711 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-config-data\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.349783 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5a01279-9918-4adc-a38f-f102fd394ca0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.349802 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5a01279-9918-4adc-a38f-f102fd394ca0-logs\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.349838 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.349866 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 
14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.349893 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bs8w\" (UniqueName: \"kubernetes.io/projected/d5a01279-9918-4adc-a38f-f102fd394ca0-kube-api-access-6bs8w\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.349927 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-scripts\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.405885 4624 scope.go:117] "RemoveContainer" containerID="8e3a945f793264fbcd85317c350babfb08fe8206c9f06cef37a46ebd63f886df" Oct 08 14:42:20 crc kubenswrapper[4624]: E1008 14:42:20.406862 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e3a945f793264fbcd85317c350babfb08fe8206c9f06cef37a46ebd63f886df\": container with ID starting with 8e3a945f793264fbcd85317c350babfb08fe8206c9f06cef37a46ebd63f886df not found: ID does not exist" containerID="8e3a945f793264fbcd85317c350babfb08fe8206c9f06cef37a46ebd63f886df" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.406915 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e3a945f793264fbcd85317c350babfb08fe8206c9f06cef37a46ebd63f886df"} err="failed to get container status \"8e3a945f793264fbcd85317c350babfb08fe8206c9f06cef37a46ebd63f886df\": rpc error: code = NotFound desc = could not find container \"8e3a945f793264fbcd85317c350babfb08fe8206c9f06cef37a46ebd63f886df\": container with ID starting with 8e3a945f793264fbcd85317c350babfb08fe8206c9f06cef37a46ebd63f886df not found: ID does not exist" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.406945 4624 scope.go:117] "RemoveContainer" containerID="9e26b50caf9ea88e07982961b71650fbb5827fae8d3867c0446fd7f3b2291b28" Oct 08 14:42:20 crc kubenswrapper[4624]: E1008 14:42:20.409911 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e26b50caf9ea88e07982961b71650fbb5827fae8d3867c0446fd7f3b2291b28\": container with ID starting with 9e26b50caf9ea88e07982961b71650fbb5827fae8d3867c0446fd7f3b2291b28 not found: ID does not exist" containerID="9e26b50caf9ea88e07982961b71650fbb5827fae8d3867c0446fd7f3b2291b28" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.409952 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e26b50caf9ea88e07982961b71650fbb5827fae8d3867c0446fd7f3b2291b28"} err="failed to get container status \"9e26b50caf9ea88e07982961b71650fbb5827fae8d3867c0446fd7f3b2291b28\": rpc error: code = NotFound desc = could not find container \"9e26b50caf9ea88e07982961b71650fbb5827fae8d3867c0446fd7f3b2291b28\": container with ID starting with 9e26b50caf9ea88e07982961b71650fbb5827fae8d3867c0446fd7f3b2291b28 not found: ID does not exist" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.409976 4624 scope.go:117] "RemoveContainer" containerID="8e3a945f793264fbcd85317c350babfb08fe8206c9f06cef37a46ebd63f886df" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.414099 4624 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e3a945f793264fbcd85317c350babfb08fe8206c9f06cef37a46ebd63f886df"} err="failed to get container status \"8e3a945f793264fbcd85317c350babfb08fe8206c9f06cef37a46ebd63f886df\": rpc error: code = NotFound desc = could not find container \"8e3a945f793264fbcd85317c350babfb08fe8206c9f06cef37a46ebd63f886df\": container with ID starting with 8e3a945f793264fbcd85317c350babfb08fe8206c9f06cef37a46ebd63f886df not found: ID does not exist" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.414137 4624 scope.go:117] "RemoveContainer" containerID="9e26b50caf9ea88e07982961b71650fbb5827fae8d3867c0446fd7f3b2291b28" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.416514 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e26b50caf9ea88e07982961b71650fbb5827fae8d3867c0446fd7f3b2291b28"} err="failed to get container status \"9e26b50caf9ea88e07982961b71650fbb5827fae8d3867c0446fd7f3b2291b28\": rpc error: code = NotFound desc = could not find container \"9e26b50caf9ea88e07982961b71650fbb5827fae8d3867c0446fd7f3b2291b28\": container with ID starting with 9e26b50caf9ea88e07982961b71650fbb5827fae8d3867c0446fd7f3b2291b28 not found: ID does not exist" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.453787 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bs8w\" (UniqueName: \"kubernetes.io/projected/d5a01279-9918-4adc-a38f-f102fd394ca0-kube-api-access-6bs8w\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.453868 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-scripts\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.453928 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.454021 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-config-data\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.454102 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5a01279-9918-4adc-a38f-f102fd394ca0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.454129 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5a01279-9918-4adc-a38f-f102fd394ca0-logs\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " 
pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.454173 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.454201 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.459459 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5a01279-9918-4adc-a38f-f102fd394ca0-logs\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.459725 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5a01279-9918-4adc-a38f-f102fd394ca0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.459781 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.471044 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-config-data\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.478219 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-scripts\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.479737 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.479951 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bs8w\" (UniqueName: \"kubernetes.io/projected/d5a01279-9918-4adc-a38f-f102fd394ca0-kube-api-access-6bs8w\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 
14:42:20.488515 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.507539 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " pod="openstack/glance-default-external-api-0" Oct 08 14:42:20 crc kubenswrapper[4624]: I1008 14:42:20.679088 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 14:42:21 crc kubenswrapper[4624]: I1008 14:42:21.180443 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"551042f0-9ae9-49b7-ab92-e1b775c7e742","Type":"ContainerStarted","Data":"56a00c1abe8cde54f32d7df775489195aaf76bd6db06b4dcf0aadeaf89658409"} Oct 08 14:42:21 crc kubenswrapper[4624]: I1008 14:42:21.182202 4624 generic.go:334] "Generic (PLEG): container finished" podID="a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76" containerID="7c06635aefdafdc9274928ca3a51b5bc87729e28b5f2493d0376603b999ca4fb" exitCode=0 Oct 08 14:42:21 crc kubenswrapper[4624]: I1008 14:42:21.182259 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86d56d5f7b-2jwqz" event={"ID":"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76","Type":"ContainerDied","Data":"7c06635aefdafdc9274928ca3a51b5bc87729e28b5f2493d0376603b999ca4fb"} Oct 08 14:42:21 crc kubenswrapper[4624]: I1008 14:42:21.184659 4624 generic.go:334] "Generic (PLEG): container finished" podID="a809cee7-9bc9-4fd6-a20d-3e1561a1df5d" containerID="6d378d32eaebd51119d466c0eeabcd4c13202c9c9bf1f8463d828cc2c701e0db" exitCode=143 Oct 08 14:42:21 crc kubenswrapper[4624]: I1008 14:42:21.184708 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d","Type":"ContainerDied","Data":"6d378d32eaebd51119d466c0eeabcd4c13202c9c9bf1f8463d828cc2c701e0db"} Oct 08 14:42:21 crc kubenswrapper[4624]: I1008 14:42:21.186178 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c18ac300-a93d-48e2-84fa-9cc77d32e296","Type":"ContainerStarted","Data":"8d96f572e488560e206e5e3c56c3610c26306796b3466d50de08d0db2f136ad2"} Oct 08 14:42:21 crc kubenswrapper[4624]: I1008 14:42:21.186556 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c18ac300-a93d-48e2-84fa-9cc77d32e296" containerName="glance-log" containerID="cri-o://88974608d65121cb8aa49351add086d39b082281b03bf51e5fba5ce8f32d23d0" gracePeriod=30 Oct 08 14:42:21 crc kubenswrapper[4624]: I1008 14:42:21.186620 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c18ac300-a93d-48e2-84fa-9cc77d32e296" containerName="glance-httpd" containerID="cri-o://8d96f572e488560e206e5e3c56c3610c26306796b3466d50de08d0db2f136ad2" gracePeriod=30 Oct 08 14:42:21 crc kubenswrapper[4624]: I1008 14:42:21.214279 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.214258705 
podStartE2EDuration="9.214258705s" podCreationTimestamp="2025-10-08 14:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:42:21.208095299 +0000 UTC m=+1166.359030376" watchObservedRunningTime="2025-10-08 14:42:21.214258705 +0000 UTC m=+1166.365193782" Oct 08 14:42:21 crc kubenswrapper[4624]: I1008 14:42:21.446894 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 14:42:21 crc kubenswrapper[4624]: W1008 14:42:21.490755 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5a01279_9918_4adc_a38f_f102fd394ca0.slice/crio-46375bf0864095c9a47fd6fde417fe0dd91ea7551d39bb53571de528bf053830 WatchSource:0}: Error finding container 46375bf0864095c9a47fd6fde417fe0dd91ea7551d39bb53571de528bf053830: Status 404 returned error can't find the container with id 46375bf0864095c9a47fd6fde417fe0dd91ea7551d39bb53571de528bf053830 Oct 08 14:42:21 crc kubenswrapper[4624]: I1008 14:42:21.493157 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecf4809a-cb57-4f14-aa27-f8e3f442ce3d" path="/var/lib/kubelet/pods/ecf4809a-cb57-4f14-aa27-f8e3f442ce3d/volumes" Oct 08 14:42:21 crc kubenswrapper[4624]: I1008 14:42:21.693404 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:42:21 crc kubenswrapper[4624]: I1008 14:42:21.693816 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7f4595c6f6-t4nns" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.036946 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.039596 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="c09025d6-b523-4828-ba54-217da2a5bfb3" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.162:8080/\": dial tcp 10.217.0.162:8080: connect: connection refused" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.103268 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.212244 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-etc-machine-id\") pod \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.212322 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-combined-ca-bundle\") pod \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.212500 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jnn4\" (UniqueName: \"kubernetes.io/projected/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-kube-api-access-6jnn4\") pod \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.212555 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-config-data-custom\") pod \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.212578 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-scripts\") pod \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.212682 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-logs\") pod \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.212736 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-config-data\") pod \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\" (UID: \"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d\") " Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.222576 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-logs" (OuterVolumeSpecName: "logs") pod "a809cee7-9bc9-4fd6-a20d-3e1561a1df5d" (UID: "a809cee7-9bc9-4fd6-a20d-3e1561a1df5d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.222660 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a809cee7-9bc9-4fd6-a20d-3e1561a1df5d" (UID: "a809cee7-9bc9-4fd6-a20d-3e1561a1df5d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.234651 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.242879 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a809cee7-9bc9-4fd6-a20d-3e1561a1df5d" (UID: "a809cee7-9bc9-4fd6-a20d-3e1561a1df5d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.261787 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-scripts" (OuterVolumeSpecName: "scripts") pod "a809cee7-9bc9-4fd6-a20d-3e1561a1df5d" (UID: "a809cee7-9bc9-4fd6-a20d-3e1561a1df5d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.262888 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-kube-api-access-6jnn4" (OuterVolumeSpecName: "kube-api-access-6jnn4") pod "a809cee7-9bc9-4fd6-a20d-3e1561a1df5d" (UID: "a809cee7-9bc9-4fd6-a20d-3e1561a1df5d"). InnerVolumeSpecName "kube-api-access-6jnn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.329503 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c18ac300-a93d-48e2-84fa-9cc77d32e296-config-data\") pod \"c18ac300-a93d-48e2-84fa-9cc77d32e296\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.329572 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c18ac300-a93d-48e2-84fa-9cc77d32e296-combined-ca-bundle\") pod \"c18ac300-a93d-48e2-84fa-9cc77d32e296\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.329603 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"c18ac300-a93d-48e2-84fa-9cc77d32e296\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.330477 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c18ac300-a93d-48e2-84fa-9cc77d32e296-logs\") pod \"c18ac300-a93d-48e2-84fa-9cc77d32e296\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.330549 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c18ac300-a93d-48e2-84fa-9cc77d32e296-httpd-run\") pod \"c18ac300-a93d-48e2-84fa-9cc77d32e296\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.330581 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqqpb\" (UniqueName: \"kubernetes.io/projected/c18ac300-a93d-48e2-84fa-9cc77d32e296-kube-api-access-cqqpb\") pod \"c18ac300-a93d-48e2-84fa-9cc77d32e296\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.330659 4624 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c18ac300-a93d-48e2-84fa-9cc77d32e296-scripts\") pod \"c18ac300-a93d-48e2-84fa-9cc77d32e296\" (UID: \"c18ac300-a93d-48e2-84fa-9cc77d32e296\") " Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.331124 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jnn4\" (UniqueName: \"kubernetes.io/projected/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-kube-api-access-6jnn4\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.331144 4624 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.331155 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.331166 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-logs\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.331175 4624 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.335859 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5a01279-9918-4adc-a38f-f102fd394ca0","Type":"ContainerStarted","Data":"46375bf0864095c9a47fd6fde417fe0dd91ea7551d39bb53571de528bf053830"} Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.342946 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c18ac300-a93d-48e2-84fa-9cc77d32e296-logs" (OuterVolumeSpecName: "logs") pod "c18ac300-a93d-48e2-84fa-9cc77d32e296" (UID: "c18ac300-a93d-48e2-84fa-9cc77d32e296"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.343902 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c18ac300-a93d-48e2-84fa-9cc77d32e296-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c18ac300-a93d-48e2-84fa-9cc77d32e296" (UID: "c18ac300-a93d-48e2-84fa-9cc77d32e296"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.346841 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c18ac300-a93d-48e2-84fa-9cc77d32e296-scripts" (OuterVolumeSpecName: "scripts") pod "c18ac300-a93d-48e2-84fa-9cc77d32e296" (UID: "c18ac300-a93d-48e2-84fa-9cc77d32e296"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.371832 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "c18ac300-a93d-48e2-84fa-9cc77d32e296" (UID: "c18ac300-a93d-48e2-84fa-9cc77d32e296"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.411062 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c18ac300-a93d-48e2-84fa-9cc77d32e296-kube-api-access-cqqpb" (OuterVolumeSpecName: "kube-api-access-cqqpb") pod "c18ac300-a93d-48e2-84fa-9cc77d32e296" (UID: "c18ac300-a93d-48e2-84fa-9cc77d32e296"). InnerVolumeSpecName "kube-api-access-cqqpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.433849 4624 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.433897 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c18ac300-a93d-48e2-84fa-9cc77d32e296-logs\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.433910 4624 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c18ac300-a93d-48e2-84fa-9cc77d32e296-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.433922 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqqpb\" (UniqueName: \"kubernetes.io/projected/c18ac300-a93d-48e2-84fa-9cc77d32e296-kube-api-access-cqqpb\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.433936 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c18ac300-a93d-48e2-84fa-9cc77d32e296-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.456598 4624 generic.go:334] "Generic (PLEG): container finished" podID="a809cee7-9bc9-4fd6-a20d-3e1561a1df5d" containerID="1698c7377fa9ccb4f05e7e2864f195fac7bcc5ba7a110668653085a4c25ffb5b" exitCode=0 Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.456701 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d","Type":"ContainerDied","Data":"1698c7377fa9ccb4f05e7e2864f195fac7bcc5ba7a110668653085a4c25ffb5b"} Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.456736 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a809cee7-9bc9-4fd6-a20d-3e1561a1df5d","Type":"ContainerDied","Data":"c1745799f332517908777217327053eb50e3ef734c6254f3fcff707cd709da47"} Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.456753 4624 scope.go:117] "RemoveContainer" containerID="1698c7377fa9ccb4f05e7e2864f195fac7bcc5ba7a110668653085a4c25ffb5b" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.456891 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.479832 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a809cee7-9bc9-4fd6-a20d-3e1561a1df5d" (UID: "a809cee7-9bc9-4fd6-a20d-3e1561a1df5d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.508127 4624 generic.go:334] "Generic (PLEG): container finished" podID="c18ac300-a93d-48e2-84fa-9cc77d32e296" containerID="8d96f572e488560e206e5e3c56c3610c26306796b3466d50de08d0db2f136ad2" exitCode=143 Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.508158 4624 generic.go:334] "Generic (PLEG): container finished" podID="c18ac300-a93d-48e2-84fa-9cc77d32e296" containerID="88974608d65121cb8aa49351add086d39b082281b03bf51e5fba5ce8f32d23d0" exitCode=143 Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.509072 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.509517 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c18ac300-a93d-48e2-84fa-9cc77d32e296","Type":"ContainerDied","Data":"8d96f572e488560e206e5e3c56c3610c26306796b3466d50de08d0db2f136ad2"} Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.509552 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c18ac300-a93d-48e2-84fa-9cc77d32e296","Type":"ContainerDied","Data":"88974608d65121cb8aa49351add086d39b082281b03bf51e5fba5ce8f32d23d0"} Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.509567 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c18ac300-a93d-48e2-84fa-9cc77d32e296","Type":"ContainerDied","Data":"9ed91ed067d36eed67af0786025319e47a86728ff0b3f1545d1a9dcf4d1f568c"} Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.546541 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.555767 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-config-data" (OuterVolumeSpecName: "config-data") pod "a809cee7-9bc9-4fd6-a20d-3e1561a1df5d" (UID: "a809cee7-9bc9-4fd6-a20d-3e1561a1df5d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.589919 4624 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.598203 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c18ac300-a93d-48e2-84fa-9cc77d32e296-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c18ac300-a93d-48e2-84fa-9cc77d32e296" (UID: "c18ac300-a93d-48e2-84fa-9cc77d32e296"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.637151 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c18ac300-a93d-48e2-84fa-9cc77d32e296-config-data" (OuterVolumeSpecName: "config-data") pod "c18ac300-a93d-48e2-84fa-9cc77d32e296" (UID: "c18ac300-a93d-48e2-84fa-9cc77d32e296"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.648370 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.648399 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c18ac300-a93d-48e2-84fa-9cc77d32e296-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.648407 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c18ac300-a93d-48e2-84fa-9cc77d32e296-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.648418 4624 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.658038 4624 scope.go:117] "RemoveContainer" containerID="6d378d32eaebd51119d466c0eeabcd4c13202c9c9bf1f8463d828cc2c701e0db" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.761811 4624 scope.go:117] "RemoveContainer" containerID="1698c7377fa9ccb4f05e7e2864f195fac7bcc5ba7a110668653085a4c25ffb5b" Oct 08 14:42:22 crc kubenswrapper[4624]: E1008 14:42:22.763798 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1698c7377fa9ccb4f05e7e2864f195fac7bcc5ba7a110668653085a4c25ffb5b\": container with ID starting with 1698c7377fa9ccb4f05e7e2864f195fac7bcc5ba7a110668653085a4c25ffb5b not found: ID does not exist" containerID="1698c7377fa9ccb4f05e7e2864f195fac7bcc5ba7a110668653085a4c25ffb5b" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.763835 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1698c7377fa9ccb4f05e7e2864f195fac7bcc5ba7a110668653085a4c25ffb5b"} err="failed to get container status \"1698c7377fa9ccb4f05e7e2864f195fac7bcc5ba7a110668653085a4c25ffb5b\": rpc error: code = NotFound desc = could not find container \"1698c7377fa9ccb4f05e7e2864f195fac7bcc5ba7a110668653085a4c25ffb5b\": container with ID starting with 1698c7377fa9ccb4f05e7e2864f195fac7bcc5ba7a110668653085a4c25ffb5b not found: ID does not exist" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.763858 4624 scope.go:117] "RemoveContainer" containerID="6d378d32eaebd51119d466c0eeabcd4c13202c9c9bf1f8463d828cc2c701e0db" Oct 08 14:42:22 crc kubenswrapper[4624]: E1008 14:42:22.765241 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d378d32eaebd51119d466c0eeabcd4c13202c9c9bf1f8463d828cc2c701e0db\": container with ID starting with 6d378d32eaebd51119d466c0eeabcd4c13202c9c9bf1f8463d828cc2c701e0db not found: ID does not exist" containerID="6d378d32eaebd51119d466c0eeabcd4c13202c9c9bf1f8463d828cc2c701e0db" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.765261 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d378d32eaebd51119d466c0eeabcd4c13202c9c9bf1f8463d828cc2c701e0db"} err="failed to get container status \"6d378d32eaebd51119d466c0eeabcd4c13202c9c9bf1f8463d828cc2c701e0db\": rpc error: code = NotFound desc = could not find container 
\"6d378d32eaebd51119d466c0eeabcd4c13202c9c9bf1f8463d828cc2c701e0db\": container with ID starting with 6d378d32eaebd51119d466c0eeabcd4c13202c9c9bf1f8463d828cc2c701e0db not found: ID does not exist" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.765285 4624 scope.go:117] "RemoveContainer" containerID="8d96f572e488560e206e5e3c56c3610c26306796b3466d50de08d0db2f136ad2" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.832795 4624 scope.go:117] "RemoveContainer" containerID="88974608d65121cb8aa49351add086d39b082281b03bf51e5fba5ce8f32d23d0" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.868678 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.910285 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.920514 4624 scope.go:117] "RemoveContainer" containerID="8d96f572e488560e206e5e3c56c3610c26306796b3466d50de08d0db2f136ad2" Oct 08 14:42:22 crc kubenswrapper[4624]: E1008 14:42:22.921311 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d96f572e488560e206e5e3c56c3610c26306796b3466d50de08d0db2f136ad2\": container with ID starting with 8d96f572e488560e206e5e3c56c3610c26306796b3466d50de08d0db2f136ad2 not found: ID does not exist" containerID="8d96f572e488560e206e5e3c56c3610c26306796b3466d50de08d0db2f136ad2" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.921353 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d96f572e488560e206e5e3c56c3610c26306796b3466d50de08d0db2f136ad2"} err="failed to get container status \"8d96f572e488560e206e5e3c56c3610c26306796b3466d50de08d0db2f136ad2\": rpc error: code = NotFound desc = could not find container \"8d96f572e488560e206e5e3c56c3610c26306796b3466d50de08d0db2f136ad2\": container with ID starting with 8d96f572e488560e206e5e3c56c3610c26306796b3466d50de08d0db2f136ad2 not found: ID does not exist" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.921379 4624 scope.go:117] "RemoveContainer" containerID="88974608d65121cb8aa49351add086d39b082281b03bf51e5fba5ce8f32d23d0" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.932517 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 14:42:22 crc kubenswrapper[4624]: E1008 14:42:22.935201 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88974608d65121cb8aa49351add086d39b082281b03bf51e5fba5ce8f32d23d0\": container with ID starting with 88974608d65121cb8aa49351add086d39b082281b03bf51e5fba5ce8f32d23d0 not found: ID does not exist" containerID="88974608d65121cb8aa49351add086d39b082281b03bf51e5fba5ce8f32d23d0" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.935253 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88974608d65121cb8aa49351add086d39b082281b03bf51e5fba5ce8f32d23d0"} err="failed to get container status \"88974608d65121cb8aa49351add086d39b082281b03bf51e5fba5ce8f32d23d0\": rpc error: code = NotFound desc = could not find container \"88974608d65121cb8aa49351add086d39b082281b03bf51e5fba5ce8f32d23d0\": container with ID starting with 88974608d65121cb8aa49351add086d39b082281b03bf51e5fba5ce8f32d23d0 not found: ID does not exist" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.935285 
4624 scope.go:117] "RemoveContainer" containerID="8d96f572e488560e206e5e3c56c3610c26306796b3466d50de08d0db2f136ad2" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.936605 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d96f572e488560e206e5e3c56c3610c26306796b3466d50de08d0db2f136ad2"} err="failed to get container status \"8d96f572e488560e206e5e3c56c3610c26306796b3466d50de08d0db2f136ad2\": rpc error: code = NotFound desc = could not find container \"8d96f572e488560e206e5e3c56c3610c26306796b3466d50de08d0db2f136ad2\": container with ID starting with 8d96f572e488560e206e5e3c56c3610c26306796b3466d50de08d0db2f136ad2 not found: ID does not exist" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.936651 4624 scope.go:117] "RemoveContainer" containerID="88974608d65121cb8aa49351add086d39b082281b03bf51e5fba5ce8f32d23d0" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.941815 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88974608d65121cb8aa49351add086d39b082281b03bf51e5fba5ce8f32d23d0"} err="failed to get container status \"88974608d65121cb8aa49351add086d39b082281b03bf51e5fba5ce8f32d23d0\": rpc error: code = NotFound desc = could not find container \"88974608d65121cb8aa49351add086d39b082281b03bf51e5fba5ce8f32d23d0\": container with ID starting with 88974608d65121cb8aa49351add086d39b082281b03bf51e5fba5ce8f32d23d0 not found: ID does not exist" Oct 08 14:42:22 crc kubenswrapper[4624]: I1008 14:42:22.970937 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.004721 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 08 14:42:23 crc kubenswrapper[4624]: E1008 14:42:23.005298 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a809cee7-9bc9-4fd6-a20d-3e1561a1df5d" containerName="cinder-api" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.005374 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a809cee7-9bc9-4fd6-a20d-3e1561a1df5d" containerName="cinder-api" Oct 08 14:42:23 crc kubenswrapper[4624]: E1008 14:42:23.005456 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c18ac300-a93d-48e2-84fa-9cc77d32e296" containerName="glance-log" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.005517 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c18ac300-a93d-48e2-84fa-9cc77d32e296" containerName="glance-log" Oct 08 14:42:23 crc kubenswrapper[4624]: E1008 14:42:23.005610 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c18ac300-a93d-48e2-84fa-9cc77d32e296" containerName="glance-httpd" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.005709 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c18ac300-a93d-48e2-84fa-9cc77d32e296" containerName="glance-httpd" Oct 08 14:42:23 crc kubenswrapper[4624]: E1008 14:42:23.005783 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a809cee7-9bc9-4fd6-a20d-3e1561a1df5d" containerName="cinder-api-log" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.005834 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a809cee7-9bc9-4fd6-a20d-3e1561a1df5d" containerName="cinder-api-log" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.006092 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="a809cee7-9bc9-4fd6-a20d-3e1561a1df5d" containerName="cinder-api-log" 
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.006180 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="c18ac300-a93d-48e2-84fa-9cc77d32e296" containerName="glance-httpd" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.006279 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="a809cee7-9bc9-4fd6-a20d-3e1561a1df5d" containerName="cinder-api" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.006382 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="c18ac300-a93d-48e2-84fa-9cc77d32e296" containerName="glance-log" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.020670 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.029221 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.029560 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.030017 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.055511 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.057306 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.062114 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.062296 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.084385 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.106831 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 14:42:23 crc kubenswrapper[4624]: E1008 14:42:23.143687 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc18ac300_a93d_48e2_84fa_9cc77d32e296.slice/crio-9ed91ed067d36eed67af0786025319e47a86728ff0b3f1545d1a9dcf4d1f568c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda809cee7_9bc9_4fd6_a20d_3e1561a1df5d.slice/crio-c1745799f332517908777217327053eb50e3ef734c6254f3fcff707cd709da47\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda809cee7_9bc9_4fd6_a20d_3e1561a1df5d.slice\": RecentStats: unable to find data in memory cache]" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.160441 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.160496 4624 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.160537 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d6f1635-4c52-4761-a2c7-38951659c26e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.160574 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmzdn\" (UniqueName: \"kubernetes.io/projected/5d6f1635-4c52-4761-a2c7-38951659c26e-kube-api-access-qmzdn\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.160678 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d6f1635-4c52-4761-a2c7-38951659c26e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.160711 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d6f1635-4c52-4761-a2c7-38951659c26e-config-data\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.160731 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.160760 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d6f1635-4c52-4761-a2c7-38951659c26e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.160800 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhfvf\" (UniqueName: \"kubernetes.io/projected/92d52b0f-221f-4cc7-9157-3fef68ed6db6-kube-api-access-xhfvf\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.160829 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/92d52b0f-221f-4cc7-9157-3fef68ed6db6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.160854 4624 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.160875 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d6f1635-4c52-4761-a2c7-38951659c26e-config-data-custom\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.160908 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d6f1635-4c52-4761-a2c7-38951659c26e-scripts\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.160941 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.160967 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d6f1635-4c52-4761-a2c7-38951659c26e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.160997 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d6f1635-4c52-4761-a2c7-38951659c26e-logs\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.161035 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d52b0f-221f-4cc7-9157-3fef68ed6db6-logs\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.263282 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d6f1635-4c52-4761-a2c7-38951659c26e-logs\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.263698 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d52b0f-221f-4cc7-9157-3fef68ed6db6-logs\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.263735 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.263759 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.263824 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d6f1635-4c52-4761-a2c7-38951659c26e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.263863 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmzdn\" (UniqueName: \"kubernetes.io/projected/5d6f1635-4c52-4761-a2c7-38951659c26e-kube-api-access-qmzdn\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.263961 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d6f1635-4c52-4761-a2c7-38951659c26e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.263995 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d6f1635-4c52-4761-a2c7-38951659c26e-config-data\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.264018 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.264051 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d6f1635-4c52-4761-a2c7-38951659c26e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.264085 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhfvf\" (UniqueName: \"kubernetes.io/projected/92d52b0f-221f-4cc7-9157-3fef68ed6db6-kube-api-access-xhfvf\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.264121 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/92d52b0f-221f-4cc7-9157-3fef68ed6db6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.264148 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.264170 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d6f1635-4c52-4761-a2c7-38951659c26e-config-data-custom\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.264204 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d6f1635-4c52-4761-a2c7-38951659c26e-scripts\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.264241 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.264270 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d6f1635-4c52-4761-a2c7-38951659c26e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.266961 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/92d52b0f-221f-4cc7-9157-3fef68ed6db6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.286872 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d6f1635-4c52-4761-a2c7-38951659c26e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.287201 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d52b0f-221f-4cc7-9157-3fef68ed6db6-logs\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.287387 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.287436 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d6f1635-4c52-4761-a2c7-38951659c26e-logs\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.295921 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.297733 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d6f1635-4c52-4761-a2c7-38951659c26e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.303483 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.304107 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d6f1635-4c52-4761-a2c7-38951659c26e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.304398 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d6f1635-4c52-4761-a2c7-38951659c26e-config-data-custom\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.304789 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d6f1635-4c52-4761-a2c7-38951659c26e-config-data\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.316278 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d6f1635-4c52-4761-a2c7-38951659c26e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.316500 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d6f1635-4c52-4761-a2c7-38951659c26e-scripts\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.317176 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.317832 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmzdn\" (UniqueName: \"kubernetes.io/projected/5d6f1635-4c52-4761-a2c7-38951659c26e-kube-api-access-qmzdn\") pod \"cinder-api-0\" (UID: \"5d6f1635-4c52-4761-a2c7-38951659c26e\") " pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.318348 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.359536 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhfvf\" (UniqueName: \"kubernetes.io/projected/92d52b0f-221f-4cc7-9157-3fef68ed6db6-kube-api-access-xhfvf\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.405984 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.455897 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.533834 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a809cee7-9bc9-4fd6-a20d-3e1561a1df5d" path="/var/lib/kubelet/pods/a809cee7-9bc9-4fd6-a20d-3e1561a1df5d/volumes"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.538070 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c18ac300-a93d-48e2-84fa-9cc77d32e296" path="/var/lib/kubelet/pods/c18ac300-a93d-48e2-84fa-9cc77d32e296/volumes"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.623492 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"551042f0-9ae9-49b7-ab92-e1b775c7e742","Type":"ContainerStarted","Data":"9112550b393920691cb1de6b367b640cc325b5905f197e819aa4d244c7d15d3f"}
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.623855 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.645230 4624 generic.go:334] "Generic (PLEG): container finished" podID="a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76" containerID="53706d2bf9975b256367a481ba8656ad7a17b93df91d9501be276ee69712fb95" exitCode=0
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.645328 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86d56d5f7b-2jwqz" event={"ID":"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76","Type":"ContainerDied","Data":"53706d2bf9975b256367a481ba8656ad7a17b93df91d9501be276ee69712fb95"}
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.647970 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5a01279-9918-4adc-a38f-f102fd394ca0","Type":"ContainerStarted","Data":"053ad65b4c100768721f861f1f190ecf6fd2234ab5c4cde6e3c43cf11f4ba1f2"}
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.663565 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.957209721 podStartE2EDuration="8.66354772s" podCreationTimestamp="2025-10-08 14:42:15 +0000 UTC" firstStartedPulling="2025-10-08 14:42:17.071307678 +0000 UTC m=+1162.222242755" lastFinishedPulling="2025-10-08 14:42:22.777645677 +0000 UTC m=+1167.928580754" observedRunningTime="2025-10-08 14:42:23.660456992 +0000 UTC m=+1168.811392069" watchObservedRunningTime="2025-10-08 14:42:23.66354772 +0000 UTC m=+1168.814482797"
Oct 08 14:42:23 crc kubenswrapper[4624]: I1008 14:42:23.731662 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.272737 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.512429 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86d56d5f7b-2jwqz"
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.640656 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-httpd-config\") pod \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\" (UID: \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\") "
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.640994 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-config\") pod \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\" (UID: \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\") "
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.641080 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6275g\" (UniqueName: \"kubernetes.io/projected/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-kube-api-access-6275g\") pod \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\" (UID: \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\") "
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.641106 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-ovndb-tls-certs\") pod \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\" (UID: \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\") "
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.641131 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-combined-ca-bundle\") pod \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\" (UID: \"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76\") "
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.663710 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-kube-api-access-6275g" (OuterVolumeSpecName: "kube-api-access-6275g") pod "a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76" (UID: "a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76"). InnerVolumeSpecName "kube-api-access-6275g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.681004 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76" (UID: "a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.716045 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86d56d5f7b-2jwqz" event={"ID":"a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76","Type":"ContainerDied","Data":"c10b50a50f4c642604eedcae7cebe23d333f20ab10d2d6c5c5bdcd9ffb9fc3a4"}
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.716109 4624 scope.go:117] "RemoveContainer" containerID="7c06635aefdafdc9274928ca3a51b5bc87729e28b5f2493d0376603b999ca4fb"
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.716289 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86d56d5f7b-2jwqz"
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.725361 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5a01279-9918-4adc-a38f-f102fd394ca0","Type":"ContainerStarted","Data":"6f899ddbe71b42769f9453dc6bc746f5c6136d04980c21964efbbd19e42ad081"}
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.733882 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5d6f1635-4c52-4761-a2c7-38951659c26e","Type":"ContainerStarted","Data":"ad1ed466c009771054bf57eba5dfb5dbf4eb8b912a23547dd5b2956feffb0e11"}
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.736117 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76" (UID: "a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.746167 4624 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-httpd-config\") on node \"crc\" DevicePath \"\""
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.746202 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6275g\" (UniqueName: \"kubernetes.io/projected/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-kube-api-access-6275g\") on node \"crc\" DevicePath \"\""
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.746218 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.763037 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.787439 4624 scope.go:117] "RemoveContainer" containerID="53706d2bf9975b256367a481ba8656ad7a17b93df91d9501be276ee69712fb95"
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.837858 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-config" (OuterVolumeSpecName: "config") pod "a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76" (UID: "a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.849178 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-config\") on node \"crc\" DevicePath \"\""
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.915787 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76" (UID: "a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:42:24 crc kubenswrapper[4624]: I1008 14:42:24.951190 4624 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 08 14:42:25 crc kubenswrapper[4624]: I1008 14:42:25.072203 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.072181194 podStartE2EDuration="5.072181194s" podCreationTimestamp="2025-10-08 14:42:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:42:24.769178771 +0000 UTC m=+1169.920113848" watchObservedRunningTime="2025-10-08 14:42:25.072181194 +0000 UTC m=+1170.223116271"
Oct 08 14:42:25 crc kubenswrapper[4624]: I1008 14:42:25.074998 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-86d56d5f7b-2jwqz"]
Oct 08 14:42:25 crc kubenswrapper[4624]: I1008 14:42:25.084953 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-86d56d5f7b-2jwqz"]
Oct 08 14:42:25 crc kubenswrapper[4624]: I1008 14:42:25.506796 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76" path="/var/lib/kubelet/pods/a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76/volumes"
Oct 08 14:42:25 crc kubenswrapper[4624]: I1008 14:42:25.749996 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"92d52b0f-221f-4cc7-9157-3fef68ed6db6","Type":"ContainerStarted","Data":"6c3bcdbc830096e2a99c2e4c904e7ccf0ba820515aba0ad9665cda0dbf58ecc3"}
Oct 08 14:42:26 crc kubenswrapper[4624]: I1008 14:42:26.777306 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"92d52b0f-221f-4cc7-9157-3fef68ed6db6","Type":"ContainerStarted","Data":"0370beffcd9029ca9aa02c016176968c9aa0a70ea4ddea1040bd12e0e46896fc"}
Oct 08 14:42:26 crc kubenswrapper[4624]: I1008 14:42:26.779586 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5d6f1635-4c52-4761-a2c7-38951659c26e","Type":"ContainerStarted","Data":"97f6e27f2025f265be2d27f95a4f06110e646d6696054cb92bffc61f466127f4"}
Oct 08 14:42:27 crc kubenswrapper[4624]: I1008 14:42:27.485802 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="c09025d6-b523-4828-ba54-217da2a5bfb3" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 08 14:42:27 crc kubenswrapper[4624]: I1008 14:42:27.723548 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9"
Oct 08 14:42:27 crc kubenswrapper[4624]: I1008 14:42:27.806514 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"92d52b0f-221f-4cc7-9157-3fef68ed6db6","Type":"ContainerStarted","Data":"f627a6d3e3509d80894084fb697794ea1607f15a77e3a1e1d0be006c52a9a14c"}
Oct 08 14:42:27 crc kubenswrapper[4624]: I1008 14:42:27.820549 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5d6f1635-4c52-4761-a2c7-38951659c26e","Type":"ContainerStarted","Data":"b130ae86c3d271cbd32f5c37c415ca89af49fc22733565a087612ae98cb9ce2c"}
Oct 08 14:42:27 crc kubenswrapper[4624]: I1008 14:42:27.821501 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Oct 08 14:42:27 crc kubenswrapper[4624]: I1008 14:42:27.847717 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-856dcb88f7-rh7ns"]
Oct 08 14:42:27 crc kubenswrapper[4624]: I1008 14:42:27.848092 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" podUID="9d94066f-cfff-4398-8d02-b47b7ed819ac" containerName="dnsmasq-dns" containerID="cri-o://0c08e038f41782aa832f47830a6b0fe71b5e22e9408df381413d4ee178390906" gracePeriod=10
Oct 08 14:42:27 crc kubenswrapper[4624]: I1008 14:42:27.867549 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.867531048 podStartE2EDuration="5.867531048s" podCreationTimestamp="2025-10-08 14:42:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:42:27.840042424 +0000 UTC m=+1172.990977501" watchObservedRunningTime="2025-10-08 14:42:27.867531048 +0000 UTC m=+1173.018466125"
Oct 08 14:42:27 crc kubenswrapper[4624]: I1008 14:42:27.883710 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.883685926 podStartE2EDuration="5.883685926s" podCreationTimestamp="2025-10-08 14:42:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:42:27.86721547 +0000 UTC m=+1173.018150547" watchObservedRunningTime="2025-10-08 14:42:27.883685926 +0000 UTC m=+1173.034621023"
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.167167 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-65f89d8d74-ng4cv"
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.475481 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns"
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.555806 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-ovsdbserver-sb\") pod \"9d94066f-cfff-4398-8d02-b47b7ed819ac\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") "
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.556172 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-ovsdbserver-nb\") pod \"9d94066f-cfff-4398-8d02-b47b7ed819ac\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") "
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.556317 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-config\") pod \"9d94066f-cfff-4398-8d02-b47b7ed819ac\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") "
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.557120 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhxk6\" (UniqueName: \"kubernetes.io/projected/9d94066f-cfff-4398-8d02-b47b7ed819ac-kube-api-access-dhxk6\") pod \"9d94066f-cfff-4398-8d02-b47b7ed819ac\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") "
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.557467 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-dns-swift-storage-0\") pod \"9d94066f-cfff-4398-8d02-b47b7ed819ac\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") "
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.557972 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-dns-svc\") pod \"9d94066f-cfff-4398-8d02-b47b7ed819ac\" (UID: \"9d94066f-cfff-4398-8d02-b47b7ed819ac\") "
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.562444 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d94066f-cfff-4398-8d02-b47b7ed819ac-kube-api-access-dhxk6" (OuterVolumeSpecName: "kube-api-access-dhxk6") pod "9d94066f-cfff-4398-8d02-b47b7ed819ac" (UID: "9d94066f-cfff-4398-8d02-b47b7ed819ac"). InnerVolumeSpecName "kube-api-access-dhxk6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.632086 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-config" (OuterVolumeSpecName: "config") pod "9d94066f-cfff-4398-8d02-b47b7ed819ac" (UID: "9d94066f-cfff-4398-8d02-b47b7ed819ac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.639452 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d94066f-cfff-4398-8d02-b47b7ed819ac" (UID: "9d94066f-cfff-4398-8d02-b47b7ed819ac"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.653107 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9d94066f-cfff-4398-8d02-b47b7ed819ac" (UID: "9d94066f-cfff-4398-8d02-b47b7ed819ac"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.661286 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9d94066f-cfff-4398-8d02-b47b7ed819ac" (UID: "9d94066f-cfff-4398-8d02-b47b7ed819ac"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.661457 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.661492 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.661510 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-config\") on node \"crc\" DevicePath \"\""
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.661520 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.661534 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhxk6\" (UniqueName: \"kubernetes.io/projected/9d94066f-cfff-4398-8d02-b47b7ed819ac-kube-api-access-dhxk6\") on node \"crc\" DevicePath \"\""
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.683774 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9d94066f-cfff-4398-8d02-b47b7ed819ac" (UID: "9d94066f-cfff-4398-8d02-b47b7ed819ac"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.763572 4624 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d94066f-cfff-4398-8d02-b47b7ed819ac-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.832585 4624 generic.go:334] "Generic (PLEG): container finished" podID="9d94066f-cfff-4398-8d02-b47b7ed819ac" containerID="0c08e038f41782aa832f47830a6b0fe71b5e22e9408df381413d4ee178390906" exitCode=0
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.832652 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" event={"ID":"9d94066f-cfff-4398-8d02-b47b7ed819ac","Type":"ContainerDied","Data":"0c08e038f41782aa832f47830a6b0fe71b5e22e9408df381413d4ee178390906"}
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.833035 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns" event={"ID":"9d94066f-cfff-4398-8d02-b47b7ed819ac","Type":"ContainerDied","Data":"9d70c422ea87ba6319d814b9ae489fa69611e57aa36710d3811114e2b2806fbe"}
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.832682 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-856dcb88f7-rh7ns"
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.833067 4624 scope.go:117] "RemoveContainer" containerID="0c08e038f41782aa832f47830a6b0fe71b5e22e9408df381413d4ee178390906"
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.862456 4624 scope.go:117] "RemoveContainer" containerID="71833c0660401888dfb263ff736e76c19ab57f55fcedf4a8b775d102ca10df75"
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.875991 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-856dcb88f7-rh7ns"]
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.885765 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-856dcb88f7-rh7ns"]
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.893098 4624 scope.go:117] "RemoveContainer" containerID="0c08e038f41782aa832f47830a6b0fe71b5e22e9408df381413d4ee178390906"
Oct 08 14:42:28 crc kubenswrapper[4624]: E1008 14:42:28.893608 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c08e038f41782aa832f47830a6b0fe71b5e22e9408df381413d4ee178390906\": container with ID starting with 0c08e038f41782aa832f47830a6b0fe71b5e22e9408df381413d4ee178390906 not found: ID does not exist" containerID="0c08e038f41782aa832f47830a6b0fe71b5e22e9408df381413d4ee178390906"
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.893677 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c08e038f41782aa832f47830a6b0fe71b5e22e9408df381413d4ee178390906"} err="failed to get container status \"0c08e038f41782aa832f47830a6b0fe71b5e22e9408df381413d4ee178390906\": rpc error: code = NotFound desc = could not find container \"0c08e038f41782aa832f47830a6b0fe71b5e22e9408df381413d4ee178390906\": container with ID starting with 0c08e038f41782aa832f47830a6b0fe71b5e22e9408df381413d4ee178390906 not found: ID does not exist"
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.893711 4624 scope.go:117] "RemoveContainer" containerID="71833c0660401888dfb263ff736e76c19ab57f55fcedf4a8b775d102ca10df75"
Oct 08 14:42:28 crc kubenswrapper[4624]: E1008 14:42:28.894354 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71833c0660401888dfb263ff736e76c19ab57f55fcedf4a8b775d102ca10df75\": container with ID starting with 71833c0660401888dfb263ff736e76c19ab57f55fcedf4a8b775d102ca10df75 not found: ID does not exist" containerID="71833c0660401888dfb263ff736e76c19ab57f55fcedf4a8b775d102ca10df75"
Oct 08 14:42:28 crc kubenswrapper[4624]: I1008 14:42:28.894385 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71833c0660401888dfb263ff736e76c19ab57f55fcedf4a8b775d102ca10df75"} err="failed to get container status \"71833c0660401888dfb263ff736e76c19ab57f55fcedf4a8b775d102ca10df75\": rpc error: code = NotFound desc = could not find container \"71833c0660401888dfb263ff736e76c19ab57f55fcedf4a8b775d102ca10df75\": container with ID starting with 71833c0660401888dfb263ff736e76c19ab57f55fcedf4a8b775d102ca10df75 not found: ID does not exist"
Oct 08 14:42:29 crc kubenswrapper[4624]: I1008 14:42:29.478604 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d94066f-cfff-4398-8d02-b47b7ed819ac" path="/var/lib/kubelet/pods/9d94066f-cfff-4398-8d02-b47b7ed819ac/volumes"
Oct 08 14:42:30 crc kubenswrapper[4624]: I1008 14:42:30.075997 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 14:42:30 crc kubenswrapper[4624]: I1008 14:42:30.076072 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 14:42:30 crc kubenswrapper[4624]: I1008 14:42:30.686033 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Oct 08 14:42:30 crc kubenswrapper[4624]: I1008 14:42:30.686362 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Oct 08 14:42:30 crc kubenswrapper[4624]: I1008 14:42:30.792811 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Oct 08 14:42:30 crc kubenswrapper[4624]: I1008 14:42:30.792874 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Oct 08 14:42:30 crc kubenswrapper[4624]: I1008 14:42:30.851652 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Oct 08 14:42:30 crc kubenswrapper[4624]: I1008 14:42:30.851703 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.170348 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Oct 08 14:42:31 crc kubenswrapper[4624]: E1008 14:42:31.170817 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d94066f-cfff-4398-8d02-b47b7ed819ac" containerName="init"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.170833 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d94066f-cfff-4398-8d02-b47b7ed819ac" containerName="init"
Oct 08 14:42:31 crc kubenswrapper[4624]: E1008 14:42:31.170854 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76" containerName="neutron-httpd"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.170864 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76" containerName="neutron-httpd"
Oct 08 14:42:31 crc kubenswrapper[4624]: E1008 14:42:31.170893 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76" containerName="neutron-api"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.170901 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76" containerName="neutron-api"
Oct 08 14:42:31 crc kubenswrapper[4624]: E1008 14:42:31.170917 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d94066f-cfff-4398-8d02-b47b7ed819ac" containerName="dnsmasq-dns"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.170924 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d94066f-cfff-4398-8d02-b47b7ed819ac" containerName="dnsmasq-dns"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.171118 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76" containerName="neutron-httpd"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.171141 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d94066f-cfff-4398-8d02-b47b7ed819ac" containerName="dnsmasq-dns"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.171150 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4fb2ba1-4bf0-414a-a33d-b0db4f0e0a76" containerName="neutron-api"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.171751 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.178572 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-7mcpp"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.178804 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.178933 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.194050 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.214051 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf9t4\" (UniqueName: \"kubernetes.io/projected/c515985c-9b57-4136-bf01-b872e9caaec9-kube-api-access-xf9t4\") pod \"openstackclient\" (UID: \"c515985c-9b57-4136-bf01-b872e9caaec9\") " pod="openstack/openstackclient"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.214122 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c515985c-9b57-4136-bf01-b872e9caaec9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c515985c-9b57-4136-bf01-b872e9caaec9\") " pod="openstack/openstackclient"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.214180 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c515985c-9b57-4136-bf01-b872e9caaec9-openstack-config-secret\") pod \"openstackclient\" (UID: \"c515985c-9b57-4136-bf01-b872e9caaec9\") " pod="openstack/openstackclient"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.214242 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c515985c-9b57-4136-bf01-b872e9caaec9-openstack-config\") pod \"openstackclient\" (UID: \"c515985c-9b57-4136-bf01-b872e9caaec9\") " pod="openstack/openstackclient"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.315726 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c515985c-9b57-4136-bf01-b872e9caaec9-openstack-config-secret\") pod \"openstackclient\" (UID: \"c515985c-9b57-4136-bf01-b872e9caaec9\") " pod="openstack/openstackclient"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.315811 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c515985c-9b57-4136-bf01-b872e9caaec9-openstack-config\") pod \"openstackclient\" (UID: \"c515985c-9b57-4136-bf01-b872e9caaec9\") " pod="openstack/openstackclient"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.315914 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf9t4\" (UniqueName: \"kubernetes.io/projected/c515985c-9b57-4136-bf01-b872e9caaec9-kube-api-access-xf9t4\") pod \"openstackclient\" (UID: \"c515985c-9b57-4136-bf01-b872e9caaec9\") " pod="openstack/openstackclient"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.315971 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c515985c-9b57-4136-bf01-b872e9caaec9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c515985c-9b57-4136-bf01-b872e9caaec9\") " pod="openstack/openstackclient"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.316960 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c515985c-9b57-4136-bf01-b872e9caaec9-openstack-config\") pod \"openstackclient\" (UID: \"c515985c-9b57-4136-bf01-b872e9caaec9\") " pod="openstack/openstackclient"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.324392 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c515985c-9b57-4136-bf01-b872e9caaec9-openstack-config-secret\") pod \"openstackclient\" (UID: \"c515985c-9b57-4136-bf01-b872e9caaec9\") " pod="openstack/openstackclient"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.324560 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c515985c-9b57-4136-bf01-b872e9caaec9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c515985c-9b57-4136-bf01-b872e9caaec9\") " pod="openstack/openstackclient"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.345925 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf9t4\" (UniqueName: \"kubernetes.io/projected/c515985c-9b57-4136-bf01-b872e9caaec9-kube-api-access-xf9t4\") pod \"openstackclient\" (UID: \"c515985c-9b57-4136-bf01-b872e9caaec9\") " pod="openstack/openstackclient"
Oct 08 14:42:31 crc kubenswrapper[4624]: I1008 14:42:31.516119 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Oct 08 14:42:32 crc kubenswrapper[4624]: I1008 14:42:32.045883 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Oct 08 14:42:32 crc kubenswrapper[4624]: I1008 14:42:32.088171 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Oct 08 14:42:32 crc kubenswrapper[4624]: W1008 14:42:32.097334 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc515985c_9b57_4136_bf01_b872e9caaec9.slice/crio-42ccd3f00b99b27c65a5998fa0d5a791204d633f10246bd0dea74cdcf3e58099 WatchSource:0}: Error finding container 42ccd3f00b99b27c65a5998fa0d5a791204d633f10246bd0dea74cdcf3e58099: Status 404 returned error can't find the container with id 42ccd3f00b99b27c65a5998fa0d5a791204d633f10246bd0dea74cdcf3e58099
Oct 08 14:42:32 crc kubenswrapper[4624]: I1008 14:42:32.123806 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 08 14:42:32 crc kubenswrapper[4624]: I1008 14:42:32.871649 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c515985c-9b57-4136-bf01-b872e9caaec9","Type":"ContainerStarted","Data":"42ccd3f00b99b27c65a5998fa0d5a791204d633f10246bd0dea74cdcf3e58099"}
Oct 08 14:42:32 crc kubenswrapper[4624]: I1008 14:42:32.871768 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c09025d6-b523-4828-ba54-217da2a5bfb3" containerName="cinder-scheduler" containerID="cri-o://57788224cfc376e852cc6304d54fbbb2e770efd9c1e7f9ffeabf9113a74fa988" gracePeriod=30
Oct 08 14:42:32 crc kubenswrapper[4624]: I1008 14:42:32.871826 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c09025d6-b523-4828-ba54-217da2a5bfb3" containerName="probe" containerID="cri-o://7ece144ab0d4935be29267e22c484c9b28ee1f11f24c3af44bf004228fea9832" gracePeriod=30
Oct 08 14:42:33 crc kubenswrapper[4624]: I1008 14:42:33.733000 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:33 crc kubenswrapper[4624]: I1008 14:42:33.733287 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:33 crc kubenswrapper[4624]: I1008 14:42:33.770885 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:33 crc kubenswrapper[4624]: I1008 14:42:33.796523 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:33 crc kubenswrapper[4624]: I1008 14:42:33.904811 4624 generic.go:334] "Generic (PLEG): container finished" podID="c09025d6-b523-4828-ba54-217da2a5bfb3" containerID="7ece144ab0d4935be29267e22c484c9b28ee1f11f24c3af44bf004228fea9832" exitCode=0
Oct 08 14:42:33 crc kubenswrapper[4624]: I1008 14:42:33.905705 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c09025d6-b523-4828-ba54-217da2a5bfb3","Type":"ContainerDied","Data":"7ece144ab0d4935be29267e22c484c9b28ee1f11f24c3af44bf004228fea9832"}
Oct 08 14:42:33 crc kubenswrapper[4624]: I1008 14:42:33.905788 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:33 crc kubenswrapper[4624]: I1008 14:42:33.906066 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:35 crc kubenswrapper[4624]: I1008 14:42:35.925709 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 08 14:42:35 crc kubenswrapper[4624]: I1008 14:42:35.926049 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 08 14:42:37 crc kubenswrapper[4624]: I1008 14:42:37.425134 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="5d6f1635-4c52-4761-a2c7-38951659c26e" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.170:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 08 14:42:37 crc kubenswrapper[4624]: I1008 14:42:37.757685 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Oct 08 14:42:37 crc kubenswrapper[4624]: I1008 14:42:37.758483 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 08 14:42:37 crc kubenswrapper[4624]: I1008 14:42:37.758925 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 08 14:42:37 crc kubenswrapper[4624]: I1008 14:42:37.759836 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Oct 08 14:42:37 crc kubenswrapper[4624]: I1008 14:42:37.849426 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfhsc\" (UniqueName: \"kubernetes.io/projected/c09025d6-b523-4828-ba54-217da2a5bfb3-kube-api-access-cfhsc\") pod \"c09025d6-b523-4828-ba54-217da2a5bfb3\" (UID: \"c09025d6-b523-4828-ba54-217da2a5bfb3\") "
Oct 08 14:42:37 crc kubenswrapper[4624]: I1008 14:42:37.849481 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-config-data-custom\") pod \"c09025d6-b523-4828-ba54-217da2a5bfb3\" (UID: \"c09025d6-b523-4828-ba54-217da2a5bfb3\") "
Oct 08 14:42:37 crc kubenswrapper[4624]: I1008 14:42:37.849502 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c09025d6-b523-4828-ba54-217da2a5bfb3-etc-machine-id\") pod \"c09025d6-b523-4828-ba54-217da2a5bfb3\" (UID: \"c09025d6-b523-4828-ba54-217da2a5bfb3\") "
Oct 08 14:42:37 crc kubenswrapper[4624]: I1008 14:42:37.849544 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-combined-ca-bundle\") pod \"c09025d6-b523-4828-ba54-217da2a5bfb3\" (UID: \"c09025d6-b523-4828-ba54-217da2a5bfb3\") "
Oct 08 14:42:37 crc kubenswrapper[4624]: I1008 14:42:37.849625 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-config-data\") pod \"c09025d6-b523-4828-ba54-217da2a5bfb3\" (UID: \"c09025d6-b523-4828-ba54-217da2a5bfb3\") "
Oct 08 14:42:37 crc kubenswrapper[4624]: I1008 14:42:37.849699 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-scripts\") pod \"c09025d6-b523-4828-ba54-217da2a5bfb3\" (UID: \"c09025d6-b523-4828-ba54-217da2a5bfb3\") "
Oct 08 14:42:37 crc kubenswrapper[4624]: I1008 14:42:37.853750 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c09025d6-b523-4828-ba54-217da2a5bfb3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c09025d6-b523-4828-ba54-217da2a5bfb3" (UID: "c09025d6-b523-4828-ba54-217da2a5bfb3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 08 14:42:37 crc kubenswrapper[4624]: I1008 14:42:37.879249 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-scripts" (OuterVolumeSpecName: "scripts") pod "c09025d6-b523-4828-ba54-217da2a5bfb3" (UID: "c09025d6-b523-4828-ba54-217da2a5bfb3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:42:37 crc kubenswrapper[4624]: I1008 14:42:37.891334 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c09025d6-b523-4828-ba54-217da2a5bfb3-kube-api-access-cfhsc" (OuterVolumeSpecName: "kube-api-access-cfhsc") pod "c09025d6-b523-4828-ba54-217da2a5bfb3" (UID: "c09025d6-b523-4828-ba54-217da2a5bfb3"). InnerVolumeSpecName "kube-api-access-cfhsc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:42:37 crc kubenswrapper[4624]: I1008 14:42:37.901481 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c09025d6-b523-4828-ba54-217da2a5bfb3" (UID: "c09025d6-b523-4828-ba54-217da2a5bfb3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:42:37 crc kubenswrapper[4624]: I1008 14:42:37.958795 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-scripts\") on node \"crc\" DevicePath \"\""
Oct 08 14:42:37 crc kubenswrapper[4624]: I1008 14:42:37.958821 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfhsc\" (UniqueName: \"kubernetes.io/projected/c09025d6-b523-4828-ba54-217da2a5bfb3-kube-api-access-cfhsc\") on node \"crc\" DevicePath \"\""
Oct 08 14:42:37 crc kubenswrapper[4624]: I1008 14:42:37.958834 4624 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 08 14:42:37 crc kubenswrapper[4624]: I1008 14:42:37.958843 4624 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c09025d6-b523-4828-ba54-217da2a5bfb3-etc-machine-id\") on node \"crc\" DevicePath \"\""
Oct 08 14:42:37 crc kubenswrapper[4624]: I1008 14:42:37.968118 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c09025d6-b523-4828-ba54-217da2a5bfb3" (UID: "c09025d6-b523-4828-ba54-217da2a5bfb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:37.999840 4624 generic.go:334] "Generic (PLEG): container finished" podID="c09025d6-b523-4828-ba54-217da2a5bfb3" containerID="57788224cfc376e852cc6304d54fbbb2e770efd9c1e7f9ffeabf9113a74fa988" exitCode=0
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.000932 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.001456 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c09025d6-b523-4828-ba54-217da2a5bfb3","Type":"ContainerDied","Data":"57788224cfc376e852cc6304d54fbbb2e770efd9c1e7f9ffeabf9113a74fa988"}
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.001487 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c09025d6-b523-4828-ba54-217da2a5bfb3","Type":"ContainerDied","Data":"269fc5fb413022bf3a19c98794af1d5a5bf36d6aec361f4403ac08104bba85c5"}
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.001513 4624 scope.go:117] "RemoveContainer" containerID="7ece144ab0d4935be29267e22c484c9b28ee1f11f24c3af44bf004228fea9832"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.045450 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.045549 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.054926 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-config-data" (OuterVolumeSpecName: "config-data") pod "c09025d6-b523-4828-ba54-217da2a5bfb3" (UID: "c09025d6-b523-4828-ba54-217da2a5bfb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.068984 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.069241 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c09025d6-b523-4828-ba54-217da2a5bfb3-config-data\") on node \"crc\" DevicePath \"\""
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.135749 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.363677 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.395295 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.418987 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 08 14:42:38 crc kubenswrapper[4624]: E1008 14:42:38.419436 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09025d6-b523-4828-ba54-217da2a5bfb3" containerName="probe"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.419454 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09025d6-b523-4828-ba54-217da2a5bfb3" containerName="probe"
Oct 08 14:42:38 crc kubenswrapper[4624]: E1008 14:42:38.419478 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09025d6-b523-4828-ba54-217da2a5bfb3" containerName="cinder-scheduler"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.419487 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09025d6-b523-4828-ba54-217da2a5bfb3" containerName="cinder-scheduler"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.419704 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09025d6-b523-4828-ba54-217da2a5bfb3" containerName="probe"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.419732 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09025d6-b523-4828-ba54-217da2a5bfb3" containerName="cinder-scheduler"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.422540 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.432314 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.440976 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.484002 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a500ae8-578e-4045-8bfb-0a658340dc09-config-data\") pod \"cinder-scheduler-0\" (UID: \"5a500ae8-578e-4045-8bfb-0a658340dc09\") " pod="openstack/cinder-scheduler-0"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.484383 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a500ae8-578e-4045-8bfb-0a658340dc09-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5a500ae8-578e-4045-8bfb-0a658340dc09\") " pod="openstack/cinder-scheduler-0"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.484657 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdbh8\" (UniqueName: \"kubernetes.io/projected/5a500ae8-578e-4045-8bfb-0a658340dc09-kube-api-access-gdbh8\") pod \"cinder-scheduler-0\" (UID: \"5a500ae8-578e-4045-8bfb-0a658340dc09\") " pod="openstack/cinder-scheduler-0"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.484706 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a500ae8-578e-4045-8bfb-0a658340dc09-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5a500ae8-578e-4045-8bfb-0a658340dc09\") " pod="openstack/cinder-scheduler-0"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.484797 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a500ae8-578e-4045-8bfb-0a658340dc09-scripts\") pod \"cinder-scheduler-0\" (UID: \"5a500ae8-578e-4045-8bfb-0a658340dc09\") " pod="openstack/cinder-scheduler-0"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.484882 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a500ae8-578e-4045-8bfb-0a658340dc09-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5a500ae8-578e-4045-8bfb-0a658340dc09\") " pod="openstack/cinder-scheduler-0"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.588727 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdbh8\" (UniqueName: \"kubernetes.io/projected/5a500ae8-578e-4045-8bfb-0a658340dc09-kube-api-access-gdbh8\") pod \"cinder-scheduler-0\" (UID: \"5a500ae8-578e-4045-8bfb-0a658340dc09\") " pod="openstack/cinder-scheduler-0"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.588795 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a500ae8-578e-4045-8bfb-0a658340dc09-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5a500ae8-578e-4045-8bfb-0a658340dc09\") " pod="openstack/cinder-scheduler-0"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.588843 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a500ae8-578e-4045-8bfb-0a658340dc09-scripts\") pod \"cinder-scheduler-0\" (UID: \"5a500ae8-578e-4045-8bfb-0a658340dc09\") " pod="openstack/cinder-scheduler-0"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.588879 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a500ae8-578e-4045-8bfb-0a658340dc09-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5a500ae8-578e-4045-8bfb-0a658340dc09\") " pod="openstack/cinder-scheduler-0"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.588941 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a500ae8-578e-4045-8bfb-0a658340dc09-config-data\") pod \"cinder-scheduler-0\" (UID: \"5a500ae8-578e-4045-8bfb-0a658340dc09\") " pod="openstack/cinder-scheduler-0"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.588957 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a500ae8-578e-4045-8bfb-0a658340dc09-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5a500ae8-578e-4045-8bfb-0a658340dc09\") " pod="openstack/cinder-scheduler-0"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.589969 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a500ae8-578e-4045-8bfb-0a658340dc09-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5a500ae8-578e-4045-8bfb-0a658340dc09\") " pod="openstack/cinder-scheduler-0"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.604591 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a500ae8-578e-4045-8bfb-0a658340dc09-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5a500ae8-578e-4045-8bfb-0a658340dc09\") " pod="openstack/cinder-scheduler-0"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.606619 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a500ae8-578e-4045-8bfb-0a658340dc09-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5a500ae8-578e-4045-8bfb-0a658340dc09\") " pod="openstack/cinder-scheduler-0"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.612800 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a500ae8-578e-4045-8bfb-0a658340dc09-scripts\") pod \"cinder-scheduler-0\" (UID: \"5a500ae8-578e-4045-8bfb-0a658340dc09\") " pod="openstack/cinder-scheduler-0"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.620843 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a500ae8-578e-4045-8bfb-0a658340dc09-config-data\") pod \"cinder-scheduler-0\" (UID: \"5a500ae8-578e-4045-8bfb-0a658340dc09\") " pod="openstack/cinder-scheduler-0"
Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.621232 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdbh8\" (UniqueName: \"kubernetes.io/projected/5a500ae8-578e-4045-8bfb-0a658340dc09-kube-api-access-gdbh8\") pod \"cinder-scheduler-0\" (UID: \"5a500ae8-578e-4045-8bfb-0a658340dc09\") " pod="openstack/cinder-scheduler-0"
Oct
08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.643235 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 08 14:42:38 crc kubenswrapper[4624]: I1008 14:42:38.819947 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 08 14:42:39 crc kubenswrapper[4624]: I1008 14:42:39.477106 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c09025d6-b523-4828-ba54-217da2a5bfb3" path="/var/lib/kubelet/pods/c09025d6-b523-4828-ba54-217da2a5bfb3/volumes" Oct 08 14:42:40 crc kubenswrapper[4624]: I1008 14:42:40.814189 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-588c6684fb-k5v22"] Oct 08 14:42:40 crc kubenswrapper[4624]: I1008 14:42:40.827234 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-588c6684fb-k5v22" Oct 08 14:42:40 crc kubenswrapper[4624]: I1008 14:42:40.833960 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-bh2tl" Oct 08 14:42:40 crc kubenswrapper[4624]: I1008 14:42:40.834049 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Oct 08 14:42:40 crc kubenswrapper[4624]: I1008 14:42:40.834264 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Oct 08 14:42:40 crc kubenswrapper[4624]: I1008 14:42:40.866792 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-588c6684fb-k5v22"] Oct 08 14:42:40 crc kubenswrapper[4624]: I1008 14:42:40.949849 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b419234-067e-41ef-8382-55b89cafa33a-config-data-custom\") pod \"heat-engine-588c6684fb-k5v22\" (UID: \"9b419234-067e-41ef-8382-55b89cafa33a\") " pod="openstack/heat-engine-588c6684fb-k5v22" Oct 08 14:42:40 crc kubenswrapper[4624]: I1008 14:42:40.949921 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht7g6\" (UniqueName: \"kubernetes.io/projected/9b419234-067e-41ef-8382-55b89cafa33a-kube-api-access-ht7g6\") pod \"heat-engine-588c6684fb-k5v22\" (UID: \"9b419234-067e-41ef-8382-55b89cafa33a\") " pod="openstack/heat-engine-588c6684fb-k5v22" Oct 08 14:42:40 crc kubenswrapper[4624]: I1008 14:42:40.949949 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b419234-067e-41ef-8382-55b89cafa33a-config-data\") pod \"heat-engine-588c6684fb-k5v22\" (UID: \"9b419234-067e-41ef-8382-55b89cafa33a\") " pod="openstack/heat-engine-588c6684fb-k5v22" Oct 08 14:42:40 crc kubenswrapper[4624]: I1008 14:42:40.949975 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b419234-067e-41ef-8382-55b89cafa33a-combined-ca-bundle\") pod \"heat-engine-588c6684fb-k5v22\" (UID: \"9b419234-067e-41ef-8382-55b89cafa33a\") " pod="openstack/heat-engine-588c6684fb-k5v22" Oct 08 14:42:40 crc kubenswrapper[4624]: I1008 14:42:40.989704 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-545dc78c-24gzk"] Oct 08 14:42:40 crc kubenswrapper[4624]: I1008 14:42:40.991339 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.012781 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-545dc78c-24gzk"] Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.026870 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7ccc99c6fd-fx7s5"] Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.036503 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7ccc99c6fd-fx7s5" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.055334 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.059597 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-ovsdbserver-nb\") pod \"dnsmasq-dns-545dc78c-24gzk\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.060103 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-ovsdbserver-sb\") pod \"dnsmasq-dns-545dc78c-24gzk\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.060155 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-dns-swift-storage-0\") pod \"dnsmasq-dns-545dc78c-24gzk\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.060220 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqt2w\" (UniqueName: \"kubernetes.io/projected/47b6d75e-5f52-46b7-ad81-0efc8ae08807-kube-api-access-fqt2w\") pod \"dnsmasq-dns-545dc78c-24gzk\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.060311 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b419234-067e-41ef-8382-55b89cafa33a-config-data-custom\") pod \"heat-engine-588c6684fb-k5v22\" (UID: \"9b419234-067e-41ef-8382-55b89cafa33a\") " pod="openstack/heat-engine-588c6684fb-k5v22" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.060354 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-dns-svc\") pod \"dnsmasq-dns-545dc78c-24gzk\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.060404 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht7g6\" (UniqueName: \"kubernetes.io/projected/9b419234-067e-41ef-8382-55b89cafa33a-kube-api-access-ht7g6\") pod \"heat-engine-588c6684fb-k5v22\" (UID: \"9b419234-067e-41ef-8382-55b89cafa33a\") " 
pod="openstack/heat-engine-588c6684fb-k5v22" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.060440 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b419234-067e-41ef-8382-55b89cafa33a-config-data\") pod \"heat-engine-588c6684fb-k5v22\" (UID: \"9b419234-067e-41ef-8382-55b89cafa33a\") " pod="openstack/heat-engine-588c6684fb-k5v22" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.060471 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b419234-067e-41ef-8382-55b89cafa33a-combined-ca-bundle\") pod \"heat-engine-588c6684fb-k5v22\" (UID: \"9b419234-067e-41ef-8382-55b89cafa33a\") " pod="openstack/heat-engine-588c6684fb-k5v22" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.060502 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-config\") pod \"dnsmasq-dns-545dc78c-24gzk\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.073792 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7ccc99c6fd-fx7s5"] Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.093881 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht7g6\" (UniqueName: \"kubernetes.io/projected/9b419234-067e-41ef-8382-55b89cafa33a-kube-api-access-ht7g6\") pod \"heat-engine-588c6684fb-k5v22\" (UID: \"9b419234-067e-41ef-8382-55b89cafa33a\") " pod="openstack/heat-engine-588c6684fb-k5v22" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.117359 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-bbd685659-f7cgg"] Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.118464 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b419234-067e-41ef-8382-55b89cafa33a-combined-ca-bundle\") pod \"heat-engine-588c6684fb-k5v22\" (UID: \"9b419234-067e-41ef-8382-55b89cafa33a\") " pod="openstack/heat-engine-588c6684fb-k5v22" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.118704 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-bbd685659-f7cgg" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.124196 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.138695 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b419234-067e-41ef-8382-55b89cafa33a-config-data\") pod \"heat-engine-588c6684fb-k5v22\" (UID: \"9b419234-067e-41ef-8382-55b89cafa33a\") " pod="openstack/heat-engine-588c6684fb-k5v22" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.139061 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b419234-067e-41ef-8382-55b89cafa33a-config-data-custom\") pod \"heat-engine-588c6684fb-k5v22\" (UID: \"9b419234-067e-41ef-8382-55b89cafa33a\") " pod="openstack/heat-engine-588c6684fb-k5v22" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.162813 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9954x\" (UniqueName: \"kubernetes.io/projected/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-kube-api-access-9954x\") pod \"heat-api-7ccc99c6fd-fx7s5\" (UID: \"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80\") " pod="openstack/heat-api-7ccc99c6fd-fx7s5" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.163354 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-config\") pod \"dnsmasq-dns-545dc78c-24gzk\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.163536 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-ovsdbserver-nb\") pod \"dnsmasq-dns-545dc78c-24gzk\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.163619 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-config-data\") pod \"heat-api-7ccc99c6fd-fx7s5\" (UID: \"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80\") " pod="openstack/heat-api-7ccc99c6fd-fx7s5" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.163721 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-ovsdbserver-sb\") pod \"dnsmasq-dns-545dc78c-24gzk\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.163814 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-dns-swift-storage-0\") pod \"dnsmasq-dns-545dc78c-24gzk\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.163883 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-config-data-custom\") pod \"heat-api-7ccc99c6fd-fx7s5\" (UID: \"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80\") " pod="openstack/heat-api-7ccc99c6fd-fx7s5" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.163973 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-combined-ca-bundle\") pod \"heat-api-7ccc99c6fd-fx7s5\" (UID: \"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80\") " pod="openstack/heat-api-7ccc99c6fd-fx7s5" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.164051 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqt2w\" (UniqueName: \"kubernetes.io/projected/47b6d75e-5f52-46b7-ad81-0efc8ae08807-kube-api-access-fqt2w\") pod \"dnsmasq-dns-545dc78c-24gzk\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.164186 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-dns-svc\") pod \"dnsmasq-dns-545dc78c-24gzk\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.166492 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-dns-svc\") pod \"dnsmasq-dns-545dc78c-24gzk\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.171090 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-588c6684fb-k5v22" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.175880 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-ovsdbserver-sb\") pod \"dnsmasq-dns-545dc78c-24gzk\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.176731 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-dns-swift-storage-0\") pod \"dnsmasq-dns-545dc78c-24gzk\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.177329 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-config\") pod \"dnsmasq-dns-545dc78c-24gzk\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.177554 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-ovsdbserver-nb\") pod \"dnsmasq-dns-545dc78c-24gzk\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.236196 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqt2w\" (UniqueName: \"kubernetes.io/projected/47b6d75e-5f52-46b7-ad81-0efc8ae08807-kube-api-access-fqt2w\") pod \"dnsmasq-dns-545dc78c-24gzk\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.244771 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-bbd685659-f7cgg"] Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.266088 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-combined-ca-bundle\") pod \"heat-api-7ccc99c6fd-fx7s5\" (UID: \"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80\") " pod="openstack/heat-api-7ccc99c6fd-fx7s5" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.266201 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-combined-ca-bundle\") pod \"heat-cfnapi-bbd685659-f7cgg\" (UID: \"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce\") " pod="openstack/heat-cfnapi-bbd685659-f7cgg" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.266229 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9954x\" (UniqueName: \"kubernetes.io/projected/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-kube-api-access-9954x\") pod \"heat-api-7ccc99c6fd-fx7s5\" (UID: \"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80\") " pod="openstack/heat-api-7ccc99c6fd-fx7s5" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.266267 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-config-data-custom\") pod \"heat-cfnapi-bbd685659-f7cgg\" (UID: \"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce\") " pod="openstack/heat-cfnapi-bbd685659-f7cgg" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.266297 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8lj7\" (UniqueName: \"kubernetes.io/projected/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-kube-api-access-b8lj7\") pod \"heat-cfnapi-bbd685659-f7cgg\" (UID: \"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce\") " pod="openstack/heat-cfnapi-bbd685659-f7cgg" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.266323 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-config-data\") pod \"heat-cfnapi-bbd685659-f7cgg\" (UID: \"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce\") " pod="openstack/heat-cfnapi-bbd685659-f7cgg" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.266360 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-config-data\") pod \"heat-api-7ccc99c6fd-fx7s5\" (UID: \"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80\") " pod="openstack/heat-api-7ccc99c6fd-fx7s5" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.266386 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-config-data-custom\") pod \"heat-api-7ccc99c6fd-fx7s5\" (UID: \"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80\") " pod="openstack/heat-api-7ccc99c6fd-fx7s5" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.274530 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-combined-ca-bundle\") pod \"heat-api-7ccc99c6fd-fx7s5\" (UID: \"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80\") " pod="openstack/heat-api-7ccc99c6fd-fx7s5" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.279370 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-config-data\") pod \"heat-api-7ccc99c6fd-fx7s5\" (UID: \"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80\") " pod="openstack/heat-api-7ccc99c6fd-fx7s5" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.285700 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-config-data-custom\") pod \"heat-api-7ccc99c6fd-fx7s5\" (UID: \"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80\") " pod="openstack/heat-api-7ccc99c6fd-fx7s5" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.304488 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9954x\" (UniqueName: \"kubernetes.io/projected/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-kube-api-access-9954x\") pod \"heat-api-7ccc99c6fd-fx7s5\" (UID: \"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80\") " pod="openstack/heat-api-7ccc99c6fd-fx7s5" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.355693 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.370394 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-combined-ca-bundle\") pod \"heat-cfnapi-bbd685659-f7cgg\" (UID: \"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce\") " pod="openstack/heat-cfnapi-bbd685659-f7cgg" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.370477 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-config-data-custom\") pod \"heat-cfnapi-bbd685659-f7cgg\" (UID: \"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce\") " pod="openstack/heat-cfnapi-bbd685659-f7cgg" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.370515 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8lj7\" (UniqueName: \"kubernetes.io/projected/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-kube-api-access-b8lj7\") pod \"heat-cfnapi-bbd685659-f7cgg\" (UID: \"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce\") " pod="openstack/heat-cfnapi-bbd685659-f7cgg" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.370545 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-config-data\") pod \"heat-cfnapi-bbd685659-f7cgg\" (UID: \"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce\") " pod="openstack/heat-cfnapi-bbd685659-f7cgg" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.380971 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-config-data-custom\") pod \"heat-cfnapi-bbd685659-f7cgg\" (UID: \"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce\") " pod="openstack/heat-cfnapi-bbd685659-f7cgg" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.383843 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-config-data\") pod \"heat-cfnapi-bbd685659-f7cgg\" (UID: \"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce\") " pod="openstack/heat-cfnapi-bbd685659-f7cgg" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.385083 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-combined-ca-bundle\") pod \"heat-cfnapi-bbd685659-f7cgg\" (UID: \"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce\") " pod="openstack/heat-cfnapi-bbd685659-f7cgg" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.414758 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8lj7\" (UniqueName: \"kubernetes.io/projected/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-kube-api-access-b8lj7\") pod \"heat-cfnapi-bbd685659-f7cgg\" (UID: \"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce\") " pod="openstack/heat-cfnapi-bbd685659-f7cgg" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.535113 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7ccc99c6fd-fx7s5" Oct 08 14:42:41 crc kubenswrapper[4624]: I1008 14:42:41.567484 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-bbd685659-f7cgg" Oct 08 14:42:42 crc kubenswrapper[4624]: I1008 14:42:42.433935 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="5d6f1635-4c52-4761-a2c7-38951659c26e" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.170:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 08 14:42:43 crc kubenswrapper[4624]: I1008 14:42:43.412910 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="5d6f1635-4c52-4761-a2c7-38951659c26e" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.170:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 14:42:45 crc kubenswrapper[4624]: I1008 14:42:45.108928 4624 generic.go:334] "Generic (PLEG): container finished" podID="4b203558-1aea-4672-871f-d2dca324a585" containerID="6ec33217cf79e738ccd4fb8b5dfdb26af9d5223b52e13ca9693c35de2207761b" exitCode=137 Oct 08 14:42:45 crc kubenswrapper[4624]: I1008 14:42:45.109221 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6cd65c74-7vqb5" event={"ID":"4b203558-1aea-4672-871f-d2dca324a585","Type":"ContainerDied","Data":"6ec33217cf79e738ccd4fb8b5dfdb26af9d5223b52e13ca9693c35de2207761b"} Oct 08 14:42:45 crc kubenswrapper[4624]: I1008 14:42:45.116128 4624 generic.go:334] "Generic (PLEG): container finished" podID="378be2ad-3335-409f-b2eb-60b3997ed4f8" containerID="f9f47f6fddd5455b79c7a8df01c2e3793521a20e5f22d936d7fe100acfb88684" exitCode=137 Oct 08 14:42:45 crc kubenswrapper[4624]: I1008 14:42:45.116196 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67f45f8444-g8bbs" event={"ID":"378be2ad-3335-409f-b2eb-60b3997ed4f8","Type":"ContainerDied","Data":"f9f47f6fddd5455b79c7a8df01c2e3793521a20e5f22d936d7fe100acfb88684"} Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.326781 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.657598 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-59d86bf959-vq2ld"] Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.659297 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.662995 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.663171 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.664125 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.672816 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-59d86bf959-vq2ld"] Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.695409 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d7c08e42-5aca-4394-952c-5649ba096a8f-etc-swift\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.695464 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c08e42-5aca-4394-952c-5649ba096a8f-combined-ca-bundle\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.695503 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56268\" (UniqueName: \"kubernetes.io/projected/d7c08e42-5aca-4394-952c-5649ba096a8f-kube-api-access-56268\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.695668 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7c08e42-5aca-4394-952c-5649ba096a8f-config-data\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.695707 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7c08e42-5aca-4394-952c-5649ba096a8f-public-tls-certs\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.695746 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7c08e42-5aca-4394-952c-5649ba096a8f-internal-tls-certs\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.695790 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7c08e42-5aca-4394-952c-5649ba096a8f-run-httpd\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " 
pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.695898 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7c08e42-5aca-4394-952c-5649ba096a8f-log-httpd\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.798005 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7c08e42-5aca-4394-952c-5649ba096a8f-log-httpd\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.798093 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d7c08e42-5aca-4394-952c-5649ba096a8f-etc-swift\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.798120 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c08e42-5aca-4394-952c-5649ba096a8f-combined-ca-bundle\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.798246 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56268\" (UniqueName: \"kubernetes.io/projected/d7c08e42-5aca-4394-952c-5649ba096a8f-kube-api-access-56268\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.798574 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7c08e42-5aca-4394-952c-5649ba096a8f-log-httpd\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.799113 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7c08e42-5aca-4394-952c-5649ba096a8f-config-data\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.799141 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7c08e42-5aca-4394-952c-5649ba096a8f-public-tls-certs\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.799180 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7c08e42-5aca-4394-952c-5649ba096a8f-internal-tls-certs\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc 
kubenswrapper[4624]: I1008 14:42:46.799197 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7c08e42-5aca-4394-952c-5649ba096a8f-run-httpd\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.799484 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7c08e42-5aca-4394-952c-5649ba096a8f-run-httpd\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.804967 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7c08e42-5aca-4394-952c-5649ba096a8f-internal-tls-certs\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.809403 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7c08e42-5aca-4394-952c-5649ba096a8f-public-tls-certs\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.809581 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c08e42-5aca-4394-952c-5649ba096a8f-combined-ca-bundle\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.815788 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d7c08e42-5aca-4394-952c-5649ba096a8f-etc-swift\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.819061 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7c08e42-5aca-4394-952c-5649ba096a8f-config-data\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.819438 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56268\" (UniqueName: \"kubernetes.io/projected/d7c08e42-5aca-4394-952c-5649ba096a8f-kube-api-access-56268\") pod \"swift-proxy-59d86bf959-vq2ld\" (UID: \"d7c08e42-5aca-4394-952c-5649ba096a8f\") " pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:46 crc kubenswrapper[4624]: I1008 14:42:46.980937 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.737674 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7674ff4df6-6crwz"] Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.738977 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7674ff4df6-6crwz" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.755857 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-775f4cbd6b-68btn"] Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.757332 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-775f4cbd6b-68btn" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.774972 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5d66f5894-xhq8d"] Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.782429 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5d66f5894-xhq8d" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.815142 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5d66f5894-xhq8d"] Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.819771 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xstc\" (UniqueName: \"kubernetes.io/projected/39ab7f72-0992-4fcb-aee5-1db66762d854-kube-api-access-8xstc\") pod \"heat-api-5d66f5894-xhq8d\" (UID: \"39ab7f72-0992-4fcb-aee5-1db66762d854\") " pod="openstack/heat-api-5d66f5894-xhq8d" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.819814 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93cce62f-6d52-4afd-aa59-e2adac63a30f-config-data\") pod \"heat-engine-7674ff4df6-6crwz\" (UID: \"93cce62f-6d52-4afd-aa59-e2adac63a30f\") " pod="openstack/heat-engine-7674ff4df6-6crwz" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.819876 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11363bb9-1c0c-46f9-a40f-85537a193cb5-config-data\") pod \"heat-cfnapi-775f4cbd6b-68btn\" (UID: \"11363bb9-1c0c-46f9-a40f-85537a193cb5\") " pod="openstack/heat-cfnapi-775f4cbd6b-68btn" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.819903 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11363bb9-1c0c-46f9-a40f-85537a193cb5-combined-ca-bundle\") pod \"heat-cfnapi-775f4cbd6b-68btn\" (UID: \"11363bb9-1c0c-46f9-a40f-85537a193cb5\") " pod="openstack/heat-cfnapi-775f4cbd6b-68btn" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.819930 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbxp2\" (UniqueName: \"kubernetes.io/projected/93cce62f-6d52-4afd-aa59-e2adac63a30f-kube-api-access-mbxp2\") pod \"heat-engine-7674ff4df6-6crwz\" (UID: \"93cce62f-6d52-4afd-aa59-e2adac63a30f\") " pod="openstack/heat-engine-7674ff4df6-6crwz" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.819951 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93cce62f-6d52-4afd-aa59-e2adac63a30f-config-data-custom\") pod \"heat-engine-7674ff4df6-6crwz\" (UID: \"93cce62f-6d52-4afd-aa59-e2adac63a30f\") " pod="openstack/heat-engine-7674ff4df6-6crwz" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.819968 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11363bb9-1c0c-46f9-a40f-85537a193cb5-config-data-custom\") pod \"heat-cfnapi-775f4cbd6b-68btn\" (UID: \"11363bb9-1c0c-46f9-a40f-85537a193cb5\") " pod="openstack/heat-cfnapi-775f4cbd6b-68btn" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.819986 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93cce62f-6d52-4afd-aa59-e2adac63a30f-combined-ca-bundle\") pod \"heat-engine-7674ff4df6-6crwz\" (UID: \"93cce62f-6d52-4afd-aa59-e2adac63a30f\") " pod="openstack/heat-engine-7674ff4df6-6crwz" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.820024 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39ab7f72-0992-4fcb-aee5-1db66762d854-config-data-custom\") pod \"heat-api-5d66f5894-xhq8d\" (UID: \"39ab7f72-0992-4fcb-aee5-1db66762d854\") " pod="openstack/heat-api-5d66f5894-xhq8d" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.820059 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39ab7f72-0992-4fcb-aee5-1db66762d854-combined-ca-bundle\") pod \"heat-api-5d66f5894-xhq8d\" (UID: \"39ab7f72-0992-4fcb-aee5-1db66762d854\") " pod="openstack/heat-api-5d66f5894-xhq8d" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.820075 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39ab7f72-0992-4fcb-aee5-1db66762d854-config-data\") pod \"heat-api-5d66f5894-xhq8d\" (UID: \"39ab7f72-0992-4fcb-aee5-1db66762d854\") " pod="openstack/heat-api-5d66f5894-xhq8d" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.820116 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44b8x\" (UniqueName: \"kubernetes.io/projected/11363bb9-1c0c-46f9-a40f-85537a193cb5-kube-api-access-44b8x\") pod \"heat-cfnapi-775f4cbd6b-68btn\" (UID: \"11363bb9-1c0c-46f9-a40f-85537a193cb5\") " pod="openstack/heat-cfnapi-775f4cbd6b-68btn" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.861712 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7674ff4df6-6crwz"] Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.898736 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-775f4cbd6b-68btn"] Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.921392 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93cce62f-6d52-4afd-aa59-e2adac63a30f-combined-ca-bundle\") pod \"heat-engine-7674ff4df6-6crwz\" (UID: \"93cce62f-6d52-4afd-aa59-e2adac63a30f\") " pod="openstack/heat-engine-7674ff4df6-6crwz" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.921488 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39ab7f72-0992-4fcb-aee5-1db66762d854-config-data-custom\") pod \"heat-api-5d66f5894-xhq8d\" (UID: \"39ab7f72-0992-4fcb-aee5-1db66762d854\") " pod="openstack/heat-api-5d66f5894-xhq8d" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.921543 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39ab7f72-0992-4fcb-aee5-1db66762d854-combined-ca-bundle\") pod \"heat-api-5d66f5894-xhq8d\" (UID: \"39ab7f72-0992-4fcb-aee5-1db66762d854\") " pod="openstack/heat-api-5d66f5894-xhq8d" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.921568 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39ab7f72-0992-4fcb-aee5-1db66762d854-config-data\") pod \"heat-api-5d66f5894-xhq8d\" (UID: \"39ab7f72-0992-4fcb-aee5-1db66762d854\") " pod="openstack/heat-api-5d66f5894-xhq8d" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.921618 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44b8x\" (UniqueName: \"kubernetes.io/projected/11363bb9-1c0c-46f9-a40f-85537a193cb5-kube-api-access-44b8x\") pod \"heat-cfnapi-775f4cbd6b-68btn\" (UID: \"11363bb9-1c0c-46f9-a40f-85537a193cb5\") " pod="openstack/heat-cfnapi-775f4cbd6b-68btn" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.921699 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xstc\" (UniqueName: \"kubernetes.io/projected/39ab7f72-0992-4fcb-aee5-1db66762d854-kube-api-access-8xstc\") pod \"heat-api-5d66f5894-xhq8d\" (UID: \"39ab7f72-0992-4fcb-aee5-1db66762d854\") " pod="openstack/heat-api-5d66f5894-xhq8d" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.921730 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93cce62f-6d52-4afd-aa59-e2adac63a30f-config-data\") pod \"heat-engine-7674ff4df6-6crwz\" (UID: \"93cce62f-6d52-4afd-aa59-e2adac63a30f\") " pod="openstack/heat-engine-7674ff4df6-6crwz" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.921789 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11363bb9-1c0c-46f9-a40f-85537a193cb5-config-data\") pod \"heat-cfnapi-775f4cbd6b-68btn\" (UID: \"11363bb9-1c0c-46f9-a40f-85537a193cb5\") " pod="openstack/heat-cfnapi-775f4cbd6b-68btn" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.921812 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11363bb9-1c0c-46f9-a40f-85537a193cb5-combined-ca-bundle\") pod \"heat-cfnapi-775f4cbd6b-68btn\" (UID: \"11363bb9-1c0c-46f9-a40f-85537a193cb5\") " pod="openstack/heat-cfnapi-775f4cbd6b-68btn" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.921840 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbxp2\" (UniqueName: \"kubernetes.io/projected/93cce62f-6d52-4afd-aa59-e2adac63a30f-kube-api-access-mbxp2\") pod \"heat-engine-7674ff4df6-6crwz\" (UID: \"93cce62f-6d52-4afd-aa59-e2adac63a30f\") " pod="openstack/heat-engine-7674ff4df6-6crwz" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.921866 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93cce62f-6d52-4afd-aa59-e2adac63a30f-config-data-custom\") pod \"heat-engine-7674ff4df6-6crwz\" (UID: \"93cce62f-6d52-4afd-aa59-e2adac63a30f\") " pod="openstack/heat-engine-7674ff4df6-6crwz" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.921890 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/11363bb9-1c0c-46f9-a40f-85537a193cb5-config-data-custom\") pod \"heat-cfnapi-775f4cbd6b-68btn\" (UID: \"11363bb9-1c0c-46f9-a40f-85537a193cb5\") " pod="openstack/heat-cfnapi-775f4cbd6b-68btn" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.933268 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11363bb9-1c0c-46f9-a40f-85537a193cb5-combined-ca-bundle\") pod \"heat-cfnapi-775f4cbd6b-68btn\" (UID: \"11363bb9-1c0c-46f9-a40f-85537a193cb5\") " pod="openstack/heat-cfnapi-775f4cbd6b-68btn" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.934941 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93cce62f-6d52-4afd-aa59-e2adac63a30f-config-data\") pod \"heat-engine-7674ff4df6-6crwz\" (UID: \"93cce62f-6d52-4afd-aa59-e2adac63a30f\") " pod="openstack/heat-engine-7674ff4df6-6crwz" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.937347 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93cce62f-6d52-4afd-aa59-e2adac63a30f-combined-ca-bundle\") pod \"heat-engine-7674ff4df6-6crwz\" (UID: \"93cce62f-6d52-4afd-aa59-e2adac63a30f\") " pod="openstack/heat-engine-7674ff4df6-6crwz" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.938754 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39ab7f72-0992-4fcb-aee5-1db66762d854-combined-ca-bundle\") pod \"heat-api-5d66f5894-xhq8d\" (UID: \"39ab7f72-0992-4fcb-aee5-1db66762d854\") " pod="openstack/heat-api-5d66f5894-xhq8d" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.939266 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11363bb9-1c0c-46f9-a40f-85537a193cb5-config-data\") pod \"heat-cfnapi-775f4cbd6b-68btn\" (UID: \"11363bb9-1c0c-46f9-a40f-85537a193cb5\") " pod="openstack/heat-cfnapi-775f4cbd6b-68btn" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.939531 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11363bb9-1c0c-46f9-a40f-85537a193cb5-config-data-custom\") pod \"heat-cfnapi-775f4cbd6b-68btn\" (UID: \"11363bb9-1c0c-46f9-a40f-85537a193cb5\") " pod="openstack/heat-cfnapi-775f4cbd6b-68btn" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.943921 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39ab7f72-0992-4fcb-aee5-1db66762d854-config-data\") pod \"heat-api-5d66f5894-xhq8d\" (UID: \"39ab7f72-0992-4fcb-aee5-1db66762d854\") " pod="openstack/heat-api-5d66f5894-xhq8d" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.944585 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93cce62f-6d52-4afd-aa59-e2adac63a30f-config-data-custom\") pod \"heat-engine-7674ff4df6-6crwz\" (UID: \"93cce62f-6d52-4afd-aa59-e2adac63a30f\") " pod="openstack/heat-engine-7674ff4df6-6crwz" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.963514 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xstc\" (UniqueName: \"kubernetes.io/projected/39ab7f72-0992-4fcb-aee5-1db66762d854-kube-api-access-8xstc\") pod \"heat-api-5d66f5894-xhq8d\" (UID: 
\"39ab7f72-0992-4fcb-aee5-1db66762d854\") " pod="openstack/heat-api-5d66f5894-xhq8d" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.966520 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44b8x\" (UniqueName: \"kubernetes.io/projected/11363bb9-1c0c-46f9-a40f-85537a193cb5-kube-api-access-44b8x\") pod \"heat-cfnapi-775f4cbd6b-68btn\" (UID: \"11363bb9-1c0c-46f9-a40f-85537a193cb5\") " pod="openstack/heat-cfnapi-775f4cbd6b-68btn" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.985341 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39ab7f72-0992-4fcb-aee5-1db66762d854-config-data-custom\") pod \"heat-api-5d66f5894-xhq8d\" (UID: \"39ab7f72-0992-4fcb-aee5-1db66762d854\") " pod="openstack/heat-api-5d66f5894-xhq8d" Oct 08 14:42:47 crc kubenswrapper[4624]: I1008 14:42:47.985842 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbxp2\" (UniqueName: \"kubernetes.io/projected/93cce62f-6d52-4afd-aa59-e2adac63a30f-kube-api-access-mbxp2\") pod \"heat-engine-7674ff4df6-6crwz\" (UID: \"93cce62f-6d52-4afd-aa59-e2adac63a30f\") " pod="openstack/heat-engine-7674ff4df6-6crwz" Oct 08 14:42:48 crc kubenswrapper[4624]: I1008 14:42:48.070142 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7674ff4df6-6crwz" Oct 08 14:42:48 crc kubenswrapper[4624]: I1008 14:42:48.106326 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-775f4cbd6b-68btn" Oct 08 14:42:48 crc kubenswrapper[4624]: I1008 14:42:48.122180 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5d66f5894-xhq8d" Oct 08 14:42:49 crc kubenswrapper[4624]: E1008 14:42:49.768383 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-openstackclient:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:42:49 crc kubenswrapper[4624]: E1008 14:42:49.768707 4624 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-openstackclient:b78cfc68a577b1553523c8a70a34e297" Oct 08 14:42:49 crc kubenswrapper[4624]: E1008 14:42:49.768844 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:38.102.83.103:5001/podified-antelope-centos9/openstack-openstackclient:b78cfc68a577b1553523c8a70a34e297,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n645h5f9hc6h64dh679hbdh65h654hch6h658h54bh644h64fh547h5bh5f6h5fdh79hb4h685h75h6fh86h5b8hb9h7ch556h686h5bchb8h5f4q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf9t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(c515985c-9b57-4136-bf01-b872e9caaec9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 14:42:49 crc kubenswrapper[4624]: E1008 14:42:49.770412 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="c515985c-9b57-4136-bf01-b872e9caaec9" Oct 08 14:42:49 crc kubenswrapper[4624]: I1008 14:42:49.968916 4624 scope.go:117] "RemoveContainer" containerID="57788224cfc376e852cc6304d54fbbb2e770efd9c1e7f9ffeabf9113a74fa988" Oct 08 14:42:50 crc kubenswrapper[4624]: I1008 14:42:50.201960 4624 scope.go:117] "RemoveContainer" containerID="7ece144ab0d4935be29267e22c484c9b28ee1f11f24c3af44bf004228fea9832" Oct 08 14:42:50 crc kubenswrapper[4624]: E1008 14:42:50.202897 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ece144ab0d4935be29267e22c484c9b28ee1f11f24c3af44bf004228fea9832\": container with ID starting with 7ece144ab0d4935be29267e22c484c9b28ee1f11f24c3af44bf004228fea9832 not found: ID does not exist" 
containerID="7ece144ab0d4935be29267e22c484c9b28ee1f11f24c3af44bf004228fea9832" Oct 08 14:42:50 crc kubenswrapper[4624]: I1008 14:42:50.202920 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ece144ab0d4935be29267e22c484c9b28ee1f11f24c3af44bf004228fea9832"} err="failed to get container status \"7ece144ab0d4935be29267e22c484c9b28ee1f11f24c3af44bf004228fea9832\": rpc error: code = NotFound desc = could not find container \"7ece144ab0d4935be29267e22c484c9b28ee1f11f24c3af44bf004228fea9832\": container with ID starting with 7ece144ab0d4935be29267e22c484c9b28ee1f11f24c3af44bf004228fea9832 not found: ID does not exist" Oct 08 14:42:50 crc kubenswrapper[4624]: I1008 14:42:50.202939 4624 scope.go:117] "RemoveContainer" containerID="57788224cfc376e852cc6304d54fbbb2e770efd9c1e7f9ffeabf9113a74fa988" Oct 08 14:42:50 crc kubenswrapper[4624]: E1008 14:42:50.216078 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57788224cfc376e852cc6304d54fbbb2e770efd9c1e7f9ffeabf9113a74fa988\": container with ID starting with 57788224cfc376e852cc6304d54fbbb2e770efd9c1e7f9ffeabf9113a74fa988 not found: ID does not exist" containerID="57788224cfc376e852cc6304d54fbbb2e770efd9c1e7f9ffeabf9113a74fa988" Oct 08 14:42:50 crc kubenswrapper[4624]: I1008 14:42:50.216152 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57788224cfc376e852cc6304d54fbbb2e770efd9c1e7f9ffeabf9113a74fa988"} err="failed to get container status \"57788224cfc376e852cc6304d54fbbb2e770efd9c1e7f9ffeabf9113a74fa988\": rpc error: code = NotFound desc = could not find container \"57788224cfc376e852cc6304d54fbbb2e770efd9c1e7f9ffeabf9113a74fa988\": container with ID starting with 57788224cfc376e852cc6304d54fbbb2e770efd9c1e7f9ffeabf9113a74fa988 not found: ID does not exist" Oct 08 14:42:50 crc kubenswrapper[4624]: E1008 14:42:50.220760 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.103:5001/podified-antelope-centos9/openstack-openstackclient:b78cfc68a577b1553523c8a70a34e297\\\"\"" pod="openstack/openstackclient" podUID="c515985c-9b57-4136-bf01-b872e9caaec9" Oct 08 14:42:50 crc kubenswrapper[4624]: W1008 14:42:50.767901 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93cce62f_6d52_4afd_aa59_e2adac63a30f.slice/crio-4230de3987a092ffcab16d0de05eb94828d3d87cbc70ac6695a01edc4f1624fd WatchSource:0}: Error finding container 4230de3987a092ffcab16d0de05eb94828d3d87cbc70ac6695a01edc4f1624fd: Status 404 returned error can't find the container with id 4230de3987a092ffcab16d0de05eb94828d3d87cbc70ac6695a01edc4f1624fd Oct 08 14:42:50 crc kubenswrapper[4624]: I1008 14:42:50.822021 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7674ff4df6-6crwz"] Oct 08 14:42:50 crc kubenswrapper[4624]: I1008 14:42:50.841933 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-bbd685659-f7cgg"] Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.233047 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67f45f8444-g8bbs" event={"ID":"378be2ad-3335-409f-b2eb-60b3997ed4f8","Type":"ContainerStarted","Data":"da715b96eb2640f5105e03011f83947a4366c774c3377e422913cdf2a5cee143"} Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 
14:42:51.262826 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7674ff4df6-6crwz" event={"ID":"93cce62f-6d52-4afd-aa59-e2adac63a30f","Type":"ContainerStarted","Data":"03797685fbc2431c2b37ae0fc4419813632856523458b505f6ab4734684c4696"} Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.262894 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7674ff4df6-6crwz" event={"ID":"93cce62f-6d52-4afd-aa59-e2adac63a30f","Type":"ContainerStarted","Data":"4230de3987a092ffcab16d0de05eb94828d3d87cbc70ac6695a01edc4f1624fd"} Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.264072 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7674ff4df6-6crwz" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.268425 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-bbd685659-f7cgg" event={"ID":"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce","Type":"ContainerStarted","Data":"bc4beffb46ea49d66b88aebd49e45f5cfae3d80eceed9d5b495d5ac6f35b37e3"} Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.282978 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6cd65c74-7vqb5" event={"ID":"4b203558-1aea-4672-871f-d2dca324a585","Type":"ContainerStarted","Data":"2adb37d7e1e3ae4257d4557e08b6dab44a21981eeeec3209e1f7922febff5e54"} Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.321635 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7674ff4df6-6crwz" podStartSLOduration=4.321614043 podStartE2EDuration="4.321614043s" podCreationTimestamp="2025-10-08 14:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:42:51.320937456 +0000 UTC m=+1196.471872543" watchObservedRunningTime="2025-10-08 14:42:51.321614043 +0000 UTC m=+1196.472549140" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.374804 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5d66f5894-xhq8d"] Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.429885 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7ccc99c6fd-fx7s5"] Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.509395 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-775f4cbd6b-68btn"] Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.509434 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-79db6c47d5-q6dxb"] Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.510556 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-79db6c47d5-q6dxb" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.523073 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-bbd685659-f7cgg"] Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.528893 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.529108 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.545275 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-545dc78c-24gzk"] Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.569540 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-79db6c47d5-q6dxb"] Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.591311 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7ccc99c6fd-fx7s5"] Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.601136 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-c8c76b4d4-k9vfh"] Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.602628 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.615100 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.615310 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.631812 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8494534-9935-4f78-9571-b03ff870b8ac-public-tls-certs\") pod \"heat-api-79db6c47d5-q6dxb\" (UID: \"a8494534-9935-4f78-9571-b03ff870b8ac\") " pod="openstack/heat-api-79db6c47d5-q6dxb" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.632003 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8494534-9935-4f78-9571-b03ff870b8ac-config-data\") pod \"heat-api-79db6c47d5-q6dxb\" (UID: \"a8494534-9935-4f78-9571-b03ff870b8ac\") " pod="openstack/heat-api-79db6c47d5-q6dxb" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.632193 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8494534-9935-4f78-9571-b03ff870b8ac-combined-ca-bundle\") pod \"heat-api-79db6c47d5-q6dxb\" (UID: \"a8494534-9935-4f78-9571-b03ff870b8ac\") " pod="openstack/heat-api-79db6c47d5-q6dxb" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.632373 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6qzn\" (UniqueName: \"kubernetes.io/projected/a8494534-9935-4f78-9571-b03ff870b8ac-kube-api-access-w6qzn\") pod \"heat-api-79db6c47d5-q6dxb\" (UID: \"a8494534-9935-4f78-9571-b03ff870b8ac\") " pod="openstack/heat-api-79db6c47d5-q6dxb" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.632452 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/a8494534-9935-4f78-9571-b03ff870b8ac-config-data-custom\") pod \"heat-api-79db6c47d5-q6dxb\" (UID: \"a8494534-9935-4f78-9571-b03ff870b8ac\") " pod="openstack/heat-api-79db6c47d5-q6dxb" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.632542 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8494534-9935-4f78-9571-b03ff870b8ac-internal-tls-certs\") pod \"heat-api-79db6c47d5-q6dxb\" (UID: \"a8494534-9935-4f78-9571-b03ff870b8ac\") " pod="openstack/heat-api-79db6c47d5-q6dxb" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.680726 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-c8c76b4d4-k9vfh"] Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.727077 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.738590 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6qzn\" (UniqueName: \"kubernetes.io/projected/a8494534-9935-4f78-9571-b03ff870b8ac-kube-api-access-w6qzn\") pod \"heat-api-79db6c47d5-q6dxb\" (UID: \"a8494534-9935-4f78-9571-b03ff870b8ac\") " pod="openstack/heat-api-79db6c47d5-q6dxb" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.739095 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8494534-9935-4f78-9571-b03ff870b8ac-config-data-custom\") pod \"heat-api-79db6c47d5-q6dxb\" (UID: \"a8494534-9935-4f78-9571-b03ff870b8ac\") " pod="openstack/heat-api-79db6c47d5-q6dxb" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.739188 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwbsn\" (UniqueName: \"kubernetes.io/projected/bcf01908-e783-4491-8047-ef1053a2b87b-kube-api-access-wwbsn\") pod \"heat-cfnapi-c8c76b4d4-k9vfh\" (UID: \"bcf01908-e783-4491-8047-ef1053a2b87b\") " pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.739291 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bcf01908-e783-4491-8047-ef1053a2b87b-config-data-custom\") pod \"heat-cfnapi-c8c76b4d4-k9vfh\" (UID: \"bcf01908-e783-4491-8047-ef1053a2b87b\") " pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.739366 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8494534-9935-4f78-9571-b03ff870b8ac-internal-tls-certs\") pod \"heat-api-79db6c47d5-q6dxb\" (UID: \"a8494534-9935-4f78-9571-b03ff870b8ac\") " pod="openstack/heat-api-79db6c47d5-q6dxb" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.739431 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf01908-e783-4491-8047-ef1053a2b87b-combined-ca-bundle\") pod \"heat-cfnapi-c8c76b4d4-k9vfh\" (UID: \"bcf01908-e783-4491-8047-ef1053a2b87b\") " pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.739526 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bcf01908-e783-4491-8047-ef1053a2b87b-config-data\") pod \"heat-cfnapi-c8c76b4d4-k9vfh\" (UID: \"bcf01908-e783-4491-8047-ef1053a2b87b\") " pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.739595 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcf01908-e783-4491-8047-ef1053a2b87b-internal-tls-certs\") pod \"heat-cfnapi-c8c76b4d4-k9vfh\" (UID: \"bcf01908-e783-4491-8047-ef1053a2b87b\") " pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.739706 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8494534-9935-4f78-9571-b03ff870b8ac-public-tls-certs\") pod \"heat-api-79db6c47d5-q6dxb\" (UID: \"a8494534-9935-4f78-9571-b03ff870b8ac\") " pod="openstack/heat-api-79db6c47d5-q6dxb" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.739801 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8494534-9935-4f78-9571-b03ff870b8ac-config-data\") pod \"heat-api-79db6c47d5-q6dxb\" (UID: \"a8494534-9935-4f78-9571-b03ff870b8ac\") " pod="openstack/heat-api-79db6c47d5-q6dxb" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.739905 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcf01908-e783-4491-8047-ef1053a2b87b-public-tls-certs\") pod \"heat-cfnapi-c8c76b4d4-k9vfh\" (UID: \"bcf01908-e783-4491-8047-ef1053a2b87b\") " pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.739997 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8494534-9935-4f78-9571-b03ff870b8ac-combined-ca-bundle\") pod \"heat-api-79db6c47d5-q6dxb\" (UID: \"a8494534-9935-4f78-9571-b03ff870b8ac\") " pod="openstack/heat-api-79db6c47d5-q6dxb" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.750589 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8494534-9935-4f78-9571-b03ff870b8ac-config-data-custom\") pod \"heat-api-79db6c47d5-q6dxb\" (UID: \"a8494534-9935-4f78-9571-b03ff870b8ac\") " pod="openstack/heat-api-79db6c47d5-q6dxb" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.752329 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8494534-9935-4f78-9571-b03ff870b8ac-internal-tls-certs\") pod \"heat-api-79db6c47d5-q6dxb\" (UID: \"a8494534-9935-4f78-9571-b03ff870b8ac\") " pod="openstack/heat-api-79db6c47d5-q6dxb" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.753339 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8494534-9935-4f78-9571-b03ff870b8ac-public-tls-certs\") pod \"heat-api-79db6c47d5-q6dxb\" (UID: \"a8494534-9935-4f78-9571-b03ff870b8ac\") " pod="openstack/heat-api-79db6c47d5-q6dxb" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.768223 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8494534-9935-4f78-9571-b03ff870b8ac-config-data\") pod 
\"heat-api-79db6c47d5-q6dxb\" (UID: \"a8494534-9935-4f78-9571-b03ff870b8ac\") " pod="openstack/heat-api-79db6c47d5-q6dxb" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.769027 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8494534-9935-4f78-9571-b03ff870b8ac-combined-ca-bundle\") pod \"heat-api-79db6c47d5-q6dxb\" (UID: \"a8494534-9935-4f78-9571-b03ff870b8ac\") " pod="openstack/heat-api-79db6c47d5-q6dxb" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.771537 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6qzn\" (UniqueName: \"kubernetes.io/projected/a8494534-9935-4f78-9571-b03ff870b8ac-kube-api-access-w6qzn\") pod \"heat-api-79db6c47d5-q6dxb\" (UID: \"a8494534-9935-4f78-9571-b03ff870b8ac\") " pod="openstack/heat-api-79db6c47d5-q6dxb" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.825858 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-588c6684fb-k5v22"] Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.841110 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwbsn\" (UniqueName: \"kubernetes.io/projected/bcf01908-e783-4491-8047-ef1053a2b87b-kube-api-access-wwbsn\") pod \"heat-cfnapi-c8c76b4d4-k9vfh\" (UID: \"bcf01908-e783-4491-8047-ef1053a2b87b\") " pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.841831 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bcf01908-e783-4491-8047-ef1053a2b87b-config-data-custom\") pod \"heat-cfnapi-c8c76b4d4-k9vfh\" (UID: \"bcf01908-e783-4491-8047-ef1053a2b87b\") " pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.841925 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf01908-e783-4491-8047-ef1053a2b87b-combined-ca-bundle\") pod \"heat-cfnapi-c8c76b4d4-k9vfh\" (UID: \"bcf01908-e783-4491-8047-ef1053a2b87b\") " pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.842008 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcf01908-e783-4491-8047-ef1053a2b87b-config-data\") pod \"heat-cfnapi-c8c76b4d4-k9vfh\" (UID: \"bcf01908-e783-4491-8047-ef1053a2b87b\") " pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.842182 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcf01908-e783-4491-8047-ef1053a2b87b-internal-tls-certs\") pod \"heat-cfnapi-c8c76b4d4-k9vfh\" (UID: \"bcf01908-e783-4491-8047-ef1053a2b87b\") " pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.842296 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcf01908-e783-4491-8047-ef1053a2b87b-public-tls-certs\") pod \"heat-cfnapi-c8c76b4d4-k9vfh\" (UID: \"bcf01908-e783-4491-8047-ef1053a2b87b\") " pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.878014 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/swift-proxy-59d86bf959-vq2ld"] Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.881398 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcf01908-e783-4491-8047-ef1053a2b87b-public-tls-certs\") pod \"heat-cfnapi-c8c76b4d4-k9vfh\" (UID: \"bcf01908-e783-4491-8047-ef1053a2b87b\") " pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.881884 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bcf01908-e783-4491-8047-ef1053a2b87b-config-data-custom\") pod \"heat-cfnapi-c8c76b4d4-k9vfh\" (UID: \"bcf01908-e783-4491-8047-ef1053a2b87b\") " pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.891177 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcf01908-e783-4491-8047-ef1053a2b87b-config-data\") pod \"heat-cfnapi-c8c76b4d4-k9vfh\" (UID: \"bcf01908-e783-4491-8047-ef1053a2b87b\") " pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.891983 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf01908-e783-4491-8047-ef1053a2b87b-combined-ca-bundle\") pod \"heat-cfnapi-c8c76b4d4-k9vfh\" (UID: \"bcf01908-e783-4491-8047-ef1053a2b87b\") " pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.892789 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-79db6c47d5-q6dxb" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.909488 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcf01908-e783-4491-8047-ef1053a2b87b-internal-tls-certs\") pod \"heat-cfnapi-c8c76b4d4-k9vfh\" (UID: \"bcf01908-e783-4491-8047-ef1053a2b87b\") " pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" Oct 08 14:42:51 crc kubenswrapper[4624]: I1008 14:42:51.943833 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwbsn\" (UniqueName: \"kubernetes.io/projected/bcf01908-e783-4491-8047-ef1053a2b87b-kube-api-access-wwbsn\") pod \"heat-cfnapi-c8c76b4d4-k9vfh\" (UID: \"bcf01908-e783-4491-8047-ef1053a2b87b\") " pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" Oct 08 14:42:52 crc kubenswrapper[4624]: I1008 14:42:52.244604 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" Oct 08 14:42:52 crc kubenswrapper[4624]: I1008 14:42:52.311061 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5a500ae8-578e-4045-8bfb-0a658340dc09","Type":"ContainerStarted","Data":"6ab4b83b10bb6b3cdea85aec8685a762c89626eadfa874588c1f12920fc1d070"} Oct 08 14:42:52 crc kubenswrapper[4624]: I1008 14:42:52.315666 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-59d86bf959-vq2ld" event={"ID":"d7c08e42-5aca-4394-952c-5649ba096a8f","Type":"ContainerStarted","Data":"41aa613836eeb390f7dc498469dff1fb1bd044122020d823ed0cb7bccb167d18"} Oct 08 14:42:52 crc kubenswrapper[4624]: I1008 14:42:52.324387 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545dc78c-24gzk" event={"ID":"47b6d75e-5f52-46b7-ad81-0efc8ae08807","Type":"ContainerStarted","Data":"728ecab953dc8e1975eb909e2666e8409694453c0455c030f52dff45bf2cb060"} Oct 08 14:42:52 crc kubenswrapper[4624]: I1008 14:42:52.327021 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5d66f5894-xhq8d" event={"ID":"39ab7f72-0992-4fcb-aee5-1db66762d854","Type":"ContainerStarted","Data":"438ace51b754fcdb62e8aac945c2e9db6dadd33908363d3cb40426b0b8faf51e"} Oct 08 14:42:52 crc kubenswrapper[4624]: I1008 14:42:52.328795 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-775f4cbd6b-68btn" event={"ID":"11363bb9-1c0c-46f9-a40f-85537a193cb5","Type":"ContainerStarted","Data":"6de274e2cef253be03aaa80ce0b2718cdcfbfb0d79d2f53d4b6d776a6a4821f7"} Oct 08 14:42:52 crc kubenswrapper[4624]: I1008 14:42:52.331049 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-588c6684fb-k5v22" event={"ID":"9b419234-067e-41ef-8382-55b89cafa33a","Type":"ContainerStarted","Data":"0589f5c37fc2b02138f4d88ab91d33cd7322626627f795cc6e7fd6d82dd7749c"} Oct 08 14:42:52 crc kubenswrapper[4624]: I1008 14:42:52.335384 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7ccc99c6fd-fx7s5" event={"ID":"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80","Type":"ContainerStarted","Data":"f89dadfb7b5547d9863418f0ed6750b37b871ced2d7a5ddbf1869cea60cb1f29"} Oct 08 14:42:53 crc kubenswrapper[4624]: I1008 14:42:53.027455 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-79db6c47d5-q6dxb"] Oct 08 14:42:53 crc kubenswrapper[4624]: I1008 14:42:53.198226 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-c8c76b4d4-k9vfh"] Oct 08 14:42:53 crc kubenswrapper[4624]: W1008 14:42:53.235316 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcf01908_e783_4491_8047_ef1053a2b87b.slice/crio-7843914e30f2ad7425b2554988f554f8398f06aa6c5ccc2516c92ab0ea903e6e WatchSource:0}: Error finding container 7843914e30f2ad7425b2554988f554f8398f06aa6c5ccc2516c92ab0ea903e6e: Status 404 returned error can't find the container with id 7843914e30f2ad7425b2554988f554f8398f06aa6c5ccc2516c92ab0ea903e6e Oct 08 14:42:53 crc kubenswrapper[4624]: I1008 14:42:53.367350 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-79db6c47d5-q6dxb" event={"ID":"a8494534-9935-4f78-9571-b03ff870b8ac","Type":"ContainerStarted","Data":"29ad688e012f90924badad9cff36b6b0bd8b041d1049b28841936ae352eaa5de"} Oct 08 14:42:53 crc kubenswrapper[4624]: I1008 14:42:53.369338 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-engine-588c6684fb-k5v22" event={"ID":"9b419234-067e-41ef-8382-55b89cafa33a","Type":"ContainerStarted","Data":"862428038abd3c46773873307abacf21670979e7d247aad4ee0e43fd2bbe6fa4"} Oct 08 14:42:53 crc kubenswrapper[4624]: I1008 14:42:53.369426 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-588c6684fb-k5v22" Oct 08 14:42:53 crc kubenswrapper[4624]: I1008 14:42:53.371809 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" event={"ID":"bcf01908-e783-4491-8047-ef1053a2b87b","Type":"ContainerStarted","Data":"7843914e30f2ad7425b2554988f554f8398f06aa6c5ccc2516c92ab0ea903e6e"} Oct 08 14:42:53 crc kubenswrapper[4624]: I1008 14:42:53.379060 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-59d86bf959-vq2ld" event={"ID":"d7c08e42-5aca-4394-952c-5649ba096a8f","Type":"ContainerStarted","Data":"fba28e9ea3d8695c81b56aef31f56da5c5b226d2e3ddc441ab93dae92c6ad06d"} Oct 08 14:42:53 crc kubenswrapper[4624]: I1008 14:42:53.390432 4624 generic.go:334] "Generic (PLEG): container finished" podID="47b6d75e-5f52-46b7-ad81-0efc8ae08807" containerID="3770c773277619224ad2fb196cebc40641a6445d859832e277661ddd7c03b540" exitCode=0 Oct 08 14:42:53 crc kubenswrapper[4624]: I1008 14:42:53.390546 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545dc78c-24gzk" event={"ID":"47b6d75e-5f52-46b7-ad81-0efc8ae08807","Type":"ContainerDied","Data":"3770c773277619224ad2fb196cebc40641a6445d859832e277661ddd7c03b540"} Oct 08 14:42:53 crc kubenswrapper[4624]: I1008 14:42:53.438058 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-588c6684fb-k5v22" podStartSLOduration=13.438034441 podStartE2EDuration="13.438034441s" podCreationTimestamp="2025-10-08 14:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:42:53.389430773 +0000 UTC m=+1198.540365940" watchObservedRunningTime="2025-10-08 14:42:53.438034441 +0000 UTC m=+1198.588969528" Oct 08 14:42:54 crc kubenswrapper[4624]: I1008 14:42:54.425663 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545dc78c-24gzk" event={"ID":"47b6d75e-5f52-46b7-ad81-0efc8ae08807","Type":"ContainerStarted","Data":"808dc982a8094fb2eaf3059e7f6204fb3f2d70021c7a1d6e341e12fa6d10d77c"} Oct 08 14:42:54 crc kubenswrapper[4624]: I1008 14:42:54.426253 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:42:54 crc kubenswrapper[4624]: I1008 14:42:54.428455 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5a500ae8-578e-4045-8bfb-0a658340dc09","Type":"ContainerStarted","Data":"23f9f0c114485792eced37bda7a87edaf191e6e189b0c2fac1d60cebfe4eba46"} Oct 08 14:42:54 crc kubenswrapper[4624]: I1008 14:42:54.447731 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-59d86bf959-vq2ld" event={"ID":"d7c08e42-5aca-4394-952c-5649ba096a8f","Type":"ContainerStarted","Data":"b5e92afc2fdfe23cb0acc7eb70b0e68716faff65a14c1063f657664d7fe356b2"} Oct 08 14:42:54 crc kubenswrapper[4624]: I1008 14:42:54.447792 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:54 crc kubenswrapper[4624]: I1008 14:42:54.448766 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:42:54 crc kubenswrapper[4624]: I1008 14:42:54.459145 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-545dc78c-24gzk" podStartSLOduration=14.459109698 podStartE2EDuration="14.459109698s" podCreationTimestamp="2025-10-08 14:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:42:54.447107005 +0000 UTC m=+1199.598042082" watchObservedRunningTime="2025-10-08 14:42:54.459109698 +0000 UTC m=+1199.610044795" Oct 08 14:42:54 crc kubenswrapper[4624]: I1008 14:42:54.493598 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-59d86bf959-vq2ld" podStartSLOduration=8.493576068 podStartE2EDuration="8.493576068s" podCreationTimestamp="2025-10-08 14:42:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:42:54.482806746 +0000 UTC m=+1199.633741823" watchObservedRunningTime="2025-10-08 14:42:54.493576068 +0000 UTC m=+1199.644511145" Oct 08 14:42:54 crc kubenswrapper[4624]: I1008 14:42:54.611103 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:42:54 crc kubenswrapper[4624]: I1008 14:42:54.611410 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:42:54 crc kubenswrapper[4624]: I1008 14:42:54.719724 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:42:54 crc kubenswrapper[4624]: I1008 14:42:54.720635 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:42:55 crc kubenswrapper[4624]: I1008 14:42:55.540982 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5a500ae8-578e-4045-8bfb-0a658340dc09","Type":"ContainerStarted","Data":"ff378367dbb4210722310372cf2fd3c31443b0c75d5644b674ad0d86a6aa1054"} Oct 08 14:42:55 crc kubenswrapper[4624]: I1008 14:42:55.802735 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=17.80271753 podStartE2EDuration="17.80271753s" podCreationTimestamp="2025-10-08 14:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:42:55.790089051 +0000 UTC m=+1200.941024138" watchObservedRunningTime="2025-10-08 14:42:55.80271753 +0000 UTC m=+1200.953652607" Oct 08 14:42:56 crc kubenswrapper[4624]: I1008 14:42:56.340910 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:42:56 crc kubenswrapper[4624]: I1008 14:42:56.341248 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="551042f0-9ae9-49b7-ab92-e1b775c7e742" containerName="ceilometer-central-agent" containerID="cri-o://e1c4508a574487d5b1c5fe0490ed9d056141b31f3a9ac9e87ea1f331de030a3f" gracePeriod=30 Oct 08 14:42:56 crc kubenswrapper[4624]: I1008 14:42:56.341318 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="551042f0-9ae9-49b7-ab92-e1b775c7e742" containerName="proxy-httpd" 
containerID="cri-o://9112550b393920691cb1de6b367b640cc325b5905f197e819aa4d244c7d15d3f" gracePeriod=30 Oct 08 14:42:56 crc kubenswrapper[4624]: I1008 14:42:56.341386 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="551042f0-9ae9-49b7-ab92-e1b775c7e742" containerName="ceilometer-notification-agent" containerID="cri-o://790808c62c0ac26e00901261625803bad40fd27dbf3fe9381129a5ef33c87813" gracePeriod=30 Oct 08 14:42:56 crc kubenswrapper[4624]: I1008 14:42:56.341367 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="551042f0-9ae9-49b7-ab92-e1b775c7e742" containerName="sg-core" containerID="cri-o://56a00c1abe8cde54f32d7df775489195aaf76bd6db06b4dcf0aadeaf89658409" gracePeriod=30 Oct 08 14:42:56 crc kubenswrapper[4624]: I1008 14:42:56.407361 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 14:42:56 crc kubenswrapper[4624]: I1008 14:42:56.407827 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="65ff7a00-cdff-4601-b7c5-31e1d271cfbb" containerName="kube-state-metrics" containerID="cri-o://1b1a577f37721e32b723cf4bf7f834ac15a4fee7bd9748caff127e4bf3c219d6" gracePeriod=30 Oct 08 14:42:57 crc kubenswrapper[4624]: I1008 14:42:57.185200 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="65ff7a00-cdff-4601-b7c5-31e1d271cfbb" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": dial tcp 10.217.0.105:8081: connect: connection refused" Oct 08 14:42:57 crc kubenswrapper[4624]: I1008 14:42:57.566238 4624 generic.go:334] "Generic (PLEG): container finished" podID="65ff7a00-cdff-4601-b7c5-31e1d271cfbb" containerID="1b1a577f37721e32b723cf4bf7f834ac15a4fee7bd9748caff127e4bf3c219d6" exitCode=2 Oct 08 14:42:57 crc kubenswrapper[4624]: I1008 14:42:57.566456 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"65ff7a00-cdff-4601-b7c5-31e1d271cfbb","Type":"ContainerDied","Data":"1b1a577f37721e32b723cf4bf7f834ac15a4fee7bd9748caff127e4bf3c219d6"} Oct 08 14:42:57 crc kubenswrapper[4624]: I1008 14:42:57.582963 4624 generic.go:334] "Generic (PLEG): container finished" podID="551042f0-9ae9-49b7-ab92-e1b775c7e742" containerID="9112550b393920691cb1de6b367b640cc325b5905f197e819aa4d244c7d15d3f" exitCode=0 Oct 08 14:42:57 crc kubenswrapper[4624]: I1008 14:42:57.582998 4624 generic.go:334] "Generic (PLEG): container finished" podID="551042f0-9ae9-49b7-ab92-e1b775c7e742" containerID="56a00c1abe8cde54f32d7df775489195aaf76bd6db06b4dcf0aadeaf89658409" exitCode=2 Oct 08 14:42:57 crc kubenswrapper[4624]: I1008 14:42:57.583005 4624 generic.go:334] "Generic (PLEG): container finished" podID="551042f0-9ae9-49b7-ab92-e1b775c7e742" containerID="e1c4508a574487d5b1c5fe0490ed9d056141b31f3a9ac9e87ea1f331de030a3f" exitCode=0 Oct 08 14:42:57 crc kubenswrapper[4624]: I1008 14:42:57.583023 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"551042f0-9ae9-49b7-ab92-e1b775c7e742","Type":"ContainerDied","Data":"9112550b393920691cb1de6b367b640cc325b5905f197e819aa4d244c7d15d3f"} Oct 08 14:42:57 crc kubenswrapper[4624]: I1008 14:42:57.583048 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"551042f0-9ae9-49b7-ab92-e1b775c7e742","Type":"ContainerDied","Data":"56a00c1abe8cde54f32d7df775489195aaf76bd6db06b4dcf0aadeaf89658409"} Oct 08 14:42:57 crc kubenswrapper[4624]: I1008 14:42:57.583057 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"551042f0-9ae9-49b7-ab92-e1b775c7e742","Type":"ContainerDied","Data":"e1c4508a574487d5b1c5fe0490ed9d056141b31f3a9ac9e87ea1f331de030a3f"} Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.440579 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.505306 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2lv5\" (UniqueName: \"kubernetes.io/projected/65ff7a00-cdff-4601-b7c5-31e1d271cfbb-kube-api-access-t2lv5\") pod \"65ff7a00-cdff-4601-b7c5-31e1d271cfbb\" (UID: \"65ff7a00-cdff-4601-b7c5-31e1d271cfbb\") " Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.516902 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65ff7a00-cdff-4601-b7c5-31e1d271cfbb-kube-api-access-t2lv5" (OuterVolumeSpecName: "kube-api-access-t2lv5") pod "65ff7a00-cdff-4601-b7c5-31e1d271cfbb" (UID: "65ff7a00-cdff-4601-b7c5-31e1d271cfbb"). InnerVolumeSpecName "kube-api-access-t2lv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.613469 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2lv5\" (UniqueName: \"kubernetes.io/projected/65ff7a00-cdff-4601-b7c5-31e1d271cfbb-kube-api-access-t2lv5\") on node \"crc\" DevicePath \"\"" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.624112 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"65ff7a00-cdff-4601-b7c5-31e1d271cfbb","Type":"ContainerDied","Data":"076d4507cbb6caca93f6566f159feb56920de45d11a546f82ebc11a8e04743ce"} Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.624169 4624 scope.go:117] "RemoveContainer" containerID="1b1a577f37721e32b723cf4bf7f834ac15a4fee7bd9748caff127e4bf3c219d6" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.624397 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.677302 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.707719 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.722433 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 14:42:58 crc kubenswrapper[4624]: E1008 14:42:58.722903 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ff7a00-cdff-4601-b7c5-31e1d271cfbb" containerName="kube-state-metrics" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.722921 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ff7a00-cdff-4601-b7c5-31e1d271cfbb" containerName="kube-state-metrics" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.723142 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ff7a00-cdff-4601-b7c5-31e1d271cfbb" containerName="kube-state-metrics" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.723858 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.727708 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.727905 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.736313 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.820199 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkzgx\" (UniqueName: \"kubernetes.io/projected/ee24950a-af9c-4e5f-ab36-66c3c5a9cf66-kube-api-access-wkzgx\") pod \"kube-state-metrics-0\" (UID: \"ee24950a-af9c-4e5f-ab36-66c3c5a9cf66\") " pod="openstack/kube-state-metrics-0" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.820286 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee24950a-af9c-4e5f-ab36-66c3c5a9cf66-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ee24950a-af9c-4e5f-ab36-66c3c5a9cf66\") " pod="openstack/kube-state-metrics-0" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.820323 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee24950a-af9c-4e5f-ab36-66c3c5a9cf66-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ee24950a-af9c-4e5f-ab36-66c3c5a9cf66\") " pod="openstack/kube-state-metrics-0" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.820351 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ee24950a-af9c-4e5f-ab36-66c3c5a9cf66-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ee24950a-af9c-4e5f-ab36-66c3c5a9cf66\") " pod="openstack/kube-state-metrics-0" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.820584 4624 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.825736 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="5a500ae8-578e-4045-8bfb-0a658340dc09" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.173:8080/\": dial tcp 10.217.0.173:8080: connect: connection refused" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.921657 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ee24950a-af9c-4e5f-ab36-66c3c5a9cf66-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ee24950a-af9c-4e5f-ab36-66c3c5a9cf66\") " pod="openstack/kube-state-metrics-0" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.922233 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkzgx\" (UniqueName: \"kubernetes.io/projected/ee24950a-af9c-4e5f-ab36-66c3c5a9cf66-kube-api-access-wkzgx\") pod \"kube-state-metrics-0\" (UID: \"ee24950a-af9c-4e5f-ab36-66c3c5a9cf66\") " pod="openstack/kube-state-metrics-0" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.922400 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee24950a-af9c-4e5f-ab36-66c3c5a9cf66-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ee24950a-af9c-4e5f-ab36-66c3c5a9cf66\") " pod="openstack/kube-state-metrics-0" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.922513 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee24950a-af9c-4e5f-ab36-66c3c5a9cf66-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ee24950a-af9c-4e5f-ab36-66c3c5a9cf66\") " pod="openstack/kube-state-metrics-0" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.927939 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee24950a-af9c-4e5f-ab36-66c3c5a9cf66-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ee24950a-af9c-4e5f-ab36-66c3c5a9cf66\") " pod="openstack/kube-state-metrics-0" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.928760 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee24950a-af9c-4e5f-ab36-66c3c5a9cf66-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ee24950a-af9c-4e5f-ab36-66c3c5a9cf66\") " pod="openstack/kube-state-metrics-0" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.936215 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ee24950a-af9c-4e5f-ab36-66c3c5a9cf66-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ee24950a-af9c-4e5f-ab36-66c3c5a9cf66\") " pod="openstack/kube-state-metrics-0" Oct 08 14:42:58 crc kubenswrapper[4624]: I1008 14:42:58.969349 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkzgx\" (UniqueName: \"kubernetes.io/projected/ee24950a-af9c-4e5f-ab36-66c3c5a9cf66-kube-api-access-wkzgx\") pod \"kube-state-metrics-0\" (UID: \"ee24950a-af9c-4e5f-ab36-66c3c5a9cf66\") " pod="openstack/kube-state-metrics-0" Oct 08 14:42:59 crc 
kubenswrapper[4624]: I1008 14:42:59.067112 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 14:42:59 crc kubenswrapper[4624]: I1008 14:42:59.505807 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65ff7a00-cdff-4601-b7c5-31e1d271cfbb" path="/var/lib/kubelet/pods/65ff7a00-cdff-4601-b7c5-31e1d271cfbb/volumes" Oct 08 14:42:59 crc kubenswrapper[4624]: I1008 14:42:59.650070 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-79db6c47d5-q6dxb" event={"ID":"a8494534-9935-4f78-9571-b03ff870b8ac","Type":"ContainerStarted","Data":"1a9f0f02d6bb3444899e6b79d4b379802aad721e9fb33a4e3c13cf3307a91dab"} Oct 08 14:42:59 crc kubenswrapper[4624]: I1008 14:42:59.651840 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-79db6c47d5-q6dxb" Oct 08 14:42:59 crc kubenswrapper[4624]: I1008 14:42:59.659702 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-775f4cbd6b-68btn" event={"ID":"11363bb9-1c0c-46f9-a40f-85537a193cb5","Type":"ContainerStarted","Data":"715888c5d7751638721e9d11da130e9f9caba642ffbc3fb0346c01213b9ce0d0"} Oct 08 14:42:59 crc kubenswrapper[4624]: I1008 14:42:59.659864 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-775f4cbd6b-68btn" Oct 08 14:42:59 crc kubenswrapper[4624]: I1008 14:42:59.666102 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" event={"ID":"bcf01908-e783-4491-8047-ef1053a2b87b","Type":"ContainerStarted","Data":"dfa790bdc09f813c7d5d5512a76fc2099ad26dc65ccfb461e1f505c6ed6ec416"} Oct 08 14:42:59 crc kubenswrapper[4624]: I1008 14:42:59.666155 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" Oct 08 14:42:59 crc kubenswrapper[4624]: I1008 14:42:59.683201 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7ccc99c6fd-fx7s5" event={"ID":"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80","Type":"ContainerStarted","Data":"41f13ee66d49ac4b081eb7ec0d2a83fe70dce222c95cd44c98214a4d4d7e7b16"} Oct 08 14:42:59 crc kubenswrapper[4624]: I1008 14:42:59.683378 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-7ccc99c6fd-fx7s5" podUID="bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80" containerName="heat-api" containerID="cri-o://41f13ee66d49ac4b081eb7ec0d2a83fe70dce222c95cd44c98214a4d4d7e7b16" gracePeriod=60 Oct 08 14:42:59 crc kubenswrapper[4624]: I1008 14:42:59.683705 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7ccc99c6fd-fx7s5" Oct 08 14:42:59 crc kubenswrapper[4624]: I1008 14:42:59.703172 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-79db6c47d5-q6dxb" podStartSLOduration=3.587735676 podStartE2EDuration="8.703147582s" podCreationTimestamp="2025-10-08 14:42:51 +0000 UTC" firstStartedPulling="2025-10-08 14:42:53.158266976 +0000 UTC m=+1198.309202053" lastFinishedPulling="2025-10-08 14:42:58.273678882 +0000 UTC m=+1203.424613959" observedRunningTime="2025-10-08 14:42:59.675888424 +0000 UTC m=+1204.826823511" watchObservedRunningTime="2025-10-08 14:42:59.703147582 +0000 UTC m=+1204.854082659" Oct 08 14:42:59 crc kubenswrapper[4624]: I1008 14:42:59.712716 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-775f4cbd6b-68btn" podStartSLOduration=5.967085148 
podStartE2EDuration="12.712698243s" podCreationTimestamp="2025-10-08 14:42:47 +0000 UTC" firstStartedPulling="2025-10-08 14:42:51.511372535 +0000 UTC m=+1196.662307612" lastFinishedPulling="2025-10-08 14:42:58.25698563 +0000 UTC m=+1203.407920707" observedRunningTime="2025-10-08 14:42:59.698146816 +0000 UTC m=+1204.849081893" watchObservedRunningTime="2025-10-08 14:42:59.712698243 +0000 UTC m=+1204.863633320" Oct 08 14:42:59 crc kubenswrapper[4624]: I1008 14:42:59.716259 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-bbd685659-f7cgg" event={"ID":"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce","Type":"ContainerStarted","Data":"dc34541c4b7a650c6ba8f5fcd97f00da04a212bf703cc5e1fb3dd27701cff9f0"} Oct 08 14:42:59 crc kubenswrapper[4624]: I1008 14:42:59.716412 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-bbd685659-f7cgg" podUID="e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce" containerName="heat-cfnapi" containerID="cri-o://dc34541c4b7a650c6ba8f5fcd97f00da04a212bf703cc5e1fb3dd27701cff9f0" gracePeriod=60 Oct 08 14:42:59 crc kubenswrapper[4624]: I1008 14:42:59.716667 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-bbd685659-f7cgg" Oct 08 14:42:59 crc kubenswrapper[4624]: I1008 14:42:59.738453 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5d66f5894-xhq8d" event={"ID":"39ab7f72-0992-4fcb-aee5-1db66762d854","Type":"ContainerStarted","Data":"65b3d94e1731bd0333315fd6456c9d69df79ba6b5e5c9f1610b19320800c65c2"} Oct 08 14:42:59 crc kubenswrapper[4624]: I1008 14:42:59.738513 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5d66f5894-xhq8d" Oct 08 14:42:59 crc kubenswrapper[4624]: I1008 14:42:59.825357 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" podStartSLOduration=3.784078434 podStartE2EDuration="8.825333177s" podCreationTimestamp="2025-10-08 14:42:51 +0000 UTC" firstStartedPulling="2025-10-08 14:42:53.242702638 +0000 UTC m=+1198.393637715" lastFinishedPulling="2025-10-08 14:42:58.283957381 +0000 UTC m=+1203.434892458" observedRunningTime="2025-10-08 14:42:59.71694046 +0000 UTC m=+1204.867875537" watchObservedRunningTime="2025-10-08 14:42:59.825333177 +0000 UTC m=+1204.976268254" Oct 08 14:42:59 crc kubenswrapper[4624]: I1008 14:42:59.835405 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7ccc99c6fd-fx7s5" podStartSLOduration=13.119966579 podStartE2EDuration="19.835384231s" podCreationTimestamp="2025-10-08 14:42:40 +0000 UTC" firstStartedPulling="2025-10-08 14:42:51.54481487 +0000 UTC m=+1196.695749947" lastFinishedPulling="2025-10-08 14:42:58.260232522 +0000 UTC m=+1203.411167599" observedRunningTime="2025-10-08 14:42:59.740159827 +0000 UTC m=+1204.891094904" watchObservedRunningTime="2025-10-08 14:42:59.835384231 +0000 UTC m=+1204.986319308" Oct 08 14:42:59 crc kubenswrapper[4624]: I1008 14:42:59.867612 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-bbd685659-f7cgg" podStartSLOduration=11.420490292 podStartE2EDuration="18.867565103s" podCreationTimestamp="2025-10-08 14:42:41 +0000 UTC" firstStartedPulling="2025-10-08 14:42:50.843910738 +0000 UTC m=+1195.994845825" lastFinishedPulling="2025-10-08 14:42:58.290985559 +0000 UTC m=+1203.441920636" observedRunningTime="2025-10-08 14:42:59.759984587 +0000 UTC m=+1204.910919664" 
watchObservedRunningTime="2025-10-08 14:42:59.867565103 +0000 UTC m=+1205.018500200" Oct 08 14:42:59 crc kubenswrapper[4624]: I1008 14:42:59.893709 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5d66f5894-xhq8d" podStartSLOduration=6.071473104 podStartE2EDuration="12.893689953s" podCreationTimestamp="2025-10-08 14:42:47 +0000 UTC" firstStartedPulling="2025-10-08 14:42:51.416287764 +0000 UTC m=+1196.567222841" lastFinishedPulling="2025-10-08 14:42:58.238504613 +0000 UTC m=+1203.389439690" observedRunningTime="2025-10-08 14:42:59.816454633 +0000 UTC m=+1204.967389710" watchObservedRunningTime="2025-10-08 14:42:59.893689953 +0000 UTC m=+1205.044625030" Oct 08 14:42:59 crc kubenswrapper[4624]: I1008 14:42:59.894698 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.075980 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.076383 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.076434 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.077308 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"480254b7c9a360984d529299bb00cc5c3bed986a85e8add5205713881a71a8d6"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.077384 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://480254b7c9a360984d529299bb00cc5c3bed986a85e8add5205713881a71a8d6" gracePeriod=600 Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.737652 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.792874 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-scripts\") pod \"551042f0-9ae9-49b7-ab92-e1b775c7e742\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.793306 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/551042f0-9ae9-49b7-ab92-e1b775c7e742-run-httpd\") pod \"551042f0-9ae9-49b7-ab92-e1b775c7e742\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.793346 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/551042f0-9ae9-49b7-ab92-e1b775c7e742-log-httpd\") pod \"551042f0-9ae9-49b7-ab92-e1b775c7e742\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.793391 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-sg-core-conf-yaml\") pod \"551042f0-9ae9-49b7-ab92-e1b775c7e742\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.793432 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-config-data\") pod \"551042f0-9ae9-49b7-ab92-e1b775c7e742\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.793451 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27bn4\" (UniqueName: \"kubernetes.io/projected/551042f0-9ae9-49b7-ab92-e1b775c7e742-kube-api-access-27bn4\") pod \"551042f0-9ae9-49b7-ab92-e1b775c7e742\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.793510 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-combined-ca-bundle\") pod \"551042f0-9ae9-49b7-ab92-e1b775c7e742\" (UID: \"551042f0-9ae9-49b7-ab92-e1b775c7e742\") " Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.801607 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/551042f0-9ae9-49b7-ab92-e1b775c7e742-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "551042f0-9ae9-49b7-ab92-e1b775c7e742" (UID: "551042f0-9ae9-49b7-ab92-e1b775c7e742"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.818782 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/551042f0-9ae9-49b7-ab92-e1b775c7e742-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "551042f0-9ae9-49b7-ab92-e1b775c7e742" (UID: "551042f0-9ae9-49b7-ab92-e1b775c7e742"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.853845 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-scripts" (OuterVolumeSpecName: "scripts") pod "551042f0-9ae9-49b7-ab92-e1b775c7e742" (UID: "551042f0-9ae9-49b7-ab92-e1b775c7e742"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.878663 4624 generic.go:334] "Generic (PLEG): container finished" podID="11363bb9-1c0c-46f9-a40f-85537a193cb5" containerID="715888c5d7751638721e9d11da130e9f9caba642ffbc3fb0346c01213b9ce0d0" exitCode=1 Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.879100 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-775f4cbd6b-68btn" event={"ID":"11363bb9-1c0c-46f9-a40f-85537a193cb5","Type":"ContainerDied","Data":"715888c5d7751638721e9d11da130e9f9caba642ffbc3fb0346c01213b9ce0d0"} Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.880537 4624 scope.go:117] "RemoveContainer" containerID="715888c5d7751638721e9d11da130e9f9caba642ffbc3fb0346c01213b9ce0d0" Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.882880 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/551042f0-9ae9-49b7-ab92-e1b775c7e742-kube-api-access-27bn4" (OuterVolumeSpecName: "kube-api-access-27bn4") pod "551042f0-9ae9-49b7-ab92-e1b775c7e742" (UID: "551042f0-9ae9-49b7-ab92-e1b775c7e742"). InnerVolumeSpecName "kube-api-access-27bn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.890654 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ee24950a-af9c-4e5f-ab36-66c3c5a9cf66","Type":"ContainerStarted","Data":"8a77afc2bcac471a25c1027968d182cc6298315e60479d2c0feba97997229204"} Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.892624 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "551042f0-9ae9-49b7-ab92-e1b775c7e742" (UID: "551042f0-9ae9-49b7-ab92-e1b775c7e742"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.898963 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.898998 4624 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/551042f0-9ae9-49b7-ab92-e1b775c7e742-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.899008 4624 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/551042f0-9ae9-49b7-ab92-e1b775c7e742-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.899015 4624 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.899027 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27bn4\" (UniqueName: \"kubernetes.io/projected/551042f0-9ae9-49b7-ab92-e1b775c7e742-kube-api-access-27bn4\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.926334 4624 generic.go:334] "Generic (PLEG): container finished" podID="551042f0-9ae9-49b7-ab92-e1b775c7e742" containerID="790808c62c0ac26e00901261625803bad40fd27dbf3fe9381129a5ef33c87813" exitCode=0 Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.926435 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"551042f0-9ae9-49b7-ab92-e1b775c7e742","Type":"ContainerDied","Data":"790808c62c0ac26e00901261625803bad40fd27dbf3fe9381129a5ef33c87813"} Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.926470 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"551042f0-9ae9-49b7-ab92-e1b775c7e742","Type":"ContainerDied","Data":"2e00ae1f7df00362e2e5a2b0eb102635dc3c3ab72837429777876eb1bc74e929"} Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.926494 4624 scope.go:117] "RemoveContainer" containerID="9112550b393920691cb1de6b367b640cc325b5905f197e819aa4d244c7d15d3f" Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.926593 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.969158 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="480254b7c9a360984d529299bb00cc5c3bed986a85e8add5205713881a71a8d6" exitCode=0 Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.969357 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"480254b7c9a360984d529299bb00cc5c3bed986a85e8add5205713881a71a8d6"} Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.969395 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"b715151b06e81079e843f1e4731eae829cdb61d676150f9179acbd18ff769765"} Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.977749 4624 generic.go:334] "Generic (PLEG): container finished" podID="39ab7f72-0992-4fcb-aee5-1db66762d854" containerID="65b3d94e1731bd0333315fd6456c9d69df79ba6b5e5c9f1610b19320800c65c2" exitCode=1 Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.978804 4624 scope.go:117] "RemoveContainer" containerID="65b3d94e1731bd0333315fd6456c9d69df79ba6b5e5c9f1610b19320800c65c2" Oct 08 14:43:00 crc kubenswrapper[4624]: I1008 14:43:00.979092 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5d66f5894-xhq8d" event={"ID":"39ab7f72-0992-4fcb-aee5-1db66762d854","Type":"ContainerDied","Data":"65b3d94e1731bd0333315fd6456c9d69df79ba6b5e5c9f1610b19320800c65c2"} Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.000931 4624 scope.go:117] "RemoveContainer" containerID="56a00c1abe8cde54f32d7df775489195aaf76bd6db06b4dcf0aadeaf89658409" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.133149 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "551042f0-9ae9-49b7-ab92-e1b775c7e742" (UID: "551042f0-9ae9-49b7-ab92-e1b775c7e742"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.196344 4624 scope.go:117] "RemoveContainer" containerID="790808c62c0ac26e00901261625803bad40fd27dbf3fe9381129a5ef33c87813" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.208156 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.222530 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-config-data" (OuterVolumeSpecName: "config-data") pod "551042f0-9ae9-49b7-ab92-e1b775c7e742" (UID: "551042f0-9ae9-49b7-ab92-e1b775c7e742"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.313079 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551042f0-9ae9-49b7-ab92-e1b775c7e742-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.358831 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.378174 4624 scope.go:117] "RemoveContainer" containerID="e1c4508a574487d5b1c5fe0490ed9d056141b31f3a9ac9e87ea1f331de030a3f" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.412863 4624 scope.go:117] "RemoveContainer" containerID="9112550b393920691cb1de6b367b640cc325b5905f197e819aa4d244c7d15d3f" Oct 08 14:43:01 crc kubenswrapper[4624]: E1008 14:43:01.413933 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9112550b393920691cb1de6b367b640cc325b5905f197e819aa4d244c7d15d3f\": container with ID starting with 9112550b393920691cb1de6b367b640cc325b5905f197e819aa4d244c7d15d3f not found: ID does not exist" containerID="9112550b393920691cb1de6b367b640cc325b5905f197e819aa4d244c7d15d3f" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.413966 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9112550b393920691cb1de6b367b640cc325b5905f197e819aa4d244c7d15d3f"} err="failed to get container status \"9112550b393920691cb1de6b367b640cc325b5905f197e819aa4d244c7d15d3f\": rpc error: code = NotFound desc = could not find container \"9112550b393920691cb1de6b367b640cc325b5905f197e819aa4d244c7d15d3f\": container with ID starting with 9112550b393920691cb1de6b367b640cc325b5905f197e819aa4d244c7d15d3f not found: ID does not exist" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.413990 4624 scope.go:117] "RemoveContainer" containerID="56a00c1abe8cde54f32d7df775489195aaf76bd6db06b4dcf0aadeaf89658409" Oct 08 14:43:01 crc kubenswrapper[4624]: E1008 14:43:01.416032 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56a00c1abe8cde54f32d7df775489195aaf76bd6db06b4dcf0aadeaf89658409\": container with ID starting with 56a00c1abe8cde54f32d7df775489195aaf76bd6db06b4dcf0aadeaf89658409 not found: ID does not exist" containerID="56a00c1abe8cde54f32d7df775489195aaf76bd6db06b4dcf0aadeaf89658409" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.416067 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56a00c1abe8cde54f32d7df775489195aaf76bd6db06b4dcf0aadeaf89658409"} err="failed to get container status \"56a00c1abe8cde54f32d7df775489195aaf76bd6db06b4dcf0aadeaf89658409\": rpc error: code = NotFound desc = could not find container \"56a00c1abe8cde54f32d7df775489195aaf76bd6db06b4dcf0aadeaf89658409\": container with ID starting with 56a00c1abe8cde54f32d7df775489195aaf76bd6db06b4dcf0aadeaf89658409 not found: ID does not exist" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.416089 4624 scope.go:117] "RemoveContainer" containerID="790808c62c0ac26e00901261625803bad40fd27dbf3fe9381129a5ef33c87813" Oct 08 14:43:01 crc kubenswrapper[4624]: E1008 14:43:01.420784 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"790808c62c0ac26e00901261625803bad40fd27dbf3fe9381129a5ef33c87813\": container with ID starting with 790808c62c0ac26e00901261625803bad40fd27dbf3fe9381129a5ef33c87813 not found: ID does not exist" containerID="790808c62c0ac26e00901261625803bad40fd27dbf3fe9381129a5ef33c87813" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.420828 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"790808c62c0ac26e00901261625803bad40fd27dbf3fe9381129a5ef33c87813"} err="failed to get container status \"790808c62c0ac26e00901261625803bad40fd27dbf3fe9381129a5ef33c87813\": rpc error: code = NotFound desc = could not find container \"790808c62c0ac26e00901261625803bad40fd27dbf3fe9381129a5ef33c87813\": container with ID starting with 790808c62c0ac26e00901261625803bad40fd27dbf3fe9381129a5ef33c87813 not found: ID does not exist" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.420857 4624 scope.go:117] "RemoveContainer" containerID="e1c4508a574487d5b1c5fe0490ed9d056141b31f3a9ac9e87ea1f331de030a3f" Oct 08 14:43:01 crc kubenswrapper[4624]: E1008 14:43:01.424474 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1c4508a574487d5b1c5fe0490ed9d056141b31f3a9ac9e87ea1f331de030a3f\": container with ID starting with e1c4508a574487d5b1c5fe0490ed9d056141b31f3a9ac9e87ea1f331de030a3f not found: ID does not exist" containerID="e1c4508a574487d5b1c5fe0490ed9d056141b31f3a9ac9e87ea1f331de030a3f" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.424524 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1c4508a574487d5b1c5fe0490ed9d056141b31f3a9ac9e87ea1f331de030a3f"} err="failed to get container status \"e1c4508a574487d5b1c5fe0490ed9d056141b31f3a9ac9e87ea1f331de030a3f\": rpc error: code = NotFound desc = could not find container \"e1c4508a574487d5b1c5fe0490ed9d056141b31f3a9ac9e87ea1f331de030a3f\": container with ID starting with e1c4508a574487d5b1c5fe0490ed9d056141b31f3a9ac9e87ea1f331de030a3f not found: ID does not exist" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.424553 4624 scope.go:117] "RemoveContainer" containerID="f4f136ef4518c5e931bf27ca2c72cf8717267538a0000a0bb2738bfc1c0e8cda" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.425983 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.433365 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.465142 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cd4749fc-jpfh9"] Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.465746 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" podUID="c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4" containerName="dnsmasq-dns" containerID="cri-o://0f262eb10d189b691295cc4557a570209d66fc8288bcf014ffb360c98c24d295" gracePeriod=10 Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.506918 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="551042f0-9ae9-49b7-ab92-e1b775c7e742" path="/var/lib/kubelet/pods/551042f0-9ae9-49b7-ab92-e1b775c7e742/volumes" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.522499 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:43:01 crc kubenswrapper[4624]: E1008 
Oct 08 14:43:01 crc kubenswrapper[4624]: E1008 14:43:01.522945 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551042f0-9ae9-49b7-ab92-e1b775c7e742" containerName="ceilometer-central-agent"
Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.522958 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="551042f0-9ae9-49b7-ab92-e1b775c7e742" containerName="ceilometer-central-agent"
Oct 08 14:43:01 crc kubenswrapper[4624]: E1008 14:43:01.522970 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551042f0-9ae9-49b7-ab92-e1b775c7e742" containerName="ceilometer-notification-agent"
Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.522976 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="551042f0-9ae9-49b7-ab92-e1b775c7e742" containerName="ceilometer-notification-agent"
Oct 08 14:43:01 crc kubenswrapper[4624]: E1008 14:43:01.523001 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551042f0-9ae9-49b7-ab92-e1b775c7e742" containerName="proxy-httpd"
Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.523006 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="551042f0-9ae9-49b7-ab92-e1b775c7e742" containerName="proxy-httpd"
Oct 08 14:43:01 crc kubenswrapper[4624]: E1008 14:43:01.523021 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551042f0-9ae9-49b7-ab92-e1b775c7e742" containerName="sg-core"
Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.523027 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="551042f0-9ae9-49b7-ab92-e1b775c7e742" containerName="sg-core"
Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.523221 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="551042f0-9ae9-49b7-ab92-e1b775c7e742" containerName="proxy-httpd"
Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.523230 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="551042f0-9ae9-49b7-ab92-e1b775c7e742" containerName="sg-core"
Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.523249 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="551042f0-9ae9-49b7-ab92-e1b775c7e742" containerName="ceilometer-central-agent"
Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.523258 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="551042f0-9ae9-49b7-ab92-e1b775c7e742" containerName="ceilometer-notification-agent"
Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.662262 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.662438 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
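When ceilometer-0 is re-added with a new UID, the CPU and memory managers first drop state still recorded for containers of the old UID 551042f0-... (the "RemoveStaleState" / "Deleted CPUSet assignment" pairs above) before the new pod is admitted. A toy sketch of that cleanup over a hypothetical assignments map; the real state lives in kubelet's cpu_manager and memory_manager:

package main

import "fmt"

// assignments is a hypothetical stand-in for cpu_manager state:
// podUID -> containerName -> assigned CPU set (a plain string here).
type assignments map[string]map[string]string

// removeStaleState drops entries for pods the kubelet no longer tracks.
func (a assignments) removeStaleState(active map[string]bool) {
	for podUID, containers := range a {
		if active[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%s containerName=%s\n", podUID, name)
		}
		delete(a, podUID)
	}
}

func main() {
	state := assignments{
		"551042f0-9ae9-49b7-ab92-e1b775c7e742": {"ceilometer-central-agent": "0-1", "sg-core": "2"},
	}
	state.removeStaleState(map[string]bool{"99333548-c18f-4543-8e0e-d99cd2c0b968": true})
}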
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.677532 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.678394 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.721395 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99333548-c18f-4543-8e0e-d99cd2c0b968-run-httpd\") pod \"ceilometer-0\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " pod="openstack/ceilometer-0" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.721500 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-config-data\") pod \"ceilometer-0\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " pod="openstack/ceilometer-0" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.721553 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fn8l\" (UniqueName: \"kubernetes.io/projected/99333548-c18f-4543-8e0e-d99cd2c0b968-kube-api-access-2fn8l\") pod \"ceilometer-0\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " pod="openstack/ceilometer-0" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.721627 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " pod="openstack/ceilometer-0" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.721694 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-scripts\") pod \"ceilometer-0\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " pod="openstack/ceilometer-0" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.721764 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99333548-c18f-4543-8e0e-d99cd2c0b968-log-httpd\") pod \"ceilometer-0\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " pod="openstack/ceilometer-0" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.721788 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " pod="openstack/ceilometer-0" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.823924 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99333548-c18f-4543-8e0e-d99cd2c0b968-log-httpd\") pod \"ceilometer-0\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " pod="openstack/ceilometer-0" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.823977 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " pod="openstack/ceilometer-0" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.824017 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99333548-c18f-4543-8e0e-d99cd2c0b968-run-httpd\") pod \"ceilometer-0\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " pod="openstack/ceilometer-0" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.824129 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-config-data\") pod \"ceilometer-0\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " pod="openstack/ceilometer-0" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.824204 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fn8l\" (UniqueName: \"kubernetes.io/projected/99333548-c18f-4543-8e0e-d99cd2c0b968-kube-api-access-2fn8l\") pod \"ceilometer-0\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " pod="openstack/ceilometer-0" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.824299 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " pod="openstack/ceilometer-0" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.824355 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-scripts\") pod \"ceilometer-0\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " pod="openstack/ceilometer-0" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.825424 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99333548-c18f-4543-8e0e-d99cd2c0b968-run-httpd\") pod \"ceilometer-0\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " pod="openstack/ceilometer-0" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.825686 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99333548-c18f-4543-8e0e-d99cd2c0b968-log-httpd\") pod \"ceilometer-0\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " pod="openstack/ceilometer-0" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.833102 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " pod="openstack/ceilometer-0" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.833562 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " pod="openstack/ceilometer-0" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.838893 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-config-data\") pod \"ceilometer-0\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " pod="openstack/ceilometer-0" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.850255 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-scripts\") pod \"ceilometer-0\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " pod="openstack/ceilometer-0" Oct 08 14:43:01 crc kubenswrapper[4624]: I1008 14:43:01.851316 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fn8l\" (UniqueName: \"kubernetes.io/projected/99333548-c18f-4543-8e0e-d99cd2c0b968-kube-api-access-2fn8l\") pod \"ceilometer-0\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " pod="openstack/ceilometer-0" Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.000372 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.001773 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-775f4cbd6b-68btn" event={"ID":"11363bb9-1c0c-46f9-a40f-85537a193cb5","Type":"ContainerStarted","Data":"93bb0f2c9889eeafd811040da77bbf4ffdd683b0ad8749debecf2335133125e2"} Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.002536 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-775f4cbd6b-68btn" Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.039685 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ee24950a-af9c-4e5f-ab36-66c3c5a9cf66","Type":"ContainerStarted","Data":"ce0f25858f13c6e1f640ab31f40632484ed506a67240f261c4e237cc734ce73f"} Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.040913 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.048405 4624 generic.go:334] "Generic (PLEG): container finished" podID="c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4" containerID="0f262eb10d189b691295cc4557a570209d66fc8288bcf014ffb360c98c24d295" exitCode=0 Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.048459 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" event={"ID":"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4","Type":"ContainerDied","Data":"0f262eb10d189b691295cc4557a570209d66fc8288bcf014ffb360c98c24d295"} Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.055655 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5d66f5894-xhq8d" event={"ID":"39ab7f72-0992-4fcb-aee5-1db66762d854","Type":"ContainerStarted","Data":"89edd3840b7d46280e4927c0b64463b300e05f30937fc76b68c9b3b85764438c"} Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.055692 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5d66f5894-xhq8d" Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.059510 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.139870 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-59d86bf959-vq2ld" Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.159837 4624 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.717226655 podStartE2EDuration="4.159813523s" podCreationTimestamp="2025-10-08 14:42:58 +0000 UTC" firstStartedPulling="2025-10-08 14:42:59.83061899 +0000 UTC m=+1204.981554057" lastFinishedPulling="2025-10-08 14:43:00.273205848 +0000 UTC m=+1205.424140925" observedRunningTime="2025-10-08 14:43:02.058581577 +0000 UTC m=+1207.209516664" watchObservedRunningTime="2025-10-08 14:43:02.159813523 +0000 UTC m=+1207.310748600" Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.431526 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.446450 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-dns-svc\") pod \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.446579 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-dns-swift-storage-0\") pod \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.446614 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bn7d\" (UniqueName: \"kubernetes.io/projected/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-kube-api-access-6bn7d\") pod \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.446654 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-ovsdbserver-nb\") pod \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.446699 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-ovsdbserver-sb\") pod \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.447504 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-config\") pod \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\" (UID: \"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4\") " Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.457673 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-kube-api-access-6bn7d" (OuterVolumeSpecName: "kube-api-access-6bn7d") pod "c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4" (UID: "c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4"). InnerVolumeSpecName "kube-api-access-6bn7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.556319 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bn7d\" (UniqueName: \"kubernetes.io/projected/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-kube-api-access-6bn7d\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.633495 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4" (UID: "c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.640984 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4" (UID: "c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.641414 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4" (UID: "c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.641752 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4" (UID: "c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.658474 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-config" (OuterVolumeSpecName: "config") pod "c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4" (UID: "c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.661809 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.661843 4624 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.661857 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.661870 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.661882 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:02 crc kubenswrapper[4624]: I1008 14:43:02.879693 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:43:03 crc kubenswrapper[4624]: I1008 14:43:03.089471 4624 generic.go:334] "Generic (PLEG): container finished" podID="39ab7f72-0992-4fcb-aee5-1db66762d854" containerID="89edd3840b7d46280e4927c0b64463b300e05f30937fc76b68c9b3b85764438c" exitCode=1 Oct 08 14:43:03 crc kubenswrapper[4624]: I1008 14:43:03.089555 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5d66f5894-xhq8d" event={"ID":"39ab7f72-0992-4fcb-aee5-1db66762d854","Type":"ContainerDied","Data":"89edd3840b7d46280e4927c0b64463b300e05f30937fc76b68c9b3b85764438c"} Oct 08 14:43:03 crc kubenswrapper[4624]: I1008 14:43:03.089617 4624 scope.go:117] "RemoveContainer" containerID="65b3d94e1731bd0333315fd6456c9d69df79ba6b5e5c9f1610b19320800c65c2" Oct 08 14:43:03 crc kubenswrapper[4624]: I1008 14:43:03.090075 4624 scope.go:117] "RemoveContainer" containerID="89edd3840b7d46280e4927c0b64463b300e05f30937fc76b68c9b3b85764438c" Oct 08 14:43:03 crc kubenswrapper[4624]: E1008 14:43:03.090364 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5d66f5894-xhq8d_openstack(39ab7f72-0992-4fcb-aee5-1db66762d854)\"" pod="openstack/heat-api-5d66f5894-xhq8d" podUID="39ab7f72-0992-4fcb-aee5-1db66762d854" Oct 08 14:43:03 crc kubenswrapper[4624]: I1008 14:43:03.097107 4624 generic.go:334] "Generic (PLEG): container finished" podID="11363bb9-1c0c-46f9-a40f-85537a193cb5" containerID="93bb0f2c9889eeafd811040da77bbf4ffdd683b0ad8749debecf2335133125e2" exitCode=1 Oct 08 14:43:03 crc kubenswrapper[4624]: I1008 14:43:03.097203 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-775f4cbd6b-68btn" event={"ID":"11363bb9-1c0c-46f9-a40f-85537a193cb5","Type":"ContainerDied","Data":"93bb0f2c9889eeafd811040da77bbf4ffdd683b0ad8749debecf2335133125e2"} Oct 08 14:43:03 crc kubenswrapper[4624]: I1008 14:43:03.098078 4624 scope.go:117] "RemoveContainer" 
containerID="93bb0f2c9889eeafd811040da77bbf4ffdd683b0ad8749debecf2335133125e2" Oct 08 14:43:03 crc kubenswrapper[4624]: E1008 14:43:03.098305 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-775f4cbd6b-68btn_openstack(11363bb9-1c0c-46f9-a40f-85537a193cb5)\"" pod="openstack/heat-cfnapi-775f4cbd6b-68btn" podUID="11363bb9-1c0c-46f9-a40f-85537a193cb5" Oct 08 14:43:03 crc kubenswrapper[4624]: I1008 14:43:03.099555 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99333548-c18f-4543-8e0e-d99cd2c0b968","Type":"ContainerStarted","Data":"b8a2d1e0a35b0b8c89a02b40740d3594c64e65f4e35a37c30df1e8732cd0b6af"} Oct 08 14:43:03 crc kubenswrapper[4624]: I1008 14:43:03.101939 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" event={"ID":"c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4","Type":"ContainerDied","Data":"9ad57c2a1dce0967540a2db5b79bff65c22ddfeea43e268438202fd963f9c0da"} Oct 08 14:43:03 crc kubenswrapper[4624]: I1008 14:43:03.102103 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd4749fc-jpfh9" Oct 08 14:43:03 crc kubenswrapper[4624]: I1008 14:43:03.107729 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-775f4cbd6b-68btn" Oct 08 14:43:03 crc kubenswrapper[4624]: I1008 14:43:03.123443 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-5d66f5894-xhq8d" Oct 08 14:43:03 crc kubenswrapper[4624]: I1008 14:43:03.165849 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cd4749fc-jpfh9"] Oct 08 14:43:03 crc kubenswrapper[4624]: I1008 14:43:03.180210 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78cd4749fc-jpfh9"] Oct 08 14:43:03 crc kubenswrapper[4624]: I1008 14:43:03.288587 4624 scope.go:117] "RemoveContainer" containerID="715888c5d7751638721e9d11da130e9f9caba642ffbc3fb0346c01213b9ce0d0" Oct 08 14:43:03 crc kubenswrapper[4624]: I1008 14:43:03.495366 4624 scope.go:117] "RemoveContainer" containerID="0f262eb10d189b691295cc4557a570209d66fc8288bcf014ffb360c98c24d295" Oct 08 14:43:03 crc kubenswrapper[4624]: I1008 14:43:03.501828 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4" path="/var/lib/kubelet/pods/c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4/volumes" Oct 08 14:43:03 crc kubenswrapper[4624]: I1008 14:43:03.544423 4624 scope.go:117] "RemoveContainer" containerID="6a359690dd7c0febd3d3529f0dfa2e7753110b2f2ea52fdb96dcf172b3324128" Oct 08 14:43:04 crc kubenswrapper[4624]: I1008 14:43:04.120613 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99333548-c18f-4543-8e0e-d99cd2c0b968","Type":"ContainerStarted","Data":"614b10bfaa83243534f54fe39e9040083c4275b485b69a5dff391e97ff49969d"} Oct 08 14:43:04 crc kubenswrapper[4624]: I1008 14:43:04.121011 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99333548-c18f-4543-8e0e-d99cd2c0b968","Type":"ContainerStarted","Data":"7affe6c7f80f91534b596b57d32c09bcfb13f0305570aa18978f35af12c3f425"} Oct 08 14:43:04 crc kubenswrapper[4624]: I1008 14:43:04.122323 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"c515985c-9b57-4136-bf01-b872e9caaec9","Type":"ContainerStarted","Data":"e4e3dc8994933f18da8d106c626fcf845e3ea8e9127829d5b4a0b28a58ef2102"} Oct 08 14:43:04 crc kubenswrapper[4624]: I1008 14:43:04.139707 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.59949254 podStartE2EDuration="33.139686023s" podCreationTimestamp="2025-10-08 14:42:31 +0000 UTC" firstStartedPulling="2025-10-08 14:42:32.102233042 +0000 UTC m=+1177.253168119" lastFinishedPulling="2025-10-08 14:43:03.642426525 +0000 UTC m=+1208.793361602" observedRunningTime="2025-10-08 14:43:04.138713808 +0000 UTC m=+1209.289648895" watchObservedRunningTime="2025-10-08 14:43:04.139686023 +0000 UTC m=+1209.290621110" Oct 08 14:43:04 crc kubenswrapper[4624]: I1008 14:43:04.143845 4624 scope.go:117] "RemoveContainer" containerID="89edd3840b7d46280e4927c0b64463b300e05f30937fc76b68c9b3b85764438c" Oct 08 14:43:04 crc kubenswrapper[4624]: E1008 14:43:04.144091 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5d66f5894-xhq8d_openstack(39ab7f72-0992-4fcb-aee5-1db66762d854)\"" pod="openstack/heat-api-5d66f5894-xhq8d" podUID="39ab7f72-0992-4fcb-aee5-1db66762d854" Oct 08 14:43:04 crc kubenswrapper[4624]: I1008 14:43:04.148408 4624 scope.go:117] "RemoveContainer" containerID="93bb0f2c9889eeafd811040da77bbf4ffdd683b0ad8749debecf2335133125e2" Oct 08 14:43:04 crc kubenswrapper[4624]: E1008 14:43:04.148708 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-775f4cbd6b-68btn_openstack(11363bb9-1c0c-46f9-a40f-85537a193cb5)\"" pod="openstack/heat-cfnapi-775f4cbd6b-68btn" podUID="11363bb9-1c0c-46f9-a40f-85537a193cb5" Oct 08 14:43:04 crc kubenswrapper[4624]: I1008 14:43:04.540542 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 08 14:43:04 crc kubenswrapper[4624]: I1008 14:43:04.613699 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f6cd65c74-7vqb5" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Oct 08 14:43:04 crc kubenswrapper[4624]: I1008 14:43:04.728861 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67f45f8444-g8bbs" podUID="378be2ad-3335-409f-b2eb-60b3997ed4f8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Oct 08 14:43:05 crc kubenswrapper[4624]: I1008 14:43:05.159959 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99333548-c18f-4543-8e0e-d99cd2c0b968","Type":"ContainerStarted","Data":"7910e04bddd2f034918542210cd7d4bba89422af54502821d25e17505e78807e"} Oct 08 14:43:05 crc kubenswrapper[4624]: I1008 14:43:05.160547 4624 scope.go:117] "RemoveContainer" containerID="89edd3840b7d46280e4927c0b64463b300e05f30937fc76b68c9b3b85764438c" Oct 08 14:43:05 crc kubenswrapper[4624]: I1008 14:43:05.160786 4624 scope.go:117] "RemoveContainer" 
containerID="93bb0f2c9889eeafd811040da77bbf4ffdd683b0ad8749debecf2335133125e2" Oct 08 14:43:05 crc kubenswrapper[4624]: E1008 14:43:05.160903 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5d66f5894-xhq8d_openstack(39ab7f72-0992-4fcb-aee5-1db66762d854)\"" pod="openstack/heat-api-5d66f5894-xhq8d" podUID="39ab7f72-0992-4fcb-aee5-1db66762d854" Oct 08 14:43:05 crc kubenswrapper[4624]: E1008 14:43:05.161055 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-775f4cbd6b-68btn_openstack(11363bb9-1c0c-46f9-a40f-85537a193cb5)\"" pod="openstack/heat-cfnapi-775f4cbd6b-68btn" podUID="11363bb9-1c0c-46f9-a40f-85537a193cb5" Oct 08 14:43:05 crc kubenswrapper[4624]: I1008 14:43:05.171732 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:43:07 crc kubenswrapper[4624]: I1008 14:43:07.199798 4624 generic.go:334] "Generic (PLEG): container finished" podID="99333548-c18f-4543-8e0e-d99cd2c0b968" containerID="8292b2f3da88f157ebfcc48a8d638fca0c20f23488256930e0389211a2d1f7aa" exitCode=1 Oct 08 14:43:07 crc kubenswrapper[4624]: I1008 14:43:07.200021 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99333548-c18f-4543-8e0e-d99cd2c0b968","Type":"ContainerDied","Data":"8292b2f3da88f157ebfcc48a8d638fca0c20f23488256930e0389211a2d1f7aa"} Oct 08 14:43:07 crc kubenswrapper[4624]: I1008 14:43:07.200209 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="99333548-c18f-4543-8e0e-d99cd2c0b968" containerName="ceilometer-central-agent" containerID="cri-o://7affe6c7f80f91534b596b57d32c09bcfb13f0305570aa18978f35af12c3f425" gracePeriod=30 Oct 08 14:43:07 crc kubenswrapper[4624]: I1008 14:43:07.200808 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="99333548-c18f-4543-8e0e-d99cd2c0b968" containerName="ceilometer-notification-agent" containerID="cri-o://614b10bfaa83243534f54fe39e9040083c4275b485b69a5dff391e97ff49969d" gracePeriod=30 Oct 08 14:43:07 crc kubenswrapper[4624]: I1008 14:43:07.200823 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="99333548-c18f-4543-8e0e-d99cd2c0b968" containerName="sg-core" containerID="cri-o://7910e04bddd2f034918542210cd7d4bba89422af54502821d25e17505e78807e" gracePeriod=30 Oct 08 14:43:08 crc kubenswrapper[4624]: I1008 14:43:08.152534 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7674ff4df6-6crwz" Oct 08 14:43:08 crc kubenswrapper[4624]: I1008 14:43:08.198595 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-588c6684fb-k5v22"] Oct 08 14:43:08 crc kubenswrapper[4624]: I1008 14:43:08.198815 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-588c6684fb-k5v22" podUID="9b419234-067e-41ef-8382-55b89cafa33a" containerName="heat-engine" containerID="cri-o://862428038abd3c46773873307abacf21670979e7d247aad4ee0e43fd2bbe6fa4" gracePeriod=60 Oct 08 14:43:08 crc kubenswrapper[4624]: E1008 14:43:08.203079 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: 
cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="862428038abd3c46773873307abacf21670979e7d247aad4ee0e43fd2bbe6fa4" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 08 14:43:08 crc kubenswrapper[4624]: E1008 14:43:08.206898 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="862428038abd3c46773873307abacf21670979e7d247aad4ee0e43fd2bbe6fa4" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 08 14:43:08 crc kubenswrapper[4624]: E1008 14:43:08.208378 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="862428038abd3c46773873307abacf21670979e7d247aad4ee0e43fd2bbe6fa4" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 08 14:43:08 crc kubenswrapper[4624]: E1008 14:43:08.208433 4624 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-588c6684fb-k5v22" podUID="9b419234-067e-41ef-8382-55b89cafa33a" containerName="heat-engine" Oct 08 14:43:08 crc kubenswrapper[4624]: I1008 14:43:08.222790 4624 generic.go:334] "Generic (PLEG): container finished" podID="99333548-c18f-4543-8e0e-d99cd2c0b968" containerID="7910e04bddd2f034918542210cd7d4bba89422af54502821d25e17505e78807e" exitCode=2 Oct 08 14:43:08 crc kubenswrapper[4624]: I1008 14:43:08.222823 4624 generic.go:334] "Generic (PLEG): container finished" podID="99333548-c18f-4543-8e0e-d99cd2c0b968" containerID="614b10bfaa83243534f54fe39e9040083c4275b485b69a5dff391e97ff49969d" exitCode=0 Oct 08 14:43:08 crc kubenswrapper[4624]: I1008 14:43:08.222842 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99333548-c18f-4543-8e0e-d99cd2c0b968","Type":"ContainerDied","Data":"7910e04bddd2f034918542210cd7d4bba89422af54502821d25e17505e78807e"} Oct 08 14:43:08 crc kubenswrapper[4624]: I1008 14:43:08.222866 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99333548-c18f-4543-8e0e-d99cd2c0b968","Type":"ContainerDied","Data":"614b10bfaa83243534f54fe39e9040083c4275b485b69a5dff391e97ff49969d"} Oct 08 14:43:09 crc kubenswrapper[4624]: I1008 14:43:09.084085 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 08 14:43:09 crc kubenswrapper[4624]: I1008 14:43:09.773877 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 14:43:09 crc kubenswrapper[4624]: I1008 14:43:09.774695 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d5a01279-9918-4adc-a38f-f102fd394ca0" containerName="glance-httpd" containerID="cri-o://6f899ddbe71b42769f9453dc6bc746f5c6136d04980c21964efbbd19e42ad081" gracePeriod=30 Oct 08 14:43:09 crc kubenswrapper[4624]: I1008 14:43:09.774811 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d5a01279-9918-4adc-a38f-f102fd394ca0" containerName="glance-log" containerID="cri-o://053ad65b4c100768721f861f1f190ecf6fd2234ab5c4cde6e3c43cf11f4ba1f2" gracePeriod=30 Oct 08 14:43:10 crc 
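heat-engine's readiness probe is an exec probe: kubelet runs /usr/bin/pgrep -r DRST heat-engine inside the container via the CRI ExecSync call. Because the container is already stopping, CRI-O refuses to register new exec PIDs, so every attempt surfaces as "Probe errored" rather than a plain probe failure; this is shutdown-race noise, not a new fault. A sketch of exec-probe semantics distinguishing the two cases, run locally rather than through CRI (illustrative only):

package main

import (
	"fmt"
	"os/exec"
)

// execProbe mirrors exec-probe semantics: exit status 0 means ready. A
// non-zero exit is a probe *failure*; not being able to run the command at
// all is a probe *error*, which is what the "cannot register an exec PID:
// container is stopping" entries report.
func execProbe(name string, args ...string) (ready bool, err error) {
	runErr := exec.Command(name, args...).Run()
	if runErr == nil {
		return true, nil
	}
	if _, isExit := runErr.(*exec.ExitError); isExit {
		return false, nil // command ran and exited non-zero: not ready
	}
	return false, runErr // could not execute at all: probe errored
}

func main() {
	// The probe from the log: is a heat-engine process in state D, R, S or T?
	fmt.Println(execProbe("/usr/bin/pgrep", "-r", "DRST", "heat-engine"))
}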
kubenswrapper[4624]: I1008 14:43:10.245372 4624 generic.go:334] "Generic (PLEG): container finished" podID="d5a01279-9918-4adc-a38f-f102fd394ca0" containerID="053ad65b4c100768721f861f1f190ecf6fd2234ab5c4cde6e3c43cf11f4ba1f2" exitCode=143 Oct 08 14:43:10 crc kubenswrapper[4624]: I1008 14:43:10.245414 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5a01279-9918-4adc-a38f-f102fd394ca0","Type":"ContainerDied","Data":"053ad65b4c100768721f861f1f190ecf6fd2234ab5c4cde6e3c43cf11f4ba1f2"} Oct 08 14:43:10 crc kubenswrapper[4624]: I1008 14:43:10.398097 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7ccc99c6fd-fx7s5" Oct 08 14:43:10 crc kubenswrapper[4624]: I1008 14:43:10.452387 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-bbd685659-f7cgg" Oct 08 14:43:10 crc kubenswrapper[4624]: I1008 14:43:10.687386 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-79db6c47d5-q6dxb" Oct 08 14:43:10 crc kubenswrapper[4624]: I1008 14:43:10.775872 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5d66f5894-xhq8d"] Oct 08 14:43:11 crc kubenswrapper[4624]: E1008 14:43:11.174789 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="862428038abd3c46773873307abacf21670979e7d247aad4ee0e43fd2bbe6fa4" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 08 14:43:11 crc kubenswrapper[4624]: E1008 14:43:11.179755 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="862428038abd3c46773873307abacf21670979e7d247aad4ee0e43fd2bbe6fa4" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 08 14:43:11 crc kubenswrapper[4624]: E1008 14:43:11.192337 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="862428038abd3c46773873307abacf21670979e7d247aad4ee0e43fd2bbe6fa4" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 08 14:43:11 crc kubenswrapper[4624]: E1008 14:43:11.192412 4624 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-588c6684fb-k5v22" podUID="9b419234-067e-41ef-8382-55b89cafa33a" containerName="heat-engine" Oct 08 14:43:11 crc kubenswrapper[4624]: I1008 14:43:11.425858 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5d66f5894-xhq8d" Oct 08 14:43:11 crc kubenswrapper[4624]: I1008 14:43:11.559776 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39ab7f72-0992-4fcb-aee5-1db66762d854-combined-ca-bundle\") pod \"39ab7f72-0992-4fcb-aee5-1db66762d854\" (UID: \"39ab7f72-0992-4fcb-aee5-1db66762d854\") " Oct 08 14:43:11 crc kubenswrapper[4624]: I1008 14:43:11.560170 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39ab7f72-0992-4fcb-aee5-1db66762d854-config-data-custom\") pod \"39ab7f72-0992-4fcb-aee5-1db66762d854\" (UID: \"39ab7f72-0992-4fcb-aee5-1db66762d854\") " Oct 08 14:43:11 crc kubenswrapper[4624]: I1008 14:43:11.560281 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39ab7f72-0992-4fcb-aee5-1db66762d854-config-data\") pod \"39ab7f72-0992-4fcb-aee5-1db66762d854\" (UID: \"39ab7f72-0992-4fcb-aee5-1db66762d854\") " Oct 08 14:43:11 crc kubenswrapper[4624]: I1008 14:43:11.560371 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xstc\" (UniqueName: \"kubernetes.io/projected/39ab7f72-0992-4fcb-aee5-1db66762d854-kube-api-access-8xstc\") pod \"39ab7f72-0992-4fcb-aee5-1db66762d854\" (UID: \"39ab7f72-0992-4fcb-aee5-1db66762d854\") " Oct 08 14:43:11 crc kubenswrapper[4624]: I1008 14:43:11.566771 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39ab7f72-0992-4fcb-aee5-1db66762d854-kube-api-access-8xstc" (OuterVolumeSpecName: "kube-api-access-8xstc") pod "39ab7f72-0992-4fcb-aee5-1db66762d854" (UID: "39ab7f72-0992-4fcb-aee5-1db66762d854"). InnerVolumeSpecName "kube-api-access-8xstc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:43:11 crc kubenswrapper[4624]: I1008 14:43:11.581792 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39ab7f72-0992-4fcb-aee5-1db66762d854-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "39ab7f72-0992-4fcb-aee5-1db66762d854" (UID: "39ab7f72-0992-4fcb-aee5-1db66762d854"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:11 crc kubenswrapper[4624]: I1008 14:43:11.606763 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39ab7f72-0992-4fcb-aee5-1db66762d854-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39ab7f72-0992-4fcb-aee5-1db66762d854" (UID: "39ab7f72-0992-4fcb-aee5-1db66762d854"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:11 crc kubenswrapper[4624]: I1008 14:43:11.640363 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39ab7f72-0992-4fcb-aee5-1db66762d854-config-data" (OuterVolumeSpecName: "config-data") pod "39ab7f72-0992-4fcb-aee5-1db66762d854" (UID: "39ab7f72-0992-4fcb-aee5-1db66762d854"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:11 crc kubenswrapper[4624]: I1008 14:43:11.666926 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39ab7f72-0992-4fcb-aee5-1db66762d854-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:11 crc kubenswrapper[4624]: I1008 14:43:11.667156 4624 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39ab7f72-0992-4fcb-aee5-1db66762d854-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:11 crc kubenswrapper[4624]: I1008 14:43:11.667222 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39ab7f72-0992-4fcb-aee5-1db66762d854-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:11 crc kubenswrapper[4624]: I1008 14:43:11.667286 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xstc\" (UniqueName: \"kubernetes.io/projected/39ab7f72-0992-4fcb-aee5-1db66762d854-kube-api-access-8xstc\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:11 crc kubenswrapper[4624]: I1008 14:43:11.897888 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-c8c76b4d4-k9vfh" Oct 08 14:43:11 crc kubenswrapper[4624]: I1008 14:43:11.966909 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-775f4cbd6b-68btn"] Oct 08 14:43:12 crc kubenswrapper[4624]: I1008 14:43:12.291819 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5d66f5894-xhq8d" event={"ID":"39ab7f72-0992-4fcb-aee5-1db66762d854","Type":"ContainerDied","Data":"438ace51b754fcdb62e8aac945c2e9db6dadd33908363d3cb40426b0b8faf51e"} Oct 08 14:43:12 crc kubenswrapper[4624]: I1008 14:43:12.291874 4624 scope.go:117] "RemoveContainer" containerID="89edd3840b7d46280e4927c0b64463b300e05f30937fc76b68c9b3b85764438c" Oct 08 14:43:12 crc kubenswrapper[4624]: I1008 14:43:12.292021 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5d66f5894-xhq8d" Oct 08 14:43:12 crc kubenswrapper[4624]: I1008 14:43:12.436114 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5d66f5894-xhq8d"] Oct 08 14:43:12 crc kubenswrapper[4624]: I1008 14:43:12.443457 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5d66f5894-xhq8d"] Oct 08 14:43:12 crc kubenswrapper[4624]: I1008 14:43:12.643497 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-775f4cbd6b-68btn" Oct 08 14:43:12 crc kubenswrapper[4624]: I1008 14:43:12.814137 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11363bb9-1c0c-46f9-a40f-85537a193cb5-config-data\") pod \"11363bb9-1c0c-46f9-a40f-85537a193cb5\" (UID: \"11363bb9-1c0c-46f9-a40f-85537a193cb5\") " Oct 08 14:43:12 crc kubenswrapper[4624]: I1008 14:43:12.814233 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44b8x\" (UniqueName: \"kubernetes.io/projected/11363bb9-1c0c-46f9-a40f-85537a193cb5-kube-api-access-44b8x\") pod \"11363bb9-1c0c-46f9-a40f-85537a193cb5\" (UID: \"11363bb9-1c0c-46f9-a40f-85537a193cb5\") " Oct 08 14:43:12 crc kubenswrapper[4624]: I1008 14:43:12.814287 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11363bb9-1c0c-46f9-a40f-85537a193cb5-combined-ca-bundle\") pod \"11363bb9-1c0c-46f9-a40f-85537a193cb5\" (UID: \"11363bb9-1c0c-46f9-a40f-85537a193cb5\") " Oct 08 14:43:12 crc kubenswrapper[4624]: I1008 14:43:12.814402 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11363bb9-1c0c-46f9-a40f-85537a193cb5-config-data-custom\") pod \"11363bb9-1c0c-46f9-a40f-85537a193cb5\" (UID: \"11363bb9-1c0c-46f9-a40f-85537a193cb5\") " Oct 08 14:43:12 crc kubenswrapper[4624]: I1008 14:43:12.820731 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11363bb9-1c0c-46f9-a40f-85537a193cb5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "11363bb9-1c0c-46f9-a40f-85537a193cb5" (UID: "11363bb9-1c0c-46f9-a40f-85537a193cb5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:12 crc kubenswrapper[4624]: I1008 14:43:12.876068 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11363bb9-1c0c-46f9-a40f-85537a193cb5-kube-api-access-44b8x" (OuterVolumeSpecName: "kube-api-access-44b8x") pod "11363bb9-1c0c-46f9-a40f-85537a193cb5" (UID: "11363bb9-1c0c-46f9-a40f-85537a193cb5"). InnerVolumeSpecName "kube-api-access-44b8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:43:12 crc kubenswrapper[4624]: I1008 14:43:12.918874 4624 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11363bb9-1c0c-46f9-a40f-85537a193cb5-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:12 crc kubenswrapper[4624]: I1008 14:43:12.918931 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44b8x\" (UniqueName: \"kubernetes.io/projected/11363bb9-1c0c-46f9-a40f-85537a193cb5-kube-api-access-44b8x\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:12 crc kubenswrapper[4624]: I1008 14:43:12.938774 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11363bb9-1c0c-46f9-a40f-85537a193cb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11363bb9-1c0c-46f9-a40f-85537a193cb5" (UID: "11363bb9-1c0c-46f9-a40f-85537a193cb5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:12 crc kubenswrapper[4624]: I1008 14:43:12.979982 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11363bb9-1c0c-46f9-a40f-85537a193cb5-config-data" (OuterVolumeSpecName: "config-data") pod "11363bb9-1c0c-46f9-a40f-85537a193cb5" (UID: "11363bb9-1c0c-46f9-a40f-85537a193cb5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.020201 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11363bb9-1c0c-46f9-a40f-85537a193cb5-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.020526 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11363bb9-1c0c-46f9-a40f-85537a193cb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.279021 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-588c6684fb-k5v22" Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.302509 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-775f4cbd6b-68btn" event={"ID":"11363bb9-1c0c-46f9-a40f-85537a193cb5","Type":"ContainerDied","Data":"6de274e2cef253be03aaa80ce0b2718cdcfbfb0d79d2f53d4b6d776a6a4821f7"} Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.302553 4624 scope.go:117] "RemoveContainer" containerID="93bb0f2c9889eeafd811040da77bbf4ffdd683b0ad8749debecf2335133125e2" Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.302654 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-775f4cbd6b-68btn" Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.309573 4624 generic.go:334] "Generic (PLEG): container finished" podID="9b419234-067e-41ef-8382-55b89cafa33a" containerID="862428038abd3c46773873307abacf21670979e7d247aad4ee0e43fd2bbe6fa4" exitCode=0 Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.309682 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-588c6684fb-k5v22" event={"ID":"9b419234-067e-41ef-8382-55b89cafa33a","Type":"ContainerDied","Data":"862428038abd3c46773873307abacf21670979e7d247aad4ee0e43fd2bbe6fa4"} Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.309711 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-588c6684fb-k5v22" event={"ID":"9b419234-067e-41ef-8382-55b89cafa33a","Type":"ContainerDied","Data":"0589f5c37fc2b02138f4d88ab91d33cd7322626627f795cc6e7fd6d82dd7749c"} Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.309776 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-588c6684fb-k5v22" Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.376935 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-775f4cbd6b-68btn"] Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.389529 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-775f4cbd6b-68btn"] Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.426608 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b419234-067e-41ef-8382-55b89cafa33a-combined-ca-bundle\") pod \"9b419234-067e-41ef-8382-55b89cafa33a\" (UID: \"9b419234-067e-41ef-8382-55b89cafa33a\") " Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.426786 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht7g6\" (UniqueName: \"kubernetes.io/projected/9b419234-067e-41ef-8382-55b89cafa33a-kube-api-access-ht7g6\") pod \"9b419234-067e-41ef-8382-55b89cafa33a\" (UID: \"9b419234-067e-41ef-8382-55b89cafa33a\") " Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.426822 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b419234-067e-41ef-8382-55b89cafa33a-config-data\") pod \"9b419234-067e-41ef-8382-55b89cafa33a\" (UID: \"9b419234-067e-41ef-8382-55b89cafa33a\") " Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.426902 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b419234-067e-41ef-8382-55b89cafa33a-config-data-custom\") pod \"9b419234-067e-41ef-8382-55b89cafa33a\" (UID: \"9b419234-067e-41ef-8382-55b89cafa33a\") " Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.435618 4624 scope.go:117] "RemoveContainer" containerID="862428038abd3c46773873307abacf21670979e7d247aad4ee0e43fd2bbe6fa4" Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.443841 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b419234-067e-41ef-8382-55b89cafa33a-kube-api-access-ht7g6" (OuterVolumeSpecName: "kube-api-access-ht7g6") pod "9b419234-067e-41ef-8382-55b89cafa33a" (UID: "9b419234-067e-41ef-8382-55b89cafa33a"). InnerVolumeSpecName "kube-api-access-ht7g6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.448705 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b419234-067e-41ef-8382-55b89cafa33a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9b419234-067e-41ef-8382-55b89cafa33a" (UID: "9b419234-067e-41ef-8382-55b89cafa33a"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.501564 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11363bb9-1c0c-46f9-a40f-85537a193cb5" path="/var/lib/kubelet/pods/11363bb9-1c0c-46f9-a40f-85537a193cb5/volumes" Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.502565 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39ab7f72-0992-4fcb-aee5-1db66762d854" path="/var/lib/kubelet/pods/39ab7f72-0992-4fcb-aee5-1db66762d854/volumes" Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.522221 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b419234-067e-41ef-8382-55b89cafa33a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b419234-067e-41ef-8382-55b89cafa33a" (UID: "9b419234-067e-41ef-8382-55b89cafa33a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.529419 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b419234-067e-41ef-8382-55b89cafa33a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.529464 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht7g6\" (UniqueName: \"kubernetes.io/projected/9b419234-067e-41ef-8382-55b89cafa33a-kube-api-access-ht7g6\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.529488 4624 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b419234-067e-41ef-8382-55b89cafa33a-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.611968 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b419234-067e-41ef-8382-55b89cafa33a-config-data" (OuterVolumeSpecName: "config-data") pod "9b419234-067e-41ef-8382-55b89cafa33a" (UID: "9b419234-067e-41ef-8382-55b89cafa33a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.631171 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b419234-067e-41ef-8382-55b89cafa33a-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.686598 4624 scope.go:117] "RemoveContainer" containerID="862428038abd3c46773873307abacf21670979e7d247aad4ee0e43fd2bbe6fa4" Oct 08 14:43:13 crc kubenswrapper[4624]: E1008 14:43:13.687003 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"862428038abd3c46773873307abacf21670979e7d247aad4ee0e43fd2bbe6fa4\": container with ID starting with 862428038abd3c46773873307abacf21670979e7d247aad4ee0e43fd2bbe6fa4 not found: ID does not exist" containerID="862428038abd3c46773873307abacf21670979e7d247aad4ee0e43fd2bbe6fa4" Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.687041 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"862428038abd3c46773873307abacf21670979e7d247aad4ee0e43fd2bbe6fa4"} err="failed to get container status \"862428038abd3c46773873307abacf21670979e7d247aad4ee0e43fd2bbe6fa4\": rpc error: code = NotFound desc = could not find container \"862428038abd3c46773873307abacf21670979e7d247aad4ee0e43fd2bbe6fa4\": container with ID starting with 862428038abd3c46773873307abacf21670979e7d247aad4ee0e43fd2bbe6fa4 not found: ID does not exist" Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.959841 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-588c6684fb-k5v22"] Oct 08 14:43:13 crc kubenswrapper[4624]: I1008 14:43:13.975160 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-588c6684fb-k5v22"] Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.197877 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.337247 4624 generic.go:334] "Generic (PLEG): container finished" podID="d5a01279-9918-4adc-a38f-f102fd394ca0" containerID="6f899ddbe71b42769f9453dc6bc746f5c6136d04980c21964efbbd19e42ad081" exitCode=0 Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.337730 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5a01279-9918-4adc-a38f-f102fd394ca0","Type":"ContainerDied","Data":"6f899ddbe71b42769f9453dc6bc746f5c6136d04980c21964efbbd19e42ad081"} Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.337770 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5a01279-9918-4adc-a38f-f102fd394ca0","Type":"ContainerDied","Data":"46375bf0864095c9a47fd6fde417fe0dd91ea7551d39bb53571de528bf053830"} Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.337835 4624 scope.go:117] "RemoveContainer" containerID="6f899ddbe71b42769f9453dc6bc746f5c6136d04980c21964efbbd19e42ad081" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.337833 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.344705 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-config-data\") pod \"d5a01279-9918-4adc-a38f-f102fd394ca0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.344793 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-scripts\") pod \"d5a01279-9918-4adc-a38f-f102fd394ca0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.344897 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-combined-ca-bundle\") pod \"d5a01279-9918-4adc-a38f-f102fd394ca0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.344925 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bs8w\" (UniqueName: \"kubernetes.io/projected/d5a01279-9918-4adc-a38f-f102fd394ca0-kube-api-access-6bs8w\") pod \"d5a01279-9918-4adc-a38f-f102fd394ca0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.344979 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-public-tls-certs\") pod \"d5a01279-9918-4adc-a38f-f102fd394ca0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.345069 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5a01279-9918-4adc-a38f-f102fd394ca0-httpd-run\") pod \"d5a01279-9918-4adc-a38f-f102fd394ca0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.345139 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5a01279-9918-4adc-a38f-f102fd394ca0-logs\") pod \"d5a01279-9918-4adc-a38f-f102fd394ca0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.345193 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"d5a01279-9918-4adc-a38f-f102fd394ca0\" (UID: \"d5a01279-9918-4adc-a38f-f102fd394ca0\") " Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.348037 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5a01279-9918-4adc-a38f-f102fd394ca0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d5a01279-9918-4adc-a38f-f102fd394ca0" (UID: "d5a01279-9918-4adc-a38f-f102fd394ca0"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.348160 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5a01279-9918-4adc-a38f-f102fd394ca0-logs" (OuterVolumeSpecName: "logs") pod "d5a01279-9918-4adc-a38f-f102fd394ca0" (UID: "d5a01279-9918-4adc-a38f-f102fd394ca0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.360247 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5a01279-9918-4adc-a38f-f102fd394ca0-kube-api-access-6bs8w" (OuterVolumeSpecName: "kube-api-access-6bs8w") pod "d5a01279-9918-4adc-a38f-f102fd394ca0" (UID: "d5a01279-9918-4adc-a38f-f102fd394ca0"). InnerVolumeSpecName "kube-api-access-6bs8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.364096 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-scripts" (OuterVolumeSpecName: "scripts") pod "d5a01279-9918-4adc-a38f-f102fd394ca0" (UID: "d5a01279-9918-4adc-a38f-f102fd394ca0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.365026 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "d5a01279-9918-4adc-a38f-f102fd394ca0" (UID: "d5a01279-9918-4adc-a38f-f102fd394ca0"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.427858 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d5a01279-9918-4adc-a38f-f102fd394ca0" (UID: "d5a01279-9918-4adc-a38f-f102fd394ca0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.445788 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5a01279-9918-4adc-a38f-f102fd394ca0" (UID: "d5a01279-9918-4adc-a38f-f102fd394ca0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.447567 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.447604 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.447624 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bs8w\" (UniqueName: \"kubernetes.io/projected/d5a01279-9918-4adc-a38f-f102fd394ca0-kube-api-access-6bs8w\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.447653 4624 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.447668 4624 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5a01279-9918-4adc-a38f-f102fd394ca0-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.447680 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5a01279-9918-4adc-a38f-f102fd394ca0-logs\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.447707 4624 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.502228 4624 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.547940 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-config-data" (OuterVolumeSpecName: "config-data") pod "d5a01279-9918-4adc-a38f-f102fd394ca0" (UID: "d5a01279-9918-4adc-a38f-f102fd394ca0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.549286 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5a01279-9918-4adc-a38f-f102fd394ca0-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.549305 4624 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.566087 4624 scope.go:117] "RemoveContainer" containerID="053ad65b4c100768721f861f1f190ecf6fd2234ab5c4cde6e3c43cf11f4ba1f2" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.612255 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f6cd65c74-7vqb5" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.620604 4624 scope.go:117] "RemoveContainer" containerID="6f899ddbe71b42769f9453dc6bc746f5c6136d04980c21964efbbd19e42ad081" Oct 08 14:43:14 crc kubenswrapper[4624]: E1008 14:43:14.621307 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f899ddbe71b42769f9453dc6bc746f5c6136d04980c21964efbbd19e42ad081\": container with ID starting with 6f899ddbe71b42769f9453dc6bc746f5c6136d04980c21964efbbd19e42ad081 not found: ID does not exist" containerID="6f899ddbe71b42769f9453dc6bc746f5c6136d04980c21964efbbd19e42ad081" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.621339 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f899ddbe71b42769f9453dc6bc746f5c6136d04980c21964efbbd19e42ad081"} err="failed to get container status \"6f899ddbe71b42769f9453dc6bc746f5c6136d04980c21964efbbd19e42ad081\": rpc error: code = NotFound desc = could not find container \"6f899ddbe71b42769f9453dc6bc746f5c6136d04980c21964efbbd19e42ad081\": container with ID starting with 6f899ddbe71b42769f9453dc6bc746f5c6136d04980c21964efbbd19e42ad081 not found: ID does not exist" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.621364 4624 scope.go:117] "RemoveContainer" containerID="053ad65b4c100768721f861f1f190ecf6fd2234ab5c4cde6e3c43cf11f4ba1f2" Oct 08 14:43:14 crc kubenswrapper[4624]: E1008 14:43:14.621733 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"053ad65b4c100768721f861f1f190ecf6fd2234ab5c4cde6e3c43cf11f4ba1f2\": container with ID starting with 053ad65b4c100768721f861f1f190ecf6fd2234ab5c4cde6e3c43cf11f4ba1f2 not found: ID does not exist" containerID="053ad65b4c100768721f861f1f190ecf6fd2234ab5c4cde6e3c43cf11f4ba1f2" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.621754 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"053ad65b4c100768721f861f1f190ecf6fd2234ab5c4cde6e3c43cf11f4ba1f2"} err="failed to get container status \"053ad65b4c100768721f861f1f190ecf6fd2234ab5c4cde6e3c43cf11f4ba1f2\": rpc error: code = NotFound desc = could not find container \"053ad65b4c100768721f861f1f190ecf6fd2234ab5c4cde6e3c43cf11f4ba1f2\": container with ID starting with 
053ad65b4c100768721f861f1f190ecf6fd2234ab5c4cde6e3c43cf11f4ba1f2 not found: ID does not exist" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.678505 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.688193 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.730112 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67f45f8444-g8bbs" podUID="378be2ad-3335-409f-b2eb-60b3997ed4f8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.732670 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 14:43:14 crc kubenswrapper[4624]: E1008 14:43:14.733134 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b419234-067e-41ef-8382-55b89cafa33a" containerName="heat-engine" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.733156 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b419234-067e-41ef-8382-55b89cafa33a" containerName="heat-engine" Oct 08 14:43:14 crc kubenswrapper[4624]: E1008 14:43:14.733178 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5a01279-9918-4adc-a38f-f102fd394ca0" containerName="glance-httpd" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.733188 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5a01279-9918-4adc-a38f-f102fd394ca0" containerName="glance-httpd" Oct 08 14:43:14 crc kubenswrapper[4624]: E1008 14:43:14.733204 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11363bb9-1c0c-46f9-a40f-85537a193cb5" containerName="heat-cfnapi" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.733212 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="11363bb9-1c0c-46f9-a40f-85537a193cb5" containerName="heat-cfnapi" Oct 08 14:43:14 crc kubenswrapper[4624]: E1008 14:43:14.733225 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4" containerName="dnsmasq-dns" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.733233 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4" containerName="dnsmasq-dns" Oct 08 14:43:14 crc kubenswrapper[4624]: E1008 14:43:14.733246 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39ab7f72-0992-4fcb-aee5-1db66762d854" containerName="heat-api" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.733253 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="39ab7f72-0992-4fcb-aee5-1db66762d854" containerName="heat-api" Oct 08 14:43:14 crc kubenswrapper[4624]: E1008 14:43:14.733261 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5a01279-9918-4adc-a38f-f102fd394ca0" containerName="glance-log" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.733268 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5a01279-9918-4adc-a38f-f102fd394ca0" containerName="glance-log" Oct 08 14:43:14 crc kubenswrapper[4624]: E1008 14:43:14.733288 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39ab7f72-0992-4fcb-aee5-1db66762d854" containerName="heat-api" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 
14:43:14.733297 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="39ab7f72-0992-4fcb-aee5-1db66762d854" containerName="heat-api" Oct 08 14:43:14 crc kubenswrapper[4624]: E1008 14:43:14.733309 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11363bb9-1c0c-46f9-a40f-85537a193cb5" containerName="heat-cfnapi" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.733317 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="11363bb9-1c0c-46f9-a40f-85537a193cb5" containerName="heat-cfnapi" Oct 08 14:43:14 crc kubenswrapper[4624]: E1008 14:43:14.733331 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4" containerName="init" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.733339 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4" containerName="init" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.733572 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b419234-067e-41ef-8382-55b89cafa33a" containerName="heat-engine" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.733591 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5a01279-9918-4adc-a38f-f102fd394ca0" containerName="glance-httpd" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.733601 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="39ab7f72-0992-4fcb-aee5-1db66762d854" containerName="heat-api" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.733612 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="11363bb9-1c0c-46f9-a40f-85537a193cb5" containerName="heat-cfnapi" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.733624 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5a01279-9918-4adc-a38f-f102fd394ca0" containerName="glance-log" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.733658 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="39ab7f72-0992-4fcb-aee5-1db66762d854" containerName="heat-api" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.733672 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="11363bb9-1c0c-46f9-a40f-85537a193cb5" containerName="heat-cfnapi" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.733680 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0f4d22f-ee96-4fe4-87b5-fbf7da0da3d4" containerName="dnsmasq-dns" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.735011 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.737661 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.738976 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.759481 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.840701 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-clh26"] Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.841953 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-clh26" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.853952 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.854003 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5719b9ea-1496-4097-b86f-39e516f37a0d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.854023 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5719b9ea-1496-4097-b86f-39e516f37a0d-config-data\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.854050 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzx2x\" (UniqueName: \"kubernetes.io/projected/5719b9ea-1496-4097-b86f-39e516f37a0d-kube-api-access-bzx2x\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.854282 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5719b9ea-1496-4097-b86f-39e516f37a0d-scripts\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.854379 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5719b9ea-1496-4097-b86f-39e516f37a0d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.854419 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5719b9ea-1496-4097-b86f-39e516f37a0d-logs\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.854544 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5719b9ea-1496-4097-b86f-39e516f37a0d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.864348 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-clh26"] Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.961713 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgl4v\" (UniqueName: \"kubernetes.io/projected/8e0de22c-50a2-4388-b13e-ff3935165e8b-kube-api-access-vgl4v\") pod \"nova-api-db-create-clh26\" (UID: \"8e0de22c-50a2-4388-b13e-ff3935165e8b\") " pod="openstack/nova-api-db-create-clh26" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.961782 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5719b9ea-1496-4097-b86f-39e516f37a0d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.961830 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.961872 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5719b9ea-1496-4097-b86f-39e516f37a0d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.961902 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5719b9ea-1496-4097-b86f-39e516f37a0d-config-data\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.961935 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzx2x\" (UniqueName: \"kubernetes.io/projected/5719b9ea-1496-4097-b86f-39e516f37a0d-kube-api-access-bzx2x\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.962008 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5719b9ea-1496-4097-b86f-39e516f37a0d-scripts\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.962048 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5719b9ea-1496-4097-b86f-39e516f37a0d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.962071 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5719b9ea-1496-4097-b86f-39e516f37a0d-logs\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.962654 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5719b9ea-1496-4097-b86f-39e516f37a0d-logs\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.966248 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Oct 08 14:43:14 crc kubenswrapper[4624]: I1008 14:43:14.967982 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5719b9ea-1496-4097-b86f-39e516f37a0d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.007133 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5719b9ea-1496-4097-b86f-39e516f37a0d-scripts\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.011004 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5719b9ea-1496-4097-b86f-39e516f37a0d-config-data\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.011614 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5719b9ea-1496-4097-b86f-39e516f37a0d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.014900 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5719b9ea-1496-4097-b86f-39e516f37a0d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.015037 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzx2x\" (UniqueName: \"kubernetes.io/projected/5719b9ea-1496-4097-b86f-39e516f37a0d-kube-api-access-bzx2x\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.021164 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-dcjsw"] Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.023423 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-dcjsw" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.036127 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dcjsw"] Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.065859 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgl4v\" (UniqueName: \"kubernetes.io/projected/8e0de22c-50a2-4388-b13e-ff3935165e8b-kube-api-access-vgl4v\") pod \"nova-api-db-create-clh26\" (UID: \"8e0de22c-50a2-4388-b13e-ff3935165e8b\") " pod="openstack/nova-api-db-create-clh26" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.074826 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-4cqc4"] Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.077079 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4cqc4" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.083143 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4cqc4"] Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.092048 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgl4v\" (UniqueName: \"kubernetes.io/projected/8e0de22c-50a2-4388-b13e-ff3935165e8b-kube-api-access-vgl4v\") pod \"nova-api-db-create-clh26\" (UID: \"8e0de22c-50a2-4388-b13e-ff3935165e8b\") " pod="openstack/nova-api-db-create-clh26" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.104473 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"5719b9ea-1496-4097-b86f-39e516f37a0d\") " pod="openstack/glance-default-external-api-0" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.169705 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdznn\" (UniqueName: \"kubernetes.io/projected/bbe86e77-ec3a-4807-9420-e402d309dc89-kube-api-access-wdznn\") pod \"nova-cell1-db-create-4cqc4\" (UID: \"bbe86e77-ec3a-4807-9420-e402d309dc89\") " pod="openstack/nova-cell1-db-create-4cqc4" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.169834 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp5jn\" (UniqueName: \"kubernetes.io/projected/4fc96968-af4b-45f5-8d90-b4866b4029fe-kube-api-access-jp5jn\") pod \"nova-cell0-db-create-dcjsw\" (UID: \"4fc96968-af4b-45f5-8d90-b4866b4029fe\") " pod="openstack/nova-cell0-db-create-dcjsw" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.180198 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-clh26" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.271477 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdznn\" (UniqueName: \"kubernetes.io/projected/bbe86e77-ec3a-4807-9420-e402d309dc89-kube-api-access-wdznn\") pod \"nova-cell1-db-create-4cqc4\" (UID: \"bbe86e77-ec3a-4807-9420-e402d309dc89\") " pod="openstack/nova-cell1-db-create-4cqc4" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.271589 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp5jn\" (UniqueName: \"kubernetes.io/projected/4fc96968-af4b-45f5-8d90-b4866b4029fe-kube-api-access-jp5jn\") pod \"nova-cell0-db-create-dcjsw\" (UID: \"4fc96968-af4b-45f5-8d90-b4866b4029fe\") " pod="openstack/nova-cell0-db-create-dcjsw" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.288215 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp5jn\" (UniqueName: \"kubernetes.io/projected/4fc96968-af4b-45f5-8d90-b4866b4029fe-kube-api-access-jp5jn\") pod \"nova-cell0-db-create-dcjsw\" (UID: \"4fc96968-af4b-45f5-8d90-b4866b4029fe\") " pod="openstack/nova-cell0-db-create-dcjsw" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.297683 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdznn\" (UniqueName: \"kubernetes.io/projected/bbe86e77-ec3a-4807-9420-e402d309dc89-kube-api-access-wdznn\") pod \"nova-cell1-db-create-4cqc4\" (UID: \"bbe86e77-ec3a-4807-9420-e402d309dc89\") " pod="openstack/nova-cell1-db-create-4cqc4" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.355282 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.386294 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dcjsw" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.410914 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-4cqc4" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.505818 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b419234-067e-41ef-8382-55b89cafa33a" path="/var/lib/kubelet/pods/9b419234-067e-41ef-8382-55b89cafa33a/volumes" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.506664 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5a01279-9918-4adc-a38f-f102fd394ca0" path="/var/lib/kubelet/pods/d5a01279-9918-4adc-a38f-f102fd394ca0/volumes" Oct 08 14:43:15 crc kubenswrapper[4624]: I1008 14:43:15.754254 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-clh26"] Oct 08 14:43:16 crc kubenswrapper[4624]: I1008 14:43:16.123260 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 14:43:16 crc kubenswrapper[4624]: I1008 14:43:16.216279 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dcjsw"] Oct 08 14:43:16 crc kubenswrapper[4624]: I1008 14:43:16.341608 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4cqc4"] Oct 08 14:43:16 crc kubenswrapper[4624]: I1008 14:43:16.498272 4624 generic.go:334] "Generic (PLEG): container finished" podID="8e0de22c-50a2-4388-b13e-ff3935165e8b" containerID="c92d14841cba9845cde9d9fc900bf900e9aeea3e6c7f45363b23f09342cb30f9" exitCode=0 Oct 08 14:43:16 crc kubenswrapper[4624]: I1008 14:43:16.498786 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-clh26" event={"ID":"8e0de22c-50a2-4388-b13e-ff3935165e8b","Type":"ContainerDied","Data":"c92d14841cba9845cde9d9fc900bf900e9aeea3e6c7f45363b23f09342cb30f9"} Oct 08 14:43:16 crc kubenswrapper[4624]: I1008 14:43:16.498819 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-clh26" event={"ID":"8e0de22c-50a2-4388-b13e-ff3935165e8b","Type":"ContainerStarted","Data":"0c576c0f97f81f7d72b0632abe3514392f87dcbea10b425a587f931dd92522e3"} Oct 08 14:43:16 crc kubenswrapper[4624]: I1008 14:43:16.518058 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4cqc4" event={"ID":"bbe86e77-ec3a-4807-9420-e402d309dc89","Type":"ContainerStarted","Data":"89dd7f72778d53df5f298e0f6b4ac46630f5ff3c87116dd9c2a15e4c8240689d"} Oct 08 14:43:16 crc kubenswrapper[4624]: I1008 14:43:16.521931 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5719b9ea-1496-4097-b86f-39e516f37a0d","Type":"ContainerStarted","Data":"bfcb53a56a9875400c7a96d3516a73a89699e2c9071758a23e3057fd7e51ac58"} Oct 08 14:43:16 crc kubenswrapper[4624]: I1008 14:43:16.524485 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dcjsw" event={"ID":"4fc96968-af4b-45f5-8d90-b4866b4029fe","Type":"ContainerStarted","Data":"1138d0184463d2747dd3900b6fb620ed9da3a5512ac8d4601b4444a872870844"} Oct 08 14:43:17 crc kubenswrapper[4624]: I1008 14:43:17.563779 4624 generic.go:334] "Generic (PLEG): container finished" podID="bbe86e77-ec3a-4807-9420-e402d309dc89" containerID="78d5e1b581cb301339174eea098474f59f6c2af43fbc9a790b85c45ea2cc00e2" exitCode=0 Oct 08 14:43:17 crc kubenswrapper[4624]: I1008 14:43:17.564029 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4cqc4" 
event={"ID":"bbe86e77-ec3a-4807-9420-e402d309dc89","Type":"ContainerDied","Data":"78d5e1b581cb301339174eea098474f59f6c2af43fbc9a790b85c45ea2cc00e2"} Oct 08 14:43:17 crc kubenswrapper[4624]: I1008 14:43:17.570005 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5719b9ea-1496-4097-b86f-39e516f37a0d","Type":"ContainerStarted","Data":"f58c0b173ea26c00df514b4259b31cc9bbed77844449b5a7de5781af1942429d"} Oct 08 14:43:17 crc kubenswrapper[4624]: I1008 14:43:17.583228 4624 generic.go:334] "Generic (PLEG): container finished" podID="4fc96968-af4b-45f5-8d90-b4866b4029fe" containerID="e7580b7db9931c4df2af3cf55e6073bdf9da5553f2887b5f903edc26ecaa27c3" exitCode=0 Oct 08 14:43:17 crc kubenswrapper[4624]: I1008 14:43:17.583910 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dcjsw" event={"ID":"4fc96968-af4b-45f5-8d90-b4866b4029fe","Type":"ContainerDied","Data":"e7580b7db9931c4df2af3cf55e6073bdf9da5553f2887b5f903edc26ecaa27c3"} Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.217607 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-clh26" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.294414 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgl4v\" (UniqueName: \"kubernetes.io/projected/8e0de22c-50a2-4388-b13e-ff3935165e8b-kube-api-access-vgl4v\") pod \"8e0de22c-50a2-4388-b13e-ff3935165e8b\" (UID: \"8e0de22c-50a2-4388-b13e-ff3935165e8b\") " Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.315989 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e0de22c-50a2-4388-b13e-ff3935165e8b-kube-api-access-vgl4v" (OuterVolumeSpecName: "kube-api-access-vgl4v") pod "8e0de22c-50a2-4388-b13e-ff3935165e8b" (UID: "8e0de22c-50a2-4388-b13e-ff3935165e8b"). InnerVolumeSpecName "kube-api-access-vgl4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.398362 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgl4v\" (UniqueName: \"kubernetes.io/projected/8e0de22c-50a2-4388-b13e-ff3935165e8b-kube-api-access-vgl4v\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.442608 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.499848 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-sg-core-conf-yaml\") pod \"99333548-c18f-4543-8e0e-d99cd2c0b968\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.499896 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-config-data\") pod \"99333548-c18f-4543-8e0e-d99cd2c0b968\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.499931 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99333548-c18f-4543-8e0e-d99cd2c0b968-log-httpd\") pod \"99333548-c18f-4543-8e0e-d99cd2c0b968\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.499968 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-scripts\") pod \"99333548-c18f-4543-8e0e-d99cd2c0b968\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.500116 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fn8l\" (UniqueName: \"kubernetes.io/projected/99333548-c18f-4543-8e0e-d99cd2c0b968-kube-api-access-2fn8l\") pod \"99333548-c18f-4543-8e0e-d99cd2c0b968\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.500159 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99333548-c18f-4543-8e0e-d99cd2c0b968-run-httpd\") pod \"99333548-c18f-4543-8e0e-d99cd2c0b968\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.500221 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-combined-ca-bundle\") pod \"99333548-c18f-4543-8e0e-d99cd2c0b968\" (UID: \"99333548-c18f-4543-8e0e-d99cd2c0b968\") " Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.500395 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99333548-c18f-4543-8e0e-d99cd2c0b968-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "99333548-c18f-4543-8e0e-d99cd2c0b968" (UID: "99333548-c18f-4543-8e0e-d99cd2c0b968"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.500602 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99333548-c18f-4543-8e0e-d99cd2c0b968-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "99333548-c18f-4543-8e0e-d99cd2c0b968" (UID: "99333548-c18f-4543-8e0e-d99cd2c0b968"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.500926 4624 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99333548-c18f-4543-8e0e-d99cd2c0b968-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.500948 4624 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99333548-c18f-4543-8e0e-d99cd2c0b968-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.512032 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99333548-c18f-4543-8e0e-d99cd2c0b968-kube-api-access-2fn8l" (OuterVolumeSpecName: "kube-api-access-2fn8l") pod "99333548-c18f-4543-8e0e-d99cd2c0b968" (UID: "99333548-c18f-4543-8e0e-d99cd2c0b968"). InnerVolumeSpecName "kube-api-access-2fn8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.521815 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-scripts" (OuterVolumeSpecName: "scripts") pod "99333548-c18f-4543-8e0e-d99cd2c0b968" (UID: "99333548-c18f-4543-8e0e-d99cd2c0b968"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.559084 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "99333548-c18f-4543-8e0e-d99cd2c0b968" (UID: "99333548-c18f-4543-8e0e-d99cd2c0b968"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.602339 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fn8l\" (UniqueName: \"kubernetes.io/projected/99333548-c18f-4543-8e0e-d99cd2c0b968-kube-api-access-2fn8l\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.603556 4624 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.603706 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.615472 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-clh26" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.615541 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-clh26" event={"ID":"8e0de22c-50a2-4388-b13e-ff3935165e8b","Type":"ContainerDied","Data":"0c576c0f97f81f7d72b0632abe3514392f87dcbea10b425a587f931dd92522e3"} Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.615587 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c576c0f97f81f7d72b0632abe3514392f87dcbea10b425a587f931dd92522e3" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.637346 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99333548-c18f-4543-8e0e-d99cd2c0b968" (UID: "99333548-c18f-4543-8e0e-d99cd2c0b968"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.638896 4624 generic.go:334] "Generic (PLEG): container finished" podID="99333548-c18f-4543-8e0e-d99cd2c0b968" containerID="7affe6c7f80f91534b596b57d32c09bcfb13f0305570aa18978f35af12c3f425" exitCode=0 Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.639002 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99333548-c18f-4543-8e0e-d99cd2c0b968","Type":"ContainerDied","Data":"7affe6c7f80f91534b596b57d32c09bcfb13f0305570aa18978f35af12c3f425"} Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.639000 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.639036 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99333548-c18f-4543-8e0e-d99cd2c0b968","Type":"ContainerDied","Data":"b8a2d1e0a35b0b8c89a02b40740d3594c64e65f4e35a37c30df1e8732cd0b6af"} Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.639055 4624 scope.go:117] "RemoveContainer" containerID="8292b2f3da88f157ebfcc48a8d638fca0c20f23488256930e0389211a2d1f7aa" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.646344 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5719b9ea-1496-4097-b86f-39e516f37a0d","Type":"ContainerStarted","Data":"92140c793109e63694b266da8686fd5bae5e122c152db6bc39999f6d4cd4a7e3"} Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.686970 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.686955023 podStartE2EDuration="4.686955023s" podCreationTimestamp="2025-10-08 14:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:43:18.684948092 +0000 UTC m=+1223.835883159" watchObservedRunningTime="2025-10-08 14:43:18.686955023 +0000 UTC m=+1223.837890100" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.707084 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.727791 4624 scope.go:117] "RemoveContainer" 
containerID="7910e04bddd2f034918542210cd7d4bba89422af54502821d25e17505e78807e" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.799805 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-config-data" (OuterVolumeSpecName: "config-data") pod "99333548-c18f-4543-8e0e-d99cd2c0b968" (UID: "99333548-c18f-4543-8e0e-d99cd2c0b968"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.809691 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99333548-c18f-4543-8e0e-d99cd2c0b968-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.846653 4624 scope.go:117] "RemoveContainer" containerID="614b10bfaa83243534f54fe39e9040083c4275b485b69a5dff391e97ff49969d" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.880347 4624 scope.go:117] "RemoveContainer" containerID="7affe6c7f80f91534b596b57d32c09bcfb13f0305570aa18978f35af12c3f425" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.927818 4624 scope.go:117] "RemoveContainer" containerID="8292b2f3da88f157ebfcc48a8d638fca0c20f23488256930e0389211a2d1f7aa" Oct 08 14:43:18 crc kubenswrapper[4624]: E1008 14:43:18.933418 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8292b2f3da88f157ebfcc48a8d638fca0c20f23488256930e0389211a2d1f7aa\": container with ID starting with 8292b2f3da88f157ebfcc48a8d638fca0c20f23488256930e0389211a2d1f7aa not found: ID does not exist" containerID="8292b2f3da88f157ebfcc48a8d638fca0c20f23488256930e0389211a2d1f7aa" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.933479 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8292b2f3da88f157ebfcc48a8d638fca0c20f23488256930e0389211a2d1f7aa"} err="failed to get container status \"8292b2f3da88f157ebfcc48a8d638fca0c20f23488256930e0389211a2d1f7aa\": rpc error: code = NotFound desc = could not find container \"8292b2f3da88f157ebfcc48a8d638fca0c20f23488256930e0389211a2d1f7aa\": container with ID starting with 8292b2f3da88f157ebfcc48a8d638fca0c20f23488256930e0389211a2d1f7aa not found: ID does not exist" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.933523 4624 scope.go:117] "RemoveContainer" containerID="7910e04bddd2f034918542210cd7d4bba89422af54502821d25e17505e78807e" Oct 08 14:43:18 crc kubenswrapper[4624]: E1008 14:43:18.936082 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7910e04bddd2f034918542210cd7d4bba89422af54502821d25e17505e78807e\": container with ID starting with 7910e04bddd2f034918542210cd7d4bba89422af54502821d25e17505e78807e not found: ID does not exist" containerID="7910e04bddd2f034918542210cd7d4bba89422af54502821d25e17505e78807e" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.936125 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7910e04bddd2f034918542210cd7d4bba89422af54502821d25e17505e78807e"} err="failed to get container status \"7910e04bddd2f034918542210cd7d4bba89422af54502821d25e17505e78807e\": rpc error: code = NotFound desc = could not find container \"7910e04bddd2f034918542210cd7d4bba89422af54502821d25e17505e78807e\": container with ID starting with 
7910e04bddd2f034918542210cd7d4bba89422af54502821d25e17505e78807e not found: ID does not exist" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.936170 4624 scope.go:117] "RemoveContainer" containerID="614b10bfaa83243534f54fe39e9040083c4275b485b69a5dff391e97ff49969d" Oct 08 14:43:18 crc kubenswrapper[4624]: E1008 14:43:18.945785 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"614b10bfaa83243534f54fe39e9040083c4275b485b69a5dff391e97ff49969d\": container with ID starting with 614b10bfaa83243534f54fe39e9040083c4275b485b69a5dff391e97ff49969d not found: ID does not exist" containerID="614b10bfaa83243534f54fe39e9040083c4275b485b69a5dff391e97ff49969d" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.945839 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"614b10bfaa83243534f54fe39e9040083c4275b485b69a5dff391e97ff49969d"} err="failed to get container status \"614b10bfaa83243534f54fe39e9040083c4275b485b69a5dff391e97ff49969d\": rpc error: code = NotFound desc = could not find container \"614b10bfaa83243534f54fe39e9040083c4275b485b69a5dff391e97ff49969d\": container with ID starting with 614b10bfaa83243534f54fe39e9040083c4275b485b69a5dff391e97ff49969d not found: ID does not exist" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.945870 4624 scope.go:117] "RemoveContainer" containerID="7affe6c7f80f91534b596b57d32c09bcfb13f0305570aa18978f35af12c3f425" Oct 08 14:43:18 crc kubenswrapper[4624]: E1008 14:43:18.954806 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7affe6c7f80f91534b596b57d32c09bcfb13f0305570aa18978f35af12c3f425\": container with ID starting with 7affe6c7f80f91534b596b57d32c09bcfb13f0305570aa18978f35af12c3f425 not found: ID does not exist" containerID="7affe6c7f80f91534b596b57d32c09bcfb13f0305570aa18978f35af12c3f425" Oct 08 14:43:18 crc kubenswrapper[4624]: I1008 14:43:18.954858 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7affe6c7f80f91534b596b57d32c09bcfb13f0305570aa18978f35af12c3f425"} err="failed to get container status \"7affe6c7f80f91534b596b57d32c09bcfb13f0305570aa18978f35af12c3f425\": rpc error: code = NotFound desc = could not find container \"7affe6c7f80f91534b596b57d32c09bcfb13f0305570aa18978f35af12c3f425\": container with ID starting with 7affe6c7f80f91534b596b57d32c09bcfb13f0305570aa18978f35af12c3f425 not found: ID does not exist" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.018254 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.032181 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.065880 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:43:19 crc kubenswrapper[4624]: E1008 14:43:19.066243 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99333548-c18f-4543-8e0e-d99cd2c0b968" containerName="proxy-httpd" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.066255 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="99333548-c18f-4543-8e0e-d99cd2c0b968" containerName="proxy-httpd" Oct 08 14:43:19 crc kubenswrapper[4624]: E1008 14:43:19.066277 4624 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="99333548-c18f-4543-8e0e-d99cd2c0b968" containerName="sg-core" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.066283 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="99333548-c18f-4543-8e0e-d99cd2c0b968" containerName="sg-core" Oct 08 14:43:19 crc kubenswrapper[4624]: E1008 14:43:19.066292 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e0de22c-50a2-4388-b13e-ff3935165e8b" containerName="mariadb-database-create" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.066297 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e0de22c-50a2-4388-b13e-ff3935165e8b" containerName="mariadb-database-create" Oct 08 14:43:19 crc kubenswrapper[4624]: E1008 14:43:19.066317 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99333548-c18f-4543-8e0e-d99cd2c0b968" containerName="ceilometer-central-agent" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.066324 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="99333548-c18f-4543-8e0e-d99cd2c0b968" containerName="ceilometer-central-agent" Oct 08 14:43:19 crc kubenswrapper[4624]: E1008 14:43:19.066335 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99333548-c18f-4543-8e0e-d99cd2c0b968" containerName="ceilometer-notification-agent" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.066341 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="99333548-c18f-4543-8e0e-d99cd2c0b968" containerName="ceilometer-notification-agent" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.066538 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="99333548-c18f-4543-8e0e-d99cd2c0b968" containerName="sg-core" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.066550 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e0de22c-50a2-4388-b13e-ff3935165e8b" containerName="mariadb-database-create" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.066570 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="99333548-c18f-4543-8e0e-d99cd2c0b968" containerName="ceilometer-central-agent" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.066577 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="99333548-c18f-4543-8e0e-d99cd2c0b968" containerName="proxy-httpd" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.066586 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="99333548-c18f-4543-8e0e-d99cd2c0b968" containerName="ceilometer-notification-agent" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.068204 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.077864 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.079314 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.080112 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.157693 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.227617 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqsww\" (UniqueName: \"kubernetes.io/projected/9f6a083e-a430-43e5-b0e7-cde0b6801986-kube-api-access-fqsww\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.227878 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-config-data\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.227962 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.228073 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.228163 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f6a083e-a430-43e5-b0e7-cde0b6801986-log-httpd\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.228251 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f6a083e-a430-43e5-b0e7-cde0b6801986-run-httpd\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.228403 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.228441 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-scripts\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.246351 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4cqc4" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.326081 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dcjsw" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.330682 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqsww\" (UniqueName: \"kubernetes.io/projected/9f6a083e-a430-43e5-b0e7-cde0b6801986-kube-api-access-fqsww\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.330735 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-config-data\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.330760 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.330797 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.330819 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f6a083e-a430-43e5-b0e7-cde0b6801986-log-httpd\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.330865 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f6a083e-a430-43e5-b0e7-cde0b6801986-run-httpd\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.330888 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.330904 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-scripts\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.333181 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9f6a083e-a430-43e5-b0e7-cde0b6801986-run-httpd\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.333385 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f6a083e-a430-43e5-b0e7-cde0b6801986-log-httpd\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.339124 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.339148 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-scripts\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.339220 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.344395 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-config-data\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.344879 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.352946 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqsww\" (UniqueName: \"kubernetes.io/projected/9f6a083e-a430-43e5-b0e7-cde0b6801986-kube-api-access-fqsww\") pod \"ceilometer-0\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.415223 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.431732 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp5jn\" (UniqueName: \"kubernetes.io/projected/4fc96968-af4b-45f5-8d90-b4866b4029fe-kube-api-access-jp5jn\") pod \"4fc96968-af4b-45f5-8d90-b4866b4029fe\" (UID: \"4fc96968-af4b-45f5-8d90-b4866b4029fe\") " Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.431991 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdznn\" (UniqueName: \"kubernetes.io/projected/bbe86e77-ec3a-4807-9420-e402d309dc89-kube-api-access-wdznn\") pod \"bbe86e77-ec3a-4807-9420-e402d309dc89\" (UID: \"bbe86e77-ec3a-4807-9420-e402d309dc89\") " Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.440876 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fc96968-af4b-45f5-8d90-b4866b4029fe-kube-api-access-jp5jn" (OuterVolumeSpecName: "kube-api-access-jp5jn") pod "4fc96968-af4b-45f5-8d90-b4866b4029fe" (UID: "4fc96968-af4b-45f5-8d90-b4866b4029fe"). InnerVolumeSpecName "kube-api-access-jp5jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.440992 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbe86e77-ec3a-4807-9420-e402d309dc89-kube-api-access-wdznn" (OuterVolumeSpecName: "kube-api-access-wdznn") pod "bbe86e77-ec3a-4807-9420-e402d309dc89" (UID: "bbe86e77-ec3a-4807-9420-e402d309dc89"). InnerVolumeSpecName "kube-api-access-wdznn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.493867 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99333548-c18f-4543-8e0e-d99cd2c0b968" path="/var/lib/kubelet/pods/99333548-c18f-4543-8e0e-d99cd2c0b968/volumes" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.534136 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp5jn\" (UniqueName: \"kubernetes.io/projected/4fc96968-af4b-45f5-8d90-b4866b4029fe-kube-api-access-jp5jn\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.534352 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdznn\" (UniqueName: \"kubernetes.io/projected/bbe86e77-ec3a-4807-9420-e402d309dc89-kube-api-access-wdznn\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.705903 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4cqc4" event={"ID":"bbe86e77-ec3a-4807-9420-e402d309dc89","Type":"ContainerDied","Data":"89dd7f72778d53df5f298e0f6b4ac46630f5ff3c87116dd9c2a15e4c8240689d"} Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.705934 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89dd7f72778d53df5f298e0f6b4ac46630f5ff3c87116dd9c2a15e4c8240689d" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.705992 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-4cqc4" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.720378 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dcjsw" event={"ID":"4fc96968-af4b-45f5-8d90-b4866b4029fe","Type":"ContainerDied","Data":"1138d0184463d2747dd3900b6fb620ed9da3a5512ac8d4601b4444a872870844"} Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.720453 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1138d0184463d2747dd3900b6fb620ed9da3a5512ac8d4601b4444a872870844" Oct 08 14:43:19 crc kubenswrapper[4624]: I1008 14:43:19.720551 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dcjsw" Oct 08 14:43:20 crc kubenswrapper[4624]: I1008 14:43:20.065699 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:43:20 crc kubenswrapper[4624]: W1008 14:43:20.075882 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f6a083e_a430_43e5_b0e7_cde0b6801986.slice/crio-a5ea0f29e8ef0740f6acc039694d8892a0d1dcc6aa57286d2376d8eaf72afb4e WatchSource:0}: Error finding container a5ea0f29e8ef0740f6acc039694d8892a0d1dcc6aa57286d2376d8eaf72afb4e: Status 404 returned error can't find the container with id a5ea0f29e8ef0740f6acc039694d8892a0d1dcc6aa57286d2376d8eaf72afb4e Oct 08 14:43:20 crc kubenswrapper[4624]: I1008 14:43:20.758741 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f6a083e-a430-43e5-b0e7-cde0b6801986","Type":"ContainerStarted","Data":"725adcb0716670ee444ee9dd9d584c172a9fb89a97fa996a1d8b72c815b9b0b1"} Oct 08 14:43:20 crc kubenswrapper[4624]: I1008 14:43:20.763819 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f6a083e-a430-43e5-b0e7-cde0b6801986","Type":"ContainerStarted","Data":"a5ea0f29e8ef0740f6acc039694d8892a0d1dcc6aa57286d2376d8eaf72afb4e"} Oct 08 14:43:21 crc kubenswrapper[4624]: I1008 14:43:21.498226 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 14:43:21 crc kubenswrapper[4624]: I1008 14:43:21.498819 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="92d52b0f-221f-4cc7-9157-3fef68ed6db6" containerName="glance-log" containerID="cri-o://0370beffcd9029ca9aa02c016176968c9aa0a70ea4ddea1040bd12e0e46896fc" gracePeriod=30 Oct 08 14:43:21 crc kubenswrapper[4624]: I1008 14:43:21.498966 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="92d52b0f-221f-4cc7-9157-3fef68ed6db6" containerName="glance-httpd" containerID="cri-o://f627a6d3e3509d80894084fb697794ea1607f15a77e3a1e1d0be006c52a9a14c" gracePeriod=30 Oct 08 14:43:21 crc kubenswrapper[4624]: I1008 14:43:21.771233 4624 generic.go:334] "Generic (PLEG): container finished" podID="92d52b0f-221f-4cc7-9157-3fef68ed6db6" containerID="0370beffcd9029ca9aa02c016176968c9aa0a70ea4ddea1040bd12e0e46896fc" exitCode=143 Oct 08 14:43:21 crc kubenswrapper[4624]: I1008 14:43:21.771320 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"92d52b0f-221f-4cc7-9157-3fef68ed6db6","Type":"ContainerDied","Data":"0370beffcd9029ca9aa02c016176968c9aa0a70ea4ddea1040bd12e0e46896fc"} Oct 08 14:43:21 crc 
kubenswrapper[4624]: I1008 14:43:21.773911 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f6a083e-a430-43e5-b0e7-cde0b6801986","Type":"ContainerStarted","Data":"f15c7028b09f795119ae44762e5a19c9b419b53eeb32590924d3deea617a6249"} Oct 08 14:43:21 crc kubenswrapper[4624]: I1008 14:43:21.773950 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f6a083e-a430-43e5-b0e7-cde0b6801986","Type":"ContainerStarted","Data":"63bdb4cf09e0d7708e6fa813c9015b0fa2983784b0cc6cad32e382cdb3c47693"} Oct 08 14:43:23 crc kubenswrapper[4624]: I1008 14:43:23.754544 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:43:24 crc kubenswrapper[4624]: I1008 14:43:24.612046 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f6cd65c74-7vqb5" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Oct 08 14:43:24 crc kubenswrapper[4624]: I1008 14:43:24.612741 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:43:24 crc kubenswrapper[4624]: I1008 14:43:24.613837 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"2adb37d7e1e3ae4257d4557e08b6dab44a21981eeeec3209e1f7922febff5e54"} pod="openstack/horizon-6f6cd65c74-7vqb5" containerMessage="Container horizon failed startup probe, will be restarted" Oct 08 14:43:24 crc kubenswrapper[4624]: I1008 14:43:24.613952 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f6cd65c74-7vqb5" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" containerID="cri-o://2adb37d7e1e3ae4257d4557e08b6dab44a21981eeeec3209e1f7922febff5e54" gracePeriod=30 Oct 08 14:43:24 crc kubenswrapper[4624]: I1008 14:43:24.719788 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67f45f8444-g8bbs" podUID="378be2ad-3335-409f-b2eb-60b3997ed4f8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Oct 08 14:43:24 crc kubenswrapper[4624]: I1008 14:43:24.719861 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:43:24 crc kubenswrapper[4624]: I1008 14:43:24.720764 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"da715b96eb2640f5105e03011f83947a4366c774c3377e422913cdf2a5cee143"} pod="openstack/horizon-67f45f8444-g8bbs" containerMessage="Container horizon failed startup probe, will be restarted" Oct 08 14:43:24 crc kubenswrapper[4624]: I1008 14:43:24.720806 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-67f45f8444-g8bbs" podUID="378be2ad-3335-409f-b2eb-60b3997ed4f8" containerName="horizon" containerID="cri-o://da715b96eb2640f5105e03011f83947a4366c774c3377e422913cdf2a5cee143" gracePeriod=30 Oct 08 14:43:24 crc kubenswrapper[4624]: I1008 14:43:24.743166 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="92d52b0f-221f-4cc7-9157-3fef68ed6db6" 
containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.171:9292/healthcheck\": read tcp 10.217.0.2:33236->10.217.0.171:9292: read: connection reset by peer" Oct 08 14:43:24 crc kubenswrapper[4624]: I1008 14:43:24.743174 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="92d52b0f-221f-4cc7-9157-3fef68ed6db6" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.171:9292/healthcheck\": read tcp 10.217.0.2:33222->10.217.0.171:9292: read: connection reset by peer" Oct 08 14:43:24 crc kubenswrapper[4624]: I1008 14:43:24.801563 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f6a083e-a430-43e5-b0e7-cde0b6801986","Type":"ContainerStarted","Data":"81d7020e8fd2ee713efdddc70593076cfc3ad528dcfd090a661e86971f2768f0"} Oct 08 14:43:24 crc kubenswrapper[4624]: I1008 14:43:24.802598 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 08 14:43:24 crc kubenswrapper[4624]: I1008 14:43:24.802043 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f6a083e-a430-43e5-b0e7-cde0b6801986" containerName="proxy-httpd" containerID="cri-o://81d7020e8fd2ee713efdddc70593076cfc3ad528dcfd090a661e86971f2768f0" gracePeriod=30 Oct 08 14:43:24 crc kubenswrapper[4624]: I1008 14:43:24.802057 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f6a083e-a430-43e5-b0e7-cde0b6801986" containerName="sg-core" containerID="cri-o://f15c7028b09f795119ae44762e5a19c9b419b53eeb32590924d3deea617a6249" gracePeriod=30 Oct 08 14:43:24 crc kubenswrapper[4624]: I1008 14:43:24.802073 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f6a083e-a430-43e5-b0e7-cde0b6801986" containerName="ceilometer-notification-agent" containerID="cri-o://63bdb4cf09e0d7708e6fa813c9015b0fa2983784b0cc6cad32e382cdb3c47693" gracePeriod=30 Oct 08 14:43:24 crc kubenswrapper[4624]: I1008 14:43:24.801682 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f6a083e-a430-43e5-b0e7-cde0b6801986" containerName="ceilometer-central-agent" containerID="cri-o://725adcb0716670ee444ee9dd9d584c172a9fb89a97fa996a1d8b72c815b9b0b1" gracePeriod=30 Oct 08 14:43:24 crc kubenswrapper[4624]: I1008 14:43:24.866711 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.5631895289999997 podStartE2EDuration="5.866688257s" podCreationTimestamp="2025-10-08 14:43:19 +0000 UTC" firstStartedPulling="2025-10-08 14:43:20.077663614 +0000 UTC m=+1225.228598691" lastFinishedPulling="2025-10-08 14:43:23.381162342 +0000 UTC m=+1228.532097419" observedRunningTime="2025-10-08 14:43:24.851877363 +0000 UTC m=+1230.002812450" watchObservedRunningTime="2025-10-08 14:43:24.866688257 +0000 UTC m=+1230.017623334" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.128138 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-4d3c-account-create-jcsvm"] Oct 08 14:43:25 crc kubenswrapper[4624]: E1008 14:43:25.128541 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe86e77-ec3a-4807-9420-e402d309dc89" containerName="mariadb-database-create" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.128557 4624 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bbe86e77-ec3a-4807-9420-e402d309dc89" containerName="mariadb-database-create" Oct 08 14:43:25 crc kubenswrapper[4624]: E1008 14:43:25.128603 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fc96968-af4b-45f5-8d90-b4866b4029fe" containerName="mariadb-database-create" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.128609 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fc96968-af4b-45f5-8d90-b4866b4029fe" containerName="mariadb-database-create" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.128814 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbe86e77-ec3a-4807-9420-e402d309dc89" containerName="mariadb-database-create" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.128828 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fc96968-af4b-45f5-8d90-b4866b4029fe" containerName="mariadb-database-create" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.129458 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4d3c-account-create-jcsvm" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.131120 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.143195 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4d3c-account-create-jcsvm"] Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.254231 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krmjj\" (UniqueName: \"kubernetes.io/projected/348ff6b7-b99d-45ee-91a2-f680c29ae8f3-kube-api-access-krmjj\") pod \"nova-api-4d3c-account-create-jcsvm\" (UID: \"348ff6b7-b99d-45ee-91a2-f680c29ae8f3\") " pod="openstack/nova-api-4d3c-account-create-jcsvm" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.324730 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-077a-account-create-sfqj9"] Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.325849 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-077a-account-create-sfqj9" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.329431 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.354872 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-077a-account-create-sfqj9"] Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.356725 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krmjj\" (UniqueName: \"kubernetes.io/projected/348ff6b7-b99d-45ee-91a2-f680c29ae8f3-kube-api-access-krmjj\") pod \"nova-api-4d3c-account-create-jcsvm\" (UID: \"348ff6b7-b99d-45ee-91a2-f680c29ae8f3\") " pod="openstack/nova-api-4d3c-account-create-jcsvm" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.357324 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.357359 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.397257 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krmjj\" (UniqueName: \"kubernetes.io/projected/348ff6b7-b99d-45ee-91a2-f680c29ae8f3-kube-api-access-krmjj\") pod \"nova-api-4d3c-account-create-jcsvm\" (UID: \"348ff6b7-b99d-45ee-91a2-f680c29ae8f3\") " pod="openstack/nova-api-4d3c-account-create-jcsvm" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.445785 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4d3c-account-create-jcsvm" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.456213 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.457724 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57pgv\" (UniqueName: \"kubernetes.io/projected/0191b5d5-f9f7-49b3-8732-a35447adf088-kube-api-access-57pgv\") pod \"nova-cell0-077a-account-create-sfqj9\" (UID: \"0191b5d5-f9f7-49b3-8732-a35447adf088\") " pod="openstack/nova-cell0-077a-account-create-sfqj9" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.508073 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.560007 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-653b-account-create-p9cxk"] Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.560889 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57pgv\" (UniqueName: \"kubernetes.io/projected/0191b5d5-f9f7-49b3-8732-a35447adf088-kube-api-access-57pgv\") pod \"nova-cell0-077a-account-create-sfqj9\" (UID: \"0191b5d5-f9f7-49b3-8732-a35447adf088\") " pod="openstack/nova-cell0-077a-account-create-sfqj9" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.561153 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-653b-account-create-p9cxk" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.571329 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.598619 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57pgv\" (UniqueName: \"kubernetes.io/projected/0191b5d5-f9f7-49b3-8732-a35447adf088-kube-api-access-57pgv\") pod \"nova-cell0-077a-account-create-sfqj9\" (UID: \"0191b5d5-f9f7-49b3-8732-a35447adf088\") " pod="openstack/nova-cell0-077a-account-create-sfqj9" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.608694 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-653b-account-create-p9cxk"] Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.662795 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d65lk\" (UniqueName: \"kubernetes.io/projected/2f36a442-e41e-472f-9c08-bc93c70c4fb5-kube-api-access-d65lk\") pod \"nova-cell1-653b-account-create-p9cxk\" (UID: \"2f36a442-e41e-472f-9c08-bc93c70c4fb5\") " pod="openstack/nova-cell1-653b-account-create-p9cxk" Oct 08 14:43:25 crc kubenswrapper[4624]: E1008 14:43:25.681566 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f6a083e_a430_43e5_b0e7_cde0b6801986.slice/crio-81d7020e8fd2ee713efdddc70593076cfc3ad528dcfd090a661e86971f2768f0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f6a083e_a430_43e5_b0e7_cde0b6801986.slice/crio-conmon-63bdb4cf09e0d7708e6fa813c9015b0fa2983784b0cc6cad32e382cdb3c47693.scope\": RecentStats: unable to find data in memory cache]" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.746293 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-077a-account-create-sfqj9" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.765053 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d65lk\" (UniqueName: \"kubernetes.io/projected/2f36a442-e41e-472f-9c08-bc93c70c4fb5-kube-api-access-d65lk\") pod \"nova-cell1-653b-account-create-p9cxk\" (UID: \"2f36a442-e41e-472f-9c08-bc93c70c4fb5\") " pod="openstack/nova-cell1-653b-account-create-p9cxk" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.790375 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d65lk\" (UniqueName: \"kubernetes.io/projected/2f36a442-e41e-472f-9c08-bc93c70c4fb5-kube-api-access-d65lk\") pod \"nova-cell1-653b-account-create-p9cxk\" (UID: \"2f36a442-e41e-472f-9c08-bc93c70c4fb5\") " pod="openstack/nova-cell1-653b-account-create-p9cxk" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.900398 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-653b-account-create-p9cxk" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.940211 4624 generic.go:334] "Generic (PLEG): container finished" podID="9f6a083e-a430-43e5-b0e7-cde0b6801986" containerID="81d7020e8fd2ee713efdddc70593076cfc3ad528dcfd090a661e86971f2768f0" exitCode=0 Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.940252 4624 generic.go:334] "Generic (PLEG): container finished" podID="9f6a083e-a430-43e5-b0e7-cde0b6801986" containerID="f15c7028b09f795119ae44762e5a19c9b419b53eeb32590924d3deea617a6249" exitCode=2 Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.940263 4624 generic.go:334] "Generic (PLEG): container finished" podID="9f6a083e-a430-43e5-b0e7-cde0b6801986" containerID="63bdb4cf09e0d7708e6fa813c9015b0fa2983784b0cc6cad32e382cdb3c47693" exitCode=0 Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.940315 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f6a083e-a430-43e5-b0e7-cde0b6801986","Type":"ContainerDied","Data":"81d7020e8fd2ee713efdddc70593076cfc3ad528dcfd090a661e86971f2768f0"} Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.940345 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f6a083e-a430-43e5-b0e7-cde0b6801986","Type":"ContainerDied","Data":"f15c7028b09f795119ae44762e5a19c9b419b53eeb32590924d3deea617a6249"} Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.940360 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f6a083e-a430-43e5-b0e7-cde0b6801986","Type":"ContainerDied","Data":"63bdb4cf09e0d7708e6fa813c9015b0fa2983784b0cc6cad32e382cdb3c47693"} Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.986173 4624 generic.go:334] "Generic (PLEG): container finished" podID="92d52b0f-221f-4cc7-9157-3fef68ed6db6" containerID="f627a6d3e3509d80894084fb697794ea1607f15a77e3a1e1d0be006c52a9a14c" exitCode=0 Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.987841 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"92d52b0f-221f-4cc7-9157-3fef68ed6db6","Type":"ContainerDied","Data":"f627a6d3e3509d80894084fb697794ea1607f15a77e3a1e1d0be006c52a9a14c"} Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.987893 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 08 14:43:25 crc kubenswrapper[4624]: I1008 14:43:25.987908 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.051166 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.083457 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-internal-tls-certs\") pod \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.083600 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-config-data\") pod \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.083658 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-scripts\") pod \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.083693 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d52b0f-221f-4cc7-9157-3fef68ed6db6-logs\") pod \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.083748 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-combined-ca-bundle\") pod \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.083772 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhfvf\" (UniqueName: \"kubernetes.io/projected/92d52b0f-221f-4cc7-9157-3fef68ed6db6-kube-api-access-xhfvf\") pod \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.083814 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/92d52b0f-221f-4cc7-9157-3fef68ed6db6-httpd-run\") pod \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.083845 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\" (UID: \"92d52b0f-221f-4cc7-9157-3fef68ed6db6\") " Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.086289 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92d52b0f-221f-4cc7-9157-3fef68ed6db6-logs" (OuterVolumeSpecName: "logs") pod "92d52b0f-221f-4cc7-9157-3fef68ed6db6" (UID: "92d52b0f-221f-4cc7-9157-3fef68ed6db6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.086729 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92d52b0f-221f-4cc7-9157-3fef68ed6db6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "92d52b0f-221f-4cc7-9157-3fef68ed6db6" (UID: "92d52b0f-221f-4cc7-9157-3fef68ed6db6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.107866 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92d52b0f-221f-4cc7-9157-3fef68ed6db6-kube-api-access-xhfvf" (OuterVolumeSpecName: "kube-api-access-xhfvf") pod "92d52b0f-221f-4cc7-9157-3fef68ed6db6" (UID: "92d52b0f-221f-4cc7-9157-3fef68ed6db6"). InnerVolumeSpecName "kube-api-access-xhfvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.120233 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-scripts" (OuterVolumeSpecName: "scripts") pod "92d52b0f-221f-4cc7-9157-3fef68ed6db6" (UID: "92d52b0f-221f-4cc7-9157-3fef68ed6db6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.132804 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "92d52b0f-221f-4cc7-9157-3fef68ed6db6" (UID: "92d52b0f-221f-4cc7-9157-3fef68ed6db6"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.151351 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92d52b0f-221f-4cc7-9157-3fef68ed6db6" (UID: "92d52b0f-221f-4cc7-9157-3fef68ed6db6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.187559 4624 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.187615 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.187626 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d52b0f-221f-4cc7-9157-3fef68ed6db6-logs\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.187664 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.187678 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhfvf\" (UniqueName: \"kubernetes.io/projected/92d52b0f-221f-4cc7-9157-3fef68ed6db6-kube-api-access-xhfvf\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.187690 4624 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/92d52b0f-221f-4cc7-9157-3fef68ed6db6-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.248068 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "92d52b0f-221f-4cc7-9157-3fef68ed6db6" (UID: "92d52b0f-221f-4cc7-9157-3fef68ed6db6"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.268947 4624 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.276593 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4d3c-account-create-jcsvm"] Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.288820 4624 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.288846 4624 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:26 crc kubenswrapper[4624]: W1008 14:43:26.291099 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod348ff6b7_b99d_45ee_91a2_f680c29ae8f3.slice/crio-3517874bedae0d0687284114606792bee23e9dbe00d46712bd14c528df04eb47 WatchSource:0}: Error finding container 3517874bedae0d0687284114606792bee23e9dbe00d46712bd14c528df04eb47: Status 404 returned error can't find the container with id 3517874bedae0d0687284114606792bee23e9dbe00d46712bd14c528df04eb47 Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.322415 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-config-data" (OuterVolumeSpecName: "config-data") pod "92d52b0f-221f-4cc7-9157-3fef68ed6db6" (UID: "92d52b0f-221f-4cc7-9157-3fef68ed6db6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.411353 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d52b0f-221f-4cc7-9157-3fef68ed6db6-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.672411 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-653b-account-create-p9cxk"] Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.833285 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-077a-account-create-sfqj9"] Oct 08 14:43:26 crc kubenswrapper[4624]: W1008 14:43:26.849112 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0191b5d5_f9f7_49b3_8732_a35447adf088.slice/crio-14fe4c4d4433461deb3151a3223f03ad528fbba5a8d3d7fcbbe6650bf9b3fe1b WatchSource:0}: Error finding container 14fe4c4d4433461deb3151a3223f03ad528fbba5a8d3d7fcbbe6650bf9b3fe1b: Status 404 returned error can't find the container with id 14fe4c4d4433461deb3151a3223f03ad528fbba5a8d3d7fcbbe6650bf9b3fe1b Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.996155 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-653b-account-create-p9cxk" event={"ID":"2f36a442-e41e-472f-9c08-bc93c70c4fb5","Type":"ContainerStarted","Data":"0fc4c598656fddc91bd1b3cf9b35448d21a31d6f79a85c6c4da7b6355a73962f"} Oct 08 14:43:26 crc kubenswrapper[4624]: I1008 14:43:26.999122 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-077a-account-create-sfqj9" event={"ID":"0191b5d5-f9f7-49b3-8732-a35447adf088","Type":"ContainerStarted","Data":"14fe4c4d4433461deb3151a3223f03ad528fbba5a8d3d7fcbbe6650bf9b3fe1b"} Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.004035 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"92d52b0f-221f-4cc7-9157-3fef68ed6db6","Type":"ContainerDied","Data":"6c3bcdbc830096e2a99c2e4c904e7ccf0ba820515aba0ad9665cda0dbf58ecc3"} Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.004087 4624 scope.go:117] "RemoveContainer" containerID="f627a6d3e3509d80894084fb697794ea1607f15a77e3a1e1d0be006c52a9a14c" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.004410 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.007571 4624 generic.go:334] "Generic (PLEG): container finished" podID="348ff6b7-b99d-45ee-91a2-f680c29ae8f3" containerID="1a43a24af53bebd6d486446158327fa5e7d4c116fc47acd313af065951398d6c" exitCode=0 Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.007647 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4d3c-account-create-jcsvm" event={"ID":"348ff6b7-b99d-45ee-91a2-f680c29ae8f3","Type":"ContainerDied","Data":"1a43a24af53bebd6d486446158327fa5e7d4c116fc47acd313af065951398d6c"} Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.007691 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4d3c-account-create-jcsvm" event={"ID":"348ff6b7-b99d-45ee-91a2-f680c29ae8f3","Type":"ContainerStarted","Data":"3517874bedae0d0687284114606792bee23e9dbe00d46712bd14c528df04eb47"} Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.039235 4624 scope.go:117] "RemoveContainer" containerID="0370beffcd9029ca9aa02c016176968c9aa0a70ea4ddea1040bd12e0e46896fc" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.042931 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.054357 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.073198 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 14:43:27 crc kubenswrapper[4624]: E1008 14:43:27.073570 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d52b0f-221f-4cc7-9157-3fef68ed6db6" containerName="glance-httpd" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.073585 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d52b0f-221f-4cc7-9157-3fef68ed6db6" containerName="glance-httpd" Oct 08 14:43:27 crc kubenswrapper[4624]: E1008 14:43:27.073624 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d52b0f-221f-4cc7-9157-3fef68ed6db6" containerName="glance-log" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.083059 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d52b0f-221f-4cc7-9157-3fef68ed6db6" containerName="glance-log" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.083425 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d52b0f-221f-4cc7-9157-3fef68ed6db6" containerName="glance-log" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.083465 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d52b0f-221f-4cc7-9157-3fef68ed6db6" containerName="glance-httpd" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.084434 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.090616 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.095053 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.095230 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.233414 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.233576 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.233652 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdv2q\" (UniqueName: \"kubernetes.io/projected/36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1-kube-api-access-sdv2q\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.233797 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.233847 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.233897 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1-logs\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.234118 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.234164 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.336055 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.336120 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdv2q\" (UniqueName: \"kubernetes.io/projected/36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1-kube-api-access-sdv2q\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.336151 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.336167 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.336184 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1-logs\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.336248 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.336267 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.336302 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.336614 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.336937 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.337048 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1-logs\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.343863 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.344473 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.345065 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.346111 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.368492 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.370420 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdv2q\" (UniqueName: \"kubernetes.io/projected/36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1-kube-api-access-sdv2q\") pod \"glance-default-internal-api-0\" (UID: \"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1\") " pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.432056 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 14:43:27 crc kubenswrapper[4624]: I1008 14:43:27.494971 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92d52b0f-221f-4cc7-9157-3fef68ed6db6" path="/var/lib/kubelet/pods/92d52b0f-221f-4cc7-9157-3fef68ed6db6/volumes" Oct 08 14:43:28 crc kubenswrapper[4624]: I1008 14:43:28.070072 4624 generic.go:334] "Generic (PLEG): container finished" podID="2f36a442-e41e-472f-9c08-bc93c70c4fb5" containerID="836668c3044f019a13428b7c757760be148b1a14b579562570e0d69dd0c2bf9b" exitCode=0 Oct 08 14:43:28 crc kubenswrapper[4624]: I1008 14:43:28.071705 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-653b-account-create-p9cxk" event={"ID":"2f36a442-e41e-472f-9c08-bc93c70c4fb5","Type":"ContainerDied","Data":"836668c3044f019a13428b7c757760be148b1a14b579562570e0d69dd0c2bf9b"} Oct 08 14:43:28 crc kubenswrapper[4624]: I1008 14:43:28.088262 4624 generic.go:334] "Generic (PLEG): container finished" podID="0191b5d5-f9f7-49b3-8732-a35447adf088" containerID="8d7fd75808e5f17a6866c23ce3154e1b3a6924ce6931b4905d408e577b1c0d53" exitCode=0 Oct 08 14:43:28 crc kubenswrapper[4624]: I1008 14:43:28.088431 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-077a-account-create-sfqj9" event={"ID":"0191b5d5-f9f7-49b3-8732-a35447adf088","Type":"ContainerDied","Data":"8d7fd75808e5f17a6866c23ce3154e1b3a6924ce6931b4905d408e577b1c0d53"} Oct 08 14:43:28 crc kubenswrapper[4624]: I1008 14:43:28.096238 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 08 14:43:28 crc kubenswrapper[4624]: I1008 14:43:28.096438 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 08 14:43:28 crc kubenswrapper[4624]: I1008 14:43:28.224224 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 14:43:28 crc kubenswrapper[4624]: I1008 14:43:28.636949 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4d3c-account-create-jcsvm" Oct 08 14:43:28 crc kubenswrapper[4624]: I1008 14:43:28.780227 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krmjj\" (UniqueName: \"kubernetes.io/projected/348ff6b7-b99d-45ee-91a2-f680c29ae8f3-kube-api-access-krmjj\") pod \"348ff6b7-b99d-45ee-91a2-f680c29ae8f3\" (UID: \"348ff6b7-b99d-45ee-91a2-f680c29ae8f3\") " Oct 08 14:43:28 crc kubenswrapper[4624]: I1008 14:43:28.790872 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/348ff6b7-b99d-45ee-91a2-f680c29ae8f3-kube-api-access-krmjj" (OuterVolumeSpecName: "kube-api-access-krmjj") pod "348ff6b7-b99d-45ee-91a2-f680c29ae8f3" (UID: "348ff6b7-b99d-45ee-91a2-f680c29ae8f3"). InnerVolumeSpecName "kube-api-access-krmjj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:43:28 crc kubenswrapper[4624]: I1008 14:43:28.883605 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krmjj\" (UniqueName: \"kubernetes.io/projected/348ff6b7-b99d-45ee-91a2-f680c29ae8f3-kube-api-access-krmjj\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:29 crc kubenswrapper[4624]: I1008 14:43:29.107953 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1","Type":"ContainerStarted","Data":"9f8a3f46d043cff0e5ab7119b525e41c78843c0199950315ca437a8ad008fc47"} Oct 08 14:43:29 crc kubenswrapper[4624]: I1008 14:43:29.111936 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4d3c-account-create-jcsvm" Oct 08 14:43:29 crc kubenswrapper[4624]: I1008 14:43:29.120827 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4d3c-account-create-jcsvm" event={"ID":"348ff6b7-b99d-45ee-91a2-f680c29ae8f3","Type":"ContainerDied","Data":"3517874bedae0d0687284114606792bee23e9dbe00d46712bd14c528df04eb47"} Oct 08 14:43:29 crc kubenswrapper[4624]: I1008 14:43:29.120893 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3517874bedae0d0687284114606792bee23e9dbe00d46712bd14c528df04eb47" Oct 08 14:43:29 crc kubenswrapper[4624]: I1008 14:43:29.514264 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-653b-account-create-p9cxk" Oct 08 14:43:29 crc kubenswrapper[4624]: I1008 14:43:29.619209 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d65lk\" (UniqueName: \"kubernetes.io/projected/2f36a442-e41e-472f-9c08-bc93c70c4fb5-kube-api-access-d65lk\") pod \"2f36a442-e41e-472f-9c08-bc93c70c4fb5\" (UID: \"2f36a442-e41e-472f-9c08-bc93c70c4fb5\") " Oct 08 14:43:29 crc kubenswrapper[4624]: I1008 14:43:29.685060 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f36a442-e41e-472f-9c08-bc93c70c4fb5-kube-api-access-d65lk" (OuterVolumeSpecName: "kube-api-access-d65lk") pod "2f36a442-e41e-472f-9c08-bc93c70c4fb5" (UID: "2f36a442-e41e-472f-9c08-bc93c70c4fb5"). InnerVolumeSpecName "kube-api-access-d65lk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:43:29 crc kubenswrapper[4624]: I1008 14:43:29.698624 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-077a-account-create-sfqj9" Oct 08 14:43:29 crc kubenswrapper[4624]: I1008 14:43:29.727986 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d65lk\" (UniqueName: \"kubernetes.io/projected/2f36a442-e41e-472f-9c08-bc93c70c4fb5-kube-api-access-d65lk\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:29 crc kubenswrapper[4624]: I1008 14:43:29.829224 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57pgv\" (UniqueName: \"kubernetes.io/projected/0191b5d5-f9f7-49b3-8732-a35447adf088-kube-api-access-57pgv\") pod \"0191b5d5-f9f7-49b3-8732-a35447adf088\" (UID: \"0191b5d5-f9f7-49b3-8732-a35447adf088\") " Oct 08 14:43:29 crc kubenswrapper[4624]: I1008 14:43:29.833883 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0191b5d5-f9f7-49b3-8732-a35447adf088-kube-api-access-57pgv" (OuterVolumeSpecName: "kube-api-access-57pgv") pod "0191b5d5-f9f7-49b3-8732-a35447adf088" (UID: "0191b5d5-f9f7-49b3-8732-a35447adf088"). InnerVolumeSpecName "kube-api-access-57pgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:43:29 crc kubenswrapper[4624]: I1008 14:43:29.931287 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57pgv\" (UniqueName: \"kubernetes.io/projected/0191b5d5-f9f7-49b3-8732-a35447adf088-kube-api-access-57pgv\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:30 crc kubenswrapper[4624]: I1008 14:43:30.125118 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-653b-account-create-p9cxk" Oct 08 14:43:30 crc kubenswrapper[4624]: I1008 14:43:30.125116 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-653b-account-create-p9cxk" event={"ID":"2f36a442-e41e-472f-9c08-bc93c70c4fb5","Type":"ContainerDied","Data":"0fc4c598656fddc91bd1b3cf9b35448d21a31d6f79a85c6c4da7b6355a73962f"} Oct 08 14:43:30 crc kubenswrapper[4624]: I1008 14:43:30.125899 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fc4c598656fddc91bd1b3cf9b35448d21a31d6f79a85c6c4da7b6355a73962f" Oct 08 14:43:30 crc kubenswrapper[4624]: I1008 14:43:30.128218 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-077a-account-create-sfqj9" event={"ID":"0191b5d5-f9f7-49b3-8732-a35447adf088","Type":"ContainerDied","Data":"14fe4c4d4433461deb3151a3223f03ad528fbba5a8d3d7fcbbe6650bf9b3fe1b"} Oct 08 14:43:30 crc kubenswrapper[4624]: I1008 14:43:30.128271 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14fe4c4d4433461deb3151a3223f03ad528fbba5a8d3d7fcbbe6650bf9b3fe1b" Oct 08 14:43:30 crc kubenswrapper[4624]: I1008 14:43:30.128336 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-077a-account-create-sfqj9" Oct 08 14:43:30 crc kubenswrapper[4624]: I1008 14:43:30.141176 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1","Type":"ContainerStarted","Data":"b30b8af24070302f984745dc3e29bcfc389a6811973accb223f4e1758e890abd"} Oct 08 14:43:30 crc kubenswrapper[4624]: I1008 14:43:30.141219 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1","Type":"ContainerStarted","Data":"a1aee42dbf6b6361e32192fc5167a196a91ac16ac3c72c1f93022271f0465922"} Oct 08 14:43:30 crc kubenswrapper[4624]: I1008 14:43:30.171495 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.171472277 podStartE2EDuration="3.171472277s" podCreationTimestamp="2025-10-08 14:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:43:30.164002589 +0000 UTC m=+1235.314937666" watchObservedRunningTime="2025-10-08 14:43:30.171472277 +0000 UTC m=+1235.322407354" Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.051189 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.051531 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.167096 4624 generic.go:334] "Generic (PLEG): container finished" podID="9f6a083e-a430-43e5-b0e7-cde0b6801986" containerID="725adcb0716670ee444ee9dd9d584c172a9fb89a97fa996a1d8b72c815b9b0b1" exitCode=0 Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.167258 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f6a083e-a430-43e5-b0e7-cde0b6801986","Type":"ContainerDied","Data":"725adcb0716670ee444ee9dd9d584c172a9fb89a97fa996a1d8b72c815b9b0b1"} Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.167398 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f6a083e-a430-43e5-b0e7-cde0b6801986","Type":"ContainerDied","Data":"a5ea0f29e8ef0740f6acc039694d8892a0d1dcc6aa57286d2376d8eaf72afb4e"} Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.167409 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5ea0f29e8ef0740f6acc039694d8892a0d1dcc6aa57286d2376d8eaf72afb4e" Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.225963 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.230519 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.374413 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-scripts\") pod \"9f6a083e-a430-43e5-b0e7-cde0b6801986\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.374481 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-combined-ca-bundle\") pod \"9f6a083e-a430-43e5-b0e7-cde0b6801986\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.374525 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-ceilometer-tls-certs\") pod \"9f6a083e-a430-43e5-b0e7-cde0b6801986\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.374555 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-config-data\") pod \"9f6a083e-a430-43e5-b0e7-cde0b6801986\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.374590 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f6a083e-a430-43e5-b0e7-cde0b6801986-log-httpd\") pod \"9f6a083e-a430-43e5-b0e7-cde0b6801986\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.374991 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-sg-core-conf-yaml\") pod \"9f6a083e-a430-43e5-b0e7-cde0b6801986\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.375117 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f6a083e-a430-43e5-b0e7-cde0b6801986-run-httpd\") pod \"9f6a083e-a430-43e5-b0e7-cde0b6801986\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.375182 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsww\" (UniqueName: \"kubernetes.io/projected/9f6a083e-a430-43e5-b0e7-cde0b6801986-kube-api-access-fqsww\") pod \"9f6a083e-a430-43e5-b0e7-cde0b6801986\" (UID: \"9f6a083e-a430-43e5-b0e7-cde0b6801986\") " Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.376989 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f6a083e-a430-43e5-b0e7-cde0b6801986-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9f6a083e-a430-43e5-b0e7-cde0b6801986" (UID: "9f6a083e-a430-43e5-b0e7-cde0b6801986"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.378534 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f6a083e-a430-43e5-b0e7-cde0b6801986-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9f6a083e-a430-43e5-b0e7-cde0b6801986" (UID: "9f6a083e-a430-43e5-b0e7-cde0b6801986"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.391428 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-scripts" (OuterVolumeSpecName: "scripts") pod "9f6a083e-a430-43e5-b0e7-cde0b6801986" (UID: "9f6a083e-a430-43e5-b0e7-cde0b6801986"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.437471 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f6a083e-a430-43e5-b0e7-cde0b6801986-kube-api-access-fqsww" (OuterVolumeSpecName: "kube-api-access-fqsww") pod "9f6a083e-a430-43e5-b0e7-cde0b6801986" (UID: "9f6a083e-a430-43e5-b0e7-cde0b6801986"). InnerVolumeSpecName "kube-api-access-fqsww". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.463393 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9f6a083e-a430-43e5-b0e7-cde0b6801986" (UID: "9f6a083e-a430-43e5-b0e7-cde0b6801986"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.483514 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsww\" (UniqueName: \"kubernetes.io/projected/9f6a083e-a430-43e5-b0e7-cde0b6801986-kube-api-access-fqsww\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.483547 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.483559 4624 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f6a083e-a430-43e5-b0e7-cde0b6801986-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.483571 4624 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.483582 4624 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f6a083e-a430-43e5-b0e7-cde0b6801986-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.559767 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9f6a083e-a430-43e5-b0e7-cde0b6801986" (UID: "9f6a083e-a430-43e5-b0e7-cde0b6801986"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.587308 4624 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.605515 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-config-data" (OuterVolumeSpecName: "config-data") pod "9f6a083e-a430-43e5-b0e7-cde0b6801986" (UID: "9f6a083e-a430-43e5-b0e7-cde0b6801986"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.605795 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f6a083e-a430-43e5-b0e7-cde0b6801986" (UID: "9f6a083e-a430-43e5-b0e7-cde0b6801986"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.689235 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:32 crc kubenswrapper[4624]: I1008 14:43:32.689274 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f6a083e-a430-43e5-b0e7-cde0b6801986-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.174867 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.207713 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.219423 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.243009 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:43:33 crc kubenswrapper[4624]: E1008 14:43:33.243421 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0191b5d5-f9f7-49b3-8732-a35447adf088" containerName="mariadb-account-create" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.243442 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="0191b5d5-f9f7-49b3-8732-a35447adf088" containerName="mariadb-account-create" Oct 08 14:43:33 crc kubenswrapper[4624]: E1008 14:43:33.243491 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f36a442-e41e-472f-9c08-bc93c70c4fb5" containerName="mariadb-account-create" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.243498 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f36a442-e41e-472f-9c08-bc93c70c4fb5" containerName="mariadb-account-create" Oct 08 14:43:33 crc kubenswrapper[4624]: E1008 14:43:33.243530 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f6a083e-a430-43e5-b0e7-cde0b6801986" containerName="proxy-httpd" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.243541 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f6a083e-a430-43e5-b0e7-cde0b6801986" containerName="proxy-httpd" Oct 08 14:43:33 crc kubenswrapper[4624]: E1008 14:43:33.243552 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348ff6b7-b99d-45ee-91a2-f680c29ae8f3" containerName="mariadb-account-create" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.243558 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="348ff6b7-b99d-45ee-91a2-f680c29ae8f3" containerName="mariadb-account-create" Oct 08 14:43:33 crc kubenswrapper[4624]: E1008 14:43:33.243574 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f6a083e-a430-43e5-b0e7-cde0b6801986" containerName="sg-core" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.243580 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f6a083e-a430-43e5-b0e7-cde0b6801986" containerName="sg-core" Oct 08 14:43:33 crc kubenswrapper[4624]: E1008 14:43:33.243592 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f6a083e-a430-43e5-b0e7-cde0b6801986" containerName="ceilometer-notification-agent" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.243601 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f6a083e-a430-43e5-b0e7-cde0b6801986" containerName="ceilometer-notification-agent" Oct 08 14:43:33 crc kubenswrapper[4624]: E1008 14:43:33.243612 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f6a083e-a430-43e5-b0e7-cde0b6801986" containerName="ceilometer-central-agent" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.243618 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f6a083e-a430-43e5-b0e7-cde0b6801986" containerName="ceilometer-central-agent" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.243800 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f6a083e-a430-43e5-b0e7-cde0b6801986" containerName="sg-core" Oct 08 14:43:33 crc 
kubenswrapper[4624]: I1008 14:43:33.243816 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f6a083e-a430-43e5-b0e7-cde0b6801986" containerName="proxy-httpd" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.243823 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="0191b5d5-f9f7-49b3-8732-a35447adf088" containerName="mariadb-account-create" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.243832 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="348ff6b7-b99d-45ee-91a2-f680c29ae8f3" containerName="mariadb-account-create" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.243853 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f6a083e-a430-43e5-b0e7-cde0b6801986" containerName="ceilometer-notification-agent" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.243865 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f6a083e-a430-43e5-b0e7-cde0b6801986" containerName="ceilometer-central-agent" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.243874 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f36a442-e41e-472f-9c08-bc93c70c4fb5" containerName="mariadb-account-create" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.249902 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.260673 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.261241 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.264322 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.271463 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.301180 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd470738-ff59-431b-85f3-3b79af71bb99-run-httpd\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.301320 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-scripts\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.301347 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-config-data\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.301380 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd470738-ff59-431b-85f3-3b79af71bb99-log-httpd\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc 
kubenswrapper[4624]: I1008 14:43:33.301408 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsd57\" (UniqueName: \"kubernetes.io/projected/fd470738-ff59-431b-85f3-3b79af71bb99-kube-api-access-vsd57\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.301425 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.301455 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.301478 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.404965 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd470738-ff59-431b-85f3-3b79af71bb99-run-httpd\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.405118 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-scripts\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.405151 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-config-data\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.405197 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd470738-ff59-431b-85f3-3b79af71bb99-log-httpd\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.405241 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsd57\" (UniqueName: \"kubernetes.io/projected/fd470738-ff59-431b-85f3-3b79af71bb99-kube-api-access-vsd57\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.405268 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-combined-ca-bundle\") pod \"ceilometer-0\" 
(UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.405323 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.405359 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.405431 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd470738-ff59-431b-85f3-3b79af71bb99-run-httpd\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.405719 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd470738-ff59-431b-85f3-3b79af71bb99-log-httpd\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.412446 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.415255 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-scripts\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.416147 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-config-data\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.420022 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.420868 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.451610 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsd57\" (UniqueName: \"kubernetes.io/projected/fd470738-ff59-431b-85f3-3b79af71bb99-kube-api-access-vsd57\") pod \"ceilometer-0\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " 
pod="openstack/ceilometer-0" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.498524 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f6a083e-a430-43e5-b0e7-cde0b6801986" path="/var/lib/kubelet/pods/9f6a083e-a430-43e5-b0e7-cde0b6801986/volumes" Oct 08 14:43:33 crc kubenswrapper[4624]: I1008 14:43:33.564002 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:43:34 crc kubenswrapper[4624]: I1008 14:43:34.155027 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:43:34 crc kubenswrapper[4624]: I1008 14:43:34.200561 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 14:43:35 crc kubenswrapper[4624]: I1008 14:43:35.194398 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd470738-ff59-431b-85f3-3b79af71bb99","Type":"ContainerStarted","Data":"c11efb289121acb0fdcda926c133ea8f9c8f2742d41e3b755347fc6d0efbd1e2"} Oct 08 14:43:35 crc kubenswrapper[4624]: I1008 14:43:35.194953 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd470738-ff59-431b-85f3-3b79af71bb99","Type":"ContainerStarted","Data":"9f2d1b9bac36d0119d093bd28bb4db190c6a563b1754624c1c141d22dcd4aa23"} Oct 08 14:43:35 crc kubenswrapper[4624]: I1008 14:43:35.194965 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd470738-ff59-431b-85f3-3b79af71bb99","Type":"ContainerStarted","Data":"42124a71c9119778d985021255b48ecc4848439bfedaa48f2ad60d0203053d00"} Oct 08 14:43:35 crc kubenswrapper[4624]: I1008 14:43:35.714222 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kfw4n"] Oct 08 14:43:35 crc kubenswrapper[4624]: I1008 14:43:35.723723 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kfw4n" Oct 08 14:43:35 crc kubenswrapper[4624]: I1008 14:43:35.747887 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Oct 08 14:43:35 crc kubenswrapper[4624]: I1008 14:43:35.748136 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-26sdn" Oct 08 14:43:35 crc kubenswrapper[4624]: I1008 14:43:35.748351 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 08 14:43:35 crc kubenswrapper[4624]: I1008 14:43:35.782607 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kfw4n"] Oct 08 14:43:35 crc kubenswrapper[4624]: I1008 14:43:35.914827 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c299418-f575-43a5-9a75-9fe358a8770c-scripts\") pod \"nova-cell0-conductor-db-sync-kfw4n\" (UID: \"9c299418-f575-43a5-9a75-9fe358a8770c\") " pod="openstack/nova-cell0-conductor-db-sync-kfw4n" Oct 08 14:43:35 crc kubenswrapper[4624]: I1008 14:43:35.915040 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c299418-f575-43a5-9a75-9fe358a8770c-config-data\") pod \"nova-cell0-conductor-db-sync-kfw4n\" (UID: \"9c299418-f575-43a5-9a75-9fe358a8770c\") " pod="openstack/nova-cell0-conductor-db-sync-kfw4n" Oct 08 14:43:35 crc kubenswrapper[4624]: I1008 14:43:35.915098 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c299418-f575-43a5-9a75-9fe358a8770c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kfw4n\" (UID: \"9c299418-f575-43a5-9a75-9fe358a8770c\") " pod="openstack/nova-cell0-conductor-db-sync-kfw4n" Oct 08 14:43:35 crc kubenswrapper[4624]: I1008 14:43:35.915168 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvtgw\" (UniqueName: \"kubernetes.io/projected/9c299418-f575-43a5-9a75-9fe358a8770c-kube-api-access-cvtgw\") pod \"nova-cell0-conductor-db-sync-kfw4n\" (UID: \"9c299418-f575-43a5-9a75-9fe358a8770c\") " pod="openstack/nova-cell0-conductor-db-sync-kfw4n" Oct 08 14:43:36 crc kubenswrapper[4624]: I1008 14:43:36.017242 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c299418-f575-43a5-9a75-9fe358a8770c-config-data\") pod \"nova-cell0-conductor-db-sync-kfw4n\" (UID: \"9c299418-f575-43a5-9a75-9fe358a8770c\") " pod="openstack/nova-cell0-conductor-db-sync-kfw4n" Oct 08 14:43:36 crc kubenswrapper[4624]: I1008 14:43:36.017526 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c299418-f575-43a5-9a75-9fe358a8770c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kfw4n\" (UID: \"9c299418-f575-43a5-9a75-9fe358a8770c\") " pod="openstack/nova-cell0-conductor-db-sync-kfw4n" Oct 08 14:43:36 crc kubenswrapper[4624]: I1008 14:43:36.017608 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvtgw\" (UniqueName: \"kubernetes.io/projected/9c299418-f575-43a5-9a75-9fe358a8770c-kube-api-access-cvtgw\") pod \"nova-cell0-conductor-db-sync-kfw4n\" 
(UID: \"9c299418-f575-43a5-9a75-9fe358a8770c\") " pod="openstack/nova-cell0-conductor-db-sync-kfw4n" Oct 08 14:43:36 crc kubenswrapper[4624]: I1008 14:43:36.017862 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c299418-f575-43a5-9a75-9fe358a8770c-scripts\") pod \"nova-cell0-conductor-db-sync-kfw4n\" (UID: \"9c299418-f575-43a5-9a75-9fe358a8770c\") " pod="openstack/nova-cell0-conductor-db-sync-kfw4n" Oct 08 14:43:36 crc kubenswrapper[4624]: I1008 14:43:36.024533 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c299418-f575-43a5-9a75-9fe358a8770c-scripts\") pod \"nova-cell0-conductor-db-sync-kfw4n\" (UID: \"9c299418-f575-43a5-9a75-9fe358a8770c\") " pod="openstack/nova-cell0-conductor-db-sync-kfw4n" Oct 08 14:43:36 crc kubenswrapper[4624]: I1008 14:43:36.024686 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c299418-f575-43a5-9a75-9fe358a8770c-config-data\") pod \"nova-cell0-conductor-db-sync-kfw4n\" (UID: \"9c299418-f575-43a5-9a75-9fe358a8770c\") " pod="openstack/nova-cell0-conductor-db-sync-kfw4n" Oct 08 14:43:36 crc kubenswrapper[4624]: I1008 14:43:36.025161 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c299418-f575-43a5-9a75-9fe358a8770c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kfw4n\" (UID: \"9c299418-f575-43a5-9a75-9fe358a8770c\") " pod="openstack/nova-cell0-conductor-db-sync-kfw4n" Oct 08 14:43:36 crc kubenswrapper[4624]: I1008 14:43:36.051400 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvtgw\" (UniqueName: \"kubernetes.io/projected/9c299418-f575-43a5-9a75-9fe358a8770c-kube-api-access-cvtgw\") pod \"nova-cell0-conductor-db-sync-kfw4n\" (UID: \"9c299418-f575-43a5-9a75-9fe358a8770c\") " pod="openstack/nova-cell0-conductor-db-sync-kfw4n" Oct 08 14:43:36 crc kubenswrapper[4624]: I1008 14:43:36.065111 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kfw4n" Oct 08 14:43:36 crc kubenswrapper[4624]: I1008 14:43:36.282548 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd470738-ff59-431b-85f3-3b79af71bb99","Type":"ContainerStarted","Data":"69d1898a9ea1578aae883196487db40a000e83aca56513050db53d69d4ab8ca3"} Oct 08 14:43:36 crc kubenswrapper[4624]: I1008 14:43:36.715199 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kfw4n"] Oct 08 14:43:37 crc kubenswrapper[4624]: I1008 14:43:37.306372 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kfw4n" event={"ID":"9c299418-f575-43a5-9a75-9fe358a8770c","Type":"ContainerStarted","Data":"74a408fb7c0255e30c82c55148b178cad7dd95357a1b273bafb3764fbb554ac3"} Oct 08 14:43:37 crc kubenswrapper[4624]: I1008 14:43:37.432995 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 08 14:43:37 crc kubenswrapper[4624]: I1008 14:43:37.433070 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 08 14:43:37 crc kubenswrapper[4624]: I1008 14:43:37.491371 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 08 14:43:37 crc kubenswrapper[4624]: I1008 14:43:37.542825 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 08 14:43:38 crc kubenswrapper[4624]: I1008 14:43:38.319120 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 08 14:43:38 crc kubenswrapper[4624]: I1008 14:43:38.319698 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 08 14:43:39 crc kubenswrapper[4624]: I1008 14:43:39.367992 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd470738-ff59-431b-85f3-3b79af71bb99","Type":"ContainerStarted","Data":"75d00cf92f6e03cf6c0475eab601cf1f199b2e886139eb9c39765c7f6b892ecc"} Oct 08 14:43:39 crc kubenswrapper[4624]: I1008 14:43:39.368358 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 08 14:43:39 crc kubenswrapper[4624]: I1008 14:43:39.395590 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.48029823 podStartE2EDuration="6.395569988s" podCreationTimestamp="2025-10-08 14:43:33 +0000 UTC" firstStartedPulling="2025-10-08 14:43:34.200359499 +0000 UTC m=+1239.351294576" lastFinishedPulling="2025-10-08 14:43:38.115631247 +0000 UTC m=+1243.266566334" observedRunningTime="2025-10-08 14:43:39.395292461 +0000 UTC m=+1244.546227538" watchObservedRunningTime="2025-10-08 14:43:39.395569988 +0000 UTC m=+1244.546505075" Oct 08 14:43:40 crc kubenswrapper[4624]: I1008 14:43:40.379755 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 08 14:43:40 crc kubenswrapper[4624]: I1008 14:43:40.380164 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 08 14:43:41 crc kubenswrapper[4624]: I1008 14:43:41.255725 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 08 14:43:41 crc kubenswrapper[4624]: I1008 
14:43:41.257697 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 08 14:43:48 crc kubenswrapper[4624]: I1008 14:43:48.102937 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:43:48 crc kubenswrapper[4624]: I1008 14:43:48.103836 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd470738-ff59-431b-85f3-3b79af71bb99" containerName="ceilometer-central-agent" containerID="cri-o://9f2d1b9bac36d0119d093bd28bb4db190c6a563b1754624c1c141d22dcd4aa23" gracePeriod=30 Oct 08 14:43:48 crc kubenswrapper[4624]: I1008 14:43:48.104664 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd470738-ff59-431b-85f3-3b79af71bb99" containerName="proxy-httpd" containerID="cri-o://75d00cf92f6e03cf6c0475eab601cf1f199b2e886139eb9c39765c7f6b892ecc" gracePeriod=30 Oct 08 14:43:48 crc kubenswrapper[4624]: I1008 14:43:48.104750 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd470738-ff59-431b-85f3-3b79af71bb99" containerName="sg-core" containerID="cri-o://69d1898a9ea1578aae883196487db40a000e83aca56513050db53d69d4ab8ca3" gracePeriod=30 Oct 08 14:43:48 crc kubenswrapper[4624]: I1008 14:43:48.104806 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd470738-ff59-431b-85f3-3b79af71bb99" containerName="ceilometer-notification-agent" containerID="cri-o://c11efb289121acb0fdcda926c133ea8f9c8f2742d41e3b755347fc6d0efbd1e2" gracePeriod=30 Oct 08 14:43:48 crc kubenswrapper[4624]: I1008 14:43:48.128572 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fd470738-ff59-431b-85f3-3b79af71bb99" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.195:3000/\": EOF" Oct 08 14:43:48 crc kubenswrapper[4624]: I1008 14:43:48.468121 4624 generic.go:334] "Generic (PLEG): container finished" podID="fd470738-ff59-431b-85f3-3b79af71bb99" containerID="75d00cf92f6e03cf6c0475eab601cf1f199b2e886139eb9c39765c7f6b892ecc" exitCode=0 Oct 08 14:43:48 crc kubenswrapper[4624]: I1008 14:43:48.468159 4624 generic.go:334] "Generic (PLEG): container finished" podID="fd470738-ff59-431b-85f3-3b79af71bb99" containerID="69d1898a9ea1578aae883196487db40a000e83aca56513050db53d69d4ab8ca3" exitCode=2 Oct 08 14:43:48 crc kubenswrapper[4624]: I1008 14:43:48.468180 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd470738-ff59-431b-85f3-3b79af71bb99","Type":"ContainerDied","Data":"75d00cf92f6e03cf6c0475eab601cf1f199b2e886139eb9c39765c7f6b892ecc"} Oct 08 14:43:48 crc kubenswrapper[4624]: I1008 14:43:48.468206 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd470738-ff59-431b-85f3-3b79af71bb99","Type":"ContainerDied","Data":"69d1898a9ea1578aae883196487db40a000e83aca56513050db53d69d4ab8ca3"} Oct 08 14:43:49 crc kubenswrapper[4624]: I1008 14:43:49.482451 4624 generic.go:334] "Generic (PLEG): container finished" podID="fd470738-ff59-431b-85f3-3b79af71bb99" containerID="c11efb289121acb0fdcda926c133ea8f9c8f2742d41e3b755347fc6d0efbd1e2" exitCode=0 Oct 08 14:43:49 crc kubenswrapper[4624]: I1008 14:43:49.483235 4624 generic.go:334] "Generic (PLEG): container finished" podID="fd470738-ff59-431b-85f3-3b79af71bb99" 
containerID="9f2d1b9bac36d0119d093bd28bb4db190c6a563b1754624c1c141d22dcd4aa23" exitCode=0 Oct 08 14:43:49 crc kubenswrapper[4624]: I1008 14:43:49.482513 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd470738-ff59-431b-85f3-3b79af71bb99","Type":"ContainerDied","Data":"c11efb289121acb0fdcda926c133ea8f9c8f2742d41e3b755347fc6d0efbd1e2"} Oct 08 14:43:49 crc kubenswrapper[4624]: I1008 14:43:49.483388 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd470738-ff59-431b-85f3-3b79af71bb99","Type":"ContainerDied","Data":"9f2d1b9bac36d0119d093bd28bb4db190c6a563b1754624c1c141d22dcd4aa23"} Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.096452 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.153015 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd470738-ff59-431b-85f3-3b79af71bb99-log-httpd\") pod \"fd470738-ff59-431b-85f3-3b79af71bb99\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.153093 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-scripts\") pod \"fd470738-ff59-431b-85f3-3b79af71bb99\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.153195 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-ceilometer-tls-certs\") pod \"fd470738-ff59-431b-85f3-3b79af71bb99\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.153299 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsd57\" (UniqueName: \"kubernetes.io/projected/fd470738-ff59-431b-85f3-3b79af71bb99-kube-api-access-vsd57\") pod \"fd470738-ff59-431b-85f3-3b79af71bb99\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.153430 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-config-data\") pod \"fd470738-ff59-431b-85f3-3b79af71bb99\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.153461 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-sg-core-conf-yaml\") pod \"fd470738-ff59-431b-85f3-3b79af71bb99\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.153480 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd470738-ff59-431b-85f3-3b79af71bb99-run-httpd\") pod \"fd470738-ff59-431b-85f3-3b79af71bb99\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.153515 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-combined-ca-bundle\") pod \"fd470738-ff59-431b-85f3-3b79af71bb99\" (UID: \"fd470738-ff59-431b-85f3-3b79af71bb99\") " Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.155180 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd470738-ff59-431b-85f3-3b79af71bb99-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fd470738-ff59-431b-85f3-3b79af71bb99" (UID: "fd470738-ff59-431b-85f3-3b79af71bb99"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.155477 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd470738-ff59-431b-85f3-3b79af71bb99-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fd470738-ff59-431b-85f3-3b79af71bb99" (UID: "fd470738-ff59-431b-85f3-3b79af71bb99"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.167011 4624 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd470738-ff59-431b-85f3-3b79af71bb99-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.167041 4624 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd470738-ff59-431b-85f3-3b79af71bb99-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.168625 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd470738-ff59-431b-85f3-3b79af71bb99-kube-api-access-vsd57" (OuterVolumeSpecName: "kube-api-access-vsd57") pod "fd470738-ff59-431b-85f3-3b79af71bb99" (UID: "fd470738-ff59-431b-85f3-3b79af71bb99"). InnerVolumeSpecName "kube-api-access-vsd57". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.214945 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-scripts" (OuterVolumeSpecName: "scripts") pod "fd470738-ff59-431b-85f3-3b79af71bb99" (UID: "fd470738-ff59-431b-85f3-3b79af71bb99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.241059 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fd470738-ff59-431b-85f3-3b79af71bb99" (UID: "fd470738-ff59-431b-85f3-3b79af71bb99"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.270430 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsd57\" (UniqueName: \"kubernetes.io/projected/fd470738-ff59-431b-85f3-3b79af71bb99-kube-api-access-vsd57\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.270458 4624 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.270468 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.282370 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "fd470738-ff59-431b-85f3-3b79af71bb99" (UID: "fd470738-ff59-431b-85f3-3b79af71bb99"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.292885 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd470738-ff59-431b-85f3-3b79af71bb99" (UID: "fd470738-ff59-431b-85f3-3b79af71bb99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.303682 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-config-data" (OuterVolumeSpecName: "config-data") pod "fd470738-ff59-431b-85f3-3b79af71bb99" (UID: "fd470738-ff59-431b-85f3-3b79af71bb99"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.372057 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.372089 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.372099 4624 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd470738-ff59-431b-85f3-3b79af71bb99-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.493556 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kfw4n" event={"ID":"9c299418-f575-43a5-9a75-9fe358a8770c","Type":"ContainerStarted","Data":"05cb4dbb677de3c19c6c631e66dde8b2e5fbfd3a072fc16231bd274e016aef9b"} Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.498067 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd470738-ff59-431b-85f3-3b79af71bb99","Type":"ContainerDied","Data":"42124a71c9119778d985021255b48ecc4848439bfedaa48f2ad60d0203053d00"} Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.498105 4624 scope.go:117] "RemoveContainer" containerID="75d00cf92f6e03cf6c0475eab601cf1f199b2e886139eb9c39765c7f6b892ecc" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.498235 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.513880 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-kfw4n" podStartSLOduration=2.589435082 podStartE2EDuration="15.513853663s" podCreationTimestamp="2025-10-08 14:43:35 +0000 UTC" firstStartedPulling="2025-10-08 14:43:36.721996379 +0000 UTC m=+1241.872931456" lastFinishedPulling="2025-10-08 14:43:49.64641497 +0000 UTC m=+1254.797350037" observedRunningTime="2025-10-08 14:43:50.512298944 +0000 UTC m=+1255.663234021" watchObservedRunningTime="2025-10-08 14:43:50.513853663 +0000 UTC m=+1255.664788740" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.523188 4624 scope.go:117] "RemoveContainer" containerID="69d1898a9ea1578aae883196487db40a000e83aca56513050db53d69d4ab8ca3" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.541905 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.552431 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.558374 4624 scope.go:117] "RemoveContainer" containerID="c11efb289121acb0fdcda926c133ea8f9c8f2742d41e3b755347fc6d0efbd1e2" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.608881 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:43:50 crc kubenswrapper[4624]: E1008 14:43:50.609441 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd470738-ff59-431b-85f3-3b79af71bb99" containerName="ceilometer-notification-agent" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.609464 4624 
state_mem.go:107] "Deleted CPUSet assignment" podUID="fd470738-ff59-431b-85f3-3b79af71bb99" containerName="ceilometer-notification-agent" Oct 08 14:43:50 crc kubenswrapper[4624]: E1008 14:43:50.609488 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd470738-ff59-431b-85f3-3b79af71bb99" containerName="proxy-httpd" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.609496 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd470738-ff59-431b-85f3-3b79af71bb99" containerName="proxy-httpd" Oct 08 14:43:50 crc kubenswrapper[4624]: E1008 14:43:50.609511 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd470738-ff59-431b-85f3-3b79af71bb99" containerName="ceilometer-central-agent" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.609519 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd470738-ff59-431b-85f3-3b79af71bb99" containerName="ceilometer-central-agent" Oct 08 14:43:50 crc kubenswrapper[4624]: E1008 14:43:50.609549 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd470738-ff59-431b-85f3-3b79af71bb99" containerName="sg-core" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.609557 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd470738-ff59-431b-85f3-3b79af71bb99" containerName="sg-core" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.609808 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd470738-ff59-431b-85f3-3b79af71bb99" containerName="ceilometer-central-agent" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.609835 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd470738-ff59-431b-85f3-3b79af71bb99" containerName="ceilometer-notification-agent" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.609843 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd470738-ff59-431b-85f3-3b79af71bb99" containerName="sg-core" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.609854 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd470738-ff59-431b-85f3-3b79af71bb99" containerName="proxy-httpd" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.611813 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.617790 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.618014 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.622348 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.642065 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.675075 4624 scope.go:117] "RemoveContainer" containerID="9f2d1b9bac36d0119d093bd28bb4db190c6a563b1754624c1c141d22dcd4aa23" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.679581 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-run-httpd\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.679650 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-scripts\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.679679 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vcmz\" (UniqueName: \"kubernetes.io/projected/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-kube-api-access-6vcmz\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.679699 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.679770 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-config-data\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.679813 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.679842 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-log-httpd\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 
14:43:50.679876 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.781743 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.781812 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-log-httpd\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.781855 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.781903 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-scripts\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.781928 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-run-httpd\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.781958 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vcmz\" (UniqueName: \"kubernetes.io/projected/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-kube-api-access-6vcmz\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.781983 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.782074 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-config-data\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.783718 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-run-httpd\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: 
I1008 14:43:50.787846 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-log-httpd\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.788259 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-scripts\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.788764 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-config-data\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.792926 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.797920 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.801351 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.803736 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vcmz\" (UniqueName: \"kubernetes.io/projected/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-kube-api-access-6vcmz\") pod \"ceilometer-0\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " pod="openstack/ceilometer-0" Oct 08 14:43:50 crc kubenswrapper[4624]: I1008 14:43:50.954081 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:43:51 crc kubenswrapper[4624]: I1008 14:43:51.425586 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:43:51 crc kubenswrapper[4624]: W1008 14:43:51.429158 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66eac5cd_4f0a_44cf_b7cb_87eba2a6ba70.slice/crio-c8d00fc19fbff6f1e187a86e260c50286bc3c6ce37c19d9ff1d8bbe973c91912 WatchSource:0}: Error finding container c8d00fc19fbff6f1e187a86e260c50286bc3c6ce37c19d9ff1d8bbe973c91912: Status 404 returned error can't find the container with id c8d00fc19fbff6f1e187a86e260c50286bc3c6ce37c19d9ff1d8bbe973c91912 Oct 08 14:43:51 crc kubenswrapper[4624]: I1008 14:43:51.476688 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd470738-ff59-431b-85f3-3b79af71bb99" path="/var/lib/kubelet/pods/fd470738-ff59-431b-85f3-3b79af71bb99/volumes" Oct 08 14:43:51 crc kubenswrapper[4624]: I1008 14:43:51.510688 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70","Type":"ContainerStarted","Data":"c8d00fc19fbff6f1e187a86e260c50286bc3c6ce37c19d9ff1d8bbe973c91912"} Oct 08 14:43:52 crc kubenswrapper[4624]: I1008 14:43:52.540136 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70","Type":"ContainerStarted","Data":"21fd734c97324a588d63a260b011444f31cf77724bd1f9b9ef909ee4bfbfc84b"} Oct 08 14:43:52 crc kubenswrapper[4624]: I1008 14:43:52.540755 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70","Type":"ContainerStarted","Data":"a350497398efe8c88f768c14d0d905db2b366c9f08bb542e911590a213593fbc"} Oct 08 14:43:53 crc kubenswrapper[4624]: I1008 14:43:53.551119 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70","Type":"ContainerStarted","Data":"7f33f93eacae21a145a5a408d0cb182b05aa4359644b09f35bc7c1a3ce2b14f3"} Oct 08 14:43:54 crc kubenswrapper[4624]: I1008 14:43:54.562816 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70","Type":"ContainerStarted","Data":"427e256e9c0e718eb40dd73039ce992125367541e12bf9a2f9ecb9a86cf04c81"} Oct 08 14:43:54 crc kubenswrapper[4624]: I1008 14:43:54.564709 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 08 14:43:54 crc kubenswrapper[4624]: I1008 14:43:54.590403 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.990577966 podStartE2EDuration="4.590380176s" podCreationTimestamp="2025-10-08 14:43:50 +0000 UTC" firstStartedPulling="2025-10-08 14:43:51.431783341 +0000 UTC m=+1256.582718418" lastFinishedPulling="2025-10-08 14:43:54.031585551 +0000 UTC m=+1259.182520628" observedRunningTime="2025-10-08 14:43:54.583596915 +0000 UTC m=+1259.734531992" watchObservedRunningTime="2025-10-08 14:43:54.590380176 +0000 UTC m=+1259.741315243" Oct 08 14:43:55 crc kubenswrapper[4624]: I1008 14:43:55.579878 4624 generic.go:334] "Generic (PLEG): container finished" podID="4b203558-1aea-4672-871f-d2dca324a585" containerID="2adb37d7e1e3ae4257d4557e08b6dab44a21981eeeec3209e1f7922febff5e54" exitCode=137 Oct 08 14:43:55 crc 
kubenswrapper[4624]: I1008 14:43:55.579934 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6cd65c74-7vqb5" event={"ID":"4b203558-1aea-4672-871f-d2dca324a585","Type":"ContainerDied","Data":"2adb37d7e1e3ae4257d4557e08b6dab44a21981eeeec3209e1f7922febff5e54"} Oct 08 14:43:55 crc kubenswrapper[4624]: I1008 14:43:55.580331 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6cd65c74-7vqb5" event={"ID":"4b203558-1aea-4672-871f-d2dca324a585","Type":"ContainerStarted","Data":"d14385dae5a640de6514fa96bdea43d5442497114f44ba80cb5f671931df8294"} Oct 08 14:43:55 crc kubenswrapper[4624]: I1008 14:43:55.580381 4624 scope.go:117] "RemoveContainer" containerID="6ec33217cf79e738ccd4fb8b5dfdb26af9d5223b52e13ca9693c35de2207761b" Oct 08 14:43:55 crc kubenswrapper[4624]: I1008 14:43:55.584484 4624 generic.go:334] "Generic (PLEG): container finished" podID="378be2ad-3335-409f-b2eb-60b3997ed4f8" containerID="da715b96eb2640f5105e03011f83947a4366c774c3377e422913cdf2a5cee143" exitCode=137 Oct 08 14:43:55 crc kubenswrapper[4624]: I1008 14:43:55.584554 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67f45f8444-g8bbs" event={"ID":"378be2ad-3335-409f-b2eb-60b3997ed4f8","Type":"ContainerDied","Data":"da715b96eb2640f5105e03011f83947a4366c774c3377e422913cdf2a5cee143"} Oct 08 14:43:55 crc kubenswrapper[4624]: I1008 14:43:55.584589 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67f45f8444-g8bbs" event={"ID":"378be2ad-3335-409f-b2eb-60b3997ed4f8","Type":"ContainerStarted","Data":"f8e907db545f6d503bd46e3c1cc283c3b4ab887adbbcbe0da28a7e90c621af36"} Oct 08 14:43:55 crc kubenswrapper[4624]: I1008 14:43:55.822775 4624 scope.go:117] "RemoveContainer" containerID="f9f47f6fddd5455b79c7a8df01c2e3793521a20e5f22d936d7fe100acfb88684" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.330297 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7ccc99c6fd-fx7s5" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.336676 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-bbd685659-f7cgg" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.478145 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-combined-ca-bundle\") pod \"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce\" (UID: \"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce\") " Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.478202 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-config-data-custom\") pod \"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80\" (UID: \"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80\") " Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.478309 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-config-data\") pod \"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce\" (UID: \"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce\") " Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.478329 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-config-data\") pod \"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80\" (UID: \"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80\") " Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.478348 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8lj7\" (UniqueName: \"kubernetes.io/projected/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-kube-api-access-b8lj7\") pod \"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce\" (UID: \"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce\") " Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.478373 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-combined-ca-bundle\") pod \"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80\" (UID: \"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80\") " Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.478455 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-config-data-custom\") pod \"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce\" (UID: \"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce\") " Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.478516 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9954x\" (UniqueName: \"kubernetes.io/projected/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-kube-api-access-9954x\") pod \"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80\" (UID: \"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80\") " Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.486527 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-kube-api-access-b8lj7" (OuterVolumeSpecName: "kube-api-access-b8lj7") pod "e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce" (UID: "e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce"). InnerVolumeSpecName "kube-api-access-b8lj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.488417 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80" (UID: "bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.488623 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-kube-api-access-9954x" (OuterVolumeSpecName: "kube-api-access-9954x") pod "bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80" (UID: "bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80"). InnerVolumeSpecName "kube-api-access-9954x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.493199 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce" (UID: "e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.523606 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce" (UID: "e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.526858 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80" (UID: "bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.555398 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-config-data" (OuterVolumeSpecName: "config-data") pod "e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce" (UID: "e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.557670 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-config-data" (OuterVolumeSpecName: "config-data") pod "bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80" (UID: "bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.580902 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.580940 4624 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.580949 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.580958 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.580966 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8lj7\" (UniqueName: \"kubernetes.io/projected/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-kube-api-access-b8lj7\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.580979 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.580986 4624 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.580995 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9954x\" (UniqueName: \"kubernetes.io/projected/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80-kube-api-access-9954x\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.639921 4624 generic.go:334] "Generic (PLEG): container finished" podID="e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce" containerID="dc34541c4b7a650c6ba8f5fcd97f00da04a212bf703cc5e1fb3dd27701cff9f0" exitCode=137 Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.639987 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-bbd685659-f7cgg" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.640005 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-bbd685659-f7cgg" event={"ID":"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce","Type":"ContainerDied","Data":"dc34541c4b7a650c6ba8f5fcd97f00da04a212bf703cc5e1fb3dd27701cff9f0"} Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.640494 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-bbd685659-f7cgg" event={"ID":"e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce","Type":"ContainerDied","Data":"bc4beffb46ea49d66b88aebd49e45f5cfae3d80eceed9d5b495d5ac6f35b37e3"} Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.640516 4624 scope.go:117] "RemoveContainer" containerID="dc34541c4b7a650c6ba8f5fcd97f00da04a212bf703cc5e1fb3dd27701cff9f0" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.642247 4624 generic.go:334] "Generic (PLEG): container finished" podID="bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80" containerID="41f13ee66d49ac4b081eb7ec0d2a83fe70dce222c95cd44c98214a4d4d7e7b16" exitCode=137 Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.642283 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7ccc99c6fd-fx7s5" event={"ID":"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80","Type":"ContainerDied","Data":"41f13ee66d49ac4b081eb7ec0d2a83fe70dce222c95cd44c98214a4d4d7e7b16"} Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.642300 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7ccc99c6fd-fx7s5" event={"ID":"bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80","Type":"ContainerDied","Data":"f89dadfb7b5547d9863418f0ed6750b37b871ced2d7a5ddbf1869cea60cb1f29"} Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.642350 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7ccc99c6fd-fx7s5" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.691075 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7ccc99c6fd-fx7s5"] Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.694586 4624 scope.go:117] "RemoveContainer" containerID="dc34541c4b7a650c6ba8f5fcd97f00da04a212bf703cc5e1fb3dd27701cff9f0" Oct 08 14:44:00 crc kubenswrapper[4624]: E1008 14:44:00.696766 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc34541c4b7a650c6ba8f5fcd97f00da04a212bf703cc5e1fb3dd27701cff9f0\": container with ID starting with dc34541c4b7a650c6ba8f5fcd97f00da04a212bf703cc5e1fb3dd27701cff9f0 not found: ID does not exist" containerID="dc34541c4b7a650c6ba8f5fcd97f00da04a212bf703cc5e1fb3dd27701cff9f0" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.696806 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc34541c4b7a650c6ba8f5fcd97f00da04a212bf703cc5e1fb3dd27701cff9f0"} err="failed to get container status \"dc34541c4b7a650c6ba8f5fcd97f00da04a212bf703cc5e1fb3dd27701cff9f0\": rpc error: code = NotFound desc = could not find container \"dc34541c4b7a650c6ba8f5fcd97f00da04a212bf703cc5e1fb3dd27701cff9f0\": container with ID starting with dc34541c4b7a650c6ba8f5fcd97f00da04a212bf703cc5e1fb3dd27701cff9f0 not found: ID does not exist" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.696828 4624 scope.go:117] "RemoveContainer" containerID="41f13ee66d49ac4b081eb7ec0d2a83fe70dce222c95cd44c98214a4d4d7e7b16" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.702307 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7ccc99c6fd-fx7s5"] Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.711218 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-bbd685659-f7cgg"] Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.720465 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-bbd685659-f7cgg"] Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.759316 4624 scope.go:117] "RemoveContainer" containerID="41f13ee66d49ac4b081eb7ec0d2a83fe70dce222c95cd44c98214a4d4d7e7b16" Oct 08 14:44:00 crc kubenswrapper[4624]: E1008 14:44:00.760134 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41f13ee66d49ac4b081eb7ec0d2a83fe70dce222c95cd44c98214a4d4d7e7b16\": container with ID starting with 41f13ee66d49ac4b081eb7ec0d2a83fe70dce222c95cd44c98214a4d4d7e7b16 not found: ID does not exist" containerID="41f13ee66d49ac4b081eb7ec0d2a83fe70dce222c95cd44c98214a4d4d7e7b16" Oct 08 14:44:00 crc kubenswrapper[4624]: I1008 14:44:00.760168 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41f13ee66d49ac4b081eb7ec0d2a83fe70dce222c95cd44c98214a4d4d7e7b16"} err="failed to get container status \"41f13ee66d49ac4b081eb7ec0d2a83fe70dce222c95cd44c98214a4d4d7e7b16\": rpc error: code = NotFound desc = could not find container \"41f13ee66d49ac4b081eb7ec0d2a83fe70dce222c95cd44c98214a4d4d7e7b16\": container with ID starting with 41f13ee66d49ac4b081eb7ec0d2a83fe70dce222c95cd44c98214a4d4d7e7b16 not found: ID does not exist" Oct 08 14:44:01 crc kubenswrapper[4624]: I1008 14:44:01.477777 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80" 
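
The "ContainerStatus from runtime service failed ... NotFound" errors and the "DeleteContainer returned error" lines that follow them are a benign cleanup race: the kubelet re-queries a container it has just removed, and NotFound simply confirms the removal already happened. A minimal sketch of the same CRI query, assuming CRI-O's default socket path (not stated in this log) and the k8s.io/cri-api Go client; the container ID is the one from the entries above:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default CRI-O socket; adjust for other runtimes.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	_, err = client.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{
		ContainerId: "dc34541c4b7a650c6ba8f5fcd97f00da04a212bf703cc5e1fb3dd27701cff9f0",
	})
	if status.Code(err) == codes.NotFound {
		// Same condition the kubelet hits above: the container is already
		// gone, so deletion is complete rather than failed.
		fmt.Println("container already removed; nothing left to delete")
	}
}
```
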
path="/var/lib/kubelet/pods/bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80/volumes" Oct 08 14:44:01 crc kubenswrapper[4624]: I1008 14:44:01.479404 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce" path="/var/lib/kubelet/pods/e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce/volumes" Oct 08 14:44:04 crc kubenswrapper[4624]: I1008 14:44:04.611893 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:44:04 crc kubenswrapper[4624]: I1008 14:44:04.612776 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:44:04 crc kubenswrapper[4624]: I1008 14:44:04.612969 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f6cd65c74-7vqb5" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Oct 08 14:44:04 crc kubenswrapper[4624]: I1008 14:44:04.719742 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:44:04 crc kubenswrapper[4624]: I1008 14:44:04.719816 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:44:04 crc kubenswrapper[4624]: I1008 14:44:04.722323 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67f45f8444-g8bbs" podUID="378be2ad-3335-409f-b2eb-60b3997ed4f8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Oct 08 14:44:05 crc kubenswrapper[4624]: I1008 14:44:05.700468 4624 generic.go:334] "Generic (PLEG): container finished" podID="9c299418-f575-43a5-9a75-9fe358a8770c" containerID="05cb4dbb677de3c19c6c631e66dde8b2e5fbfd3a072fc16231bd274e016aef9b" exitCode=0 Oct 08 14:44:05 crc kubenswrapper[4624]: I1008 14:44:05.700513 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kfw4n" event={"ID":"9c299418-f575-43a5-9a75-9fe358a8770c","Type":"ContainerDied","Data":"05cb4dbb677de3c19c6c631e66dde8b2e5fbfd3a072fc16231bd274e016aef9b"} Oct 08 14:44:06 crc kubenswrapper[4624]: I1008 14:44:06.114258 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:44:06 crc kubenswrapper[4624]: I1008 14:44:06.114888 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" containerName="ceilometer-central-agent" containerID="cri-o://a350497398efe8c88f768c14d0d905db2b366c9f08bb542e911590a213593fbc" gracePeriod=30 Oct 08 14:44:06 crc kubenswrapper[4624]: I1008 14:44:06.115034 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" containerName="proxy-httpd" containerID="cri-o://427e256e9c0e718eb40dd73039ce992125367541e12bf9a2f9ecb9a86cf04c81" gracePeriod=30 Oct 08 14:44:06 crc kubenswrapper[4624]: I1008 14:44:06.115088 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" containerName="sg-core" containerID="cri-o://7f33f93eacae21a145a5a408d0cb182b05aa4359644b09f35bc7c1a3ce2b14f3" 
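
The two horizon "Probe failed" entries above are startup probes being refused while the dashboard is still coming up; identical failures recur ten seconds later (14:44:14, further below), so periodSeconds=10 is a reasonable inference. Reconstructed as an API object from the logged request (the failureThreshold is purely illustrative, not taken from this log):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				// Path, port, and scheme from the logged failure output.
				Path:   "/dashboard/auth/login/?next=/dashboard/",
				Port:   intstr.FromInt(8443),
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		PeriodSeconds:    10, // inferred from the 14:44:04 -> 14:44:14 spacing
		FailureThreshold: 30, // illustrative only
	}
	fmt.Printf("%+v\n", probe)
}
```
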
gracePeriod=30 Oct 08 14:44:06 crc kubenswrapper[4624]: I1008 14:44:06.115125 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" containerName="ceilometer-notification-agent" containerID="cri-o://21fd734c97324a588d63a260b011444f31cf77724bd1f9b9ef909ee4bfbfc84b" gracePeriod=30 Oct 08 14:44:06 crc kubenswrapper[4624]: I1008 14:44:06.130259 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.197:3000/\": read tcp 10.217.0.2:48092->10.217.0.197:3000: read: connection reset by peer" Oct 08 14:44:06 crc kubenswrapper[4624]: I1008 14:44:06.723040 4624 generic.go:334] "Generic (PLEG): container finished" podID="66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" containerID="427e256e9c0e718eb40dd73039ce992125367541e12bf9a2f9ecb9a86cf04c81" exitCode=0 Oct 08 14:44:06 crc kubenswrapper[4624]: I1008 14:44:06.723075 4624 generic.go:334] "Generic (PLEG): container finished" podID="66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" containerID="7f33f93eacae21a145a5a408d0cb182b05aa4359644b09f35bc7c1a3ce2b14f3" exitCode=2 Oct 08 14:44:06 crc kubenswrapper[4624]: I1008 14:44:06.723082 4624 generic.go:334] "Generic (PLEG): container finished" podID="66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" containerID="a350497398efe8c88f768c14d0d905db2b366c9f08bb542e911590a213593fbc" exitCode=0 Oct 08 14:44:06 crc kubenswrapper[4624]: I1008 14:44:06.723114 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70","Type":"ContainerDied","Data":"427e256e9c0e718eb40dd73039ce992125367541e12bf9a2f9ecb9a86cf04c81"} Oct 08 14:44:06 crc kubenswrapper[4624]: I1008 14:44:06.723182 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70","Type":"ContainerDied","Data":"7f33f93eacae21a145a5a408d0cb182b05aa4359644b09f35bc7c1a3ce2b14f3"} Oct 08 14:44:06 crc kubenswrapper[4624]: I1008 14:44:06.723197 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70","Type":"ContainerDied","Data":"a350497398efe8c88f768c14d0d905db2b366c9f08bb542e911590a213593fbc"} Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.046816 4624 util.go:48] "No ready sandbox for pod can be found. 
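
The four "Killing container with a grace period" entries above, and the ContainerDied events arriving within the same second, show the normal two-phase stop: SIGTERM first, SIGKILL only if the 30-second grace period expires. Here proxy-httpd and ceilometer-central-agent exited 0 and sg-core exited 2, so all three stopped well before the deadline. The same pattern as a self-contained Go sketch (the pattern only, not kubelet's implementation):

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// killWithGrace sends SIGTERM, waits up to grace, then escalates to SIGKILL.
func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
	_ = cmd.Process.Signal(syscall.SIGTERM)
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		fmt.Println("stopped within grace period:", err) // like exit codes 0/2 above
	case <-time.After(grace):
		_ = cmd.Process.Kill() // would be reported as 137, like heat above
		<-done
		fmt.Println("hard-killed after grace period")
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	killWithGrace(cmd, 2*time.Second)
}
```
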
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kfw4n" Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.096158 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvtgw\" (UniqueName: \"kubernetes.io/projected/9c299418-f575-43a5-9a75-9fe358a8770c-kube-api-access-cvtgw\") pod \"9c299418-f575-43a5-9a75-9fe358a8770c\" (UID: \"9c299418-f575-43a5-9a75-9fe358a8770c\") " Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.096340 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c299418-f575-43a5-9a75-9fe358a8770c-config-data\") pod \"9c299418-f575-43a5-9a75-9fe358a8770c\" (UID: \"9c299418-f575-43a5-9a75-9fe358a8770c\") " Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.096406 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c299418-f575-43a5-9a75-9fe358a8770c-combined-ca-bundle\") pod \"9c299418-f575-43a5-9a75-9fe358a8770c\" (UID: \"9c299418-f575-43a5-9a75-9fe358a8770c\") " Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.096462 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c299418-f575-43a5-9a75-9fe358a8770c-scripts\") pod \"9c299418-f575-43a5-9a75-9fe358a8770c\" (UID: \"9c299418-f575-43a5-9a75-9fe358a8770c\") " Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.103853 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c299418-f575-43a5-9a75-9fe358a8770c-scripts" (OuterVolumeSpecName: "scripts") pod "9c299418-f575-43a5-9a75-9fe358a8770c" (UID: "9c299418-f575-43a5-9a75-9fe358a8770c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.113351 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c299418-f575-43a5-9a75-9fe358a8770c-kube-api-access-cvtgw" (OuterVolumeSpecName: "kube-api-access-cvtgw") pod "9c299418-f575-43a5-9a75-9fe358a8770c" (UID: "9c299418-f575-43a5-9a75-9fe358a8770c"). InnerVolumeSpecName "kube-api-access-cvtgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.144311 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c299418-f575-43a5-9a75-9fe358a8770c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c299418-f575-43a5-9a75-9fe358a8770c" (UID: "9c299418-f575-43a5-9a75-9fe358a8770c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.150028 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c299418-f575-43a5-9a75-9fe358a8770c-config-data" (OuterVolumeSpecName: "config-data") pod "9c299418-f575-43a5-9a75-9fe358a8770c" (UID: "9c299418-f575-43a5-9a75-9fe358a8770c"). InnerVolumeSpecName "config-data". 
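
The entries above are the reconciler's fixed teardown pipeline for a deleted pod: "UnmountVolume started" per volume, then "UnmountVolume.TearDown succeeded" per plugin, followed just below by the "Volume detached" markers, and eventually the "Cleaned up orphaned pod volumes dir" message seen elsewhere in this log once /var/lib/kubelet/pods/<uid>/volumes is empty. A small sketch for inspecting that directory on the node, using the pod UID from these entries:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Pod UID taken from the teardown entries above; run this on the node.
	dir := filepath.Join("/var/lib/kubelet/pods",
		"9c299418-f575-43a5-9a75-9fe358a8770c", "volumes")

	entries, err := os.ReadDir(dir)
	if os.IsNotExist(err) {
		fmt.Println("volumes dir removed: cleanup finished")
		return
	}
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		// Anything still listed is a plugin dir (e.g. kubernetes.io~secret)
		// the reconciler has not torn down yet.
		fmt.Println("remaining:", e.Name())
	}
}
```
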
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.198230 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c299418-f575-43a5-9a75-9fe358a8770c-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.198262 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvtgw\" (UniqueName: \"kubernetes.io/projected/9c299418-f575-43a5-9a75-9fe358a8770c-kube-api-access-cvtgw\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.198274 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c299418-f575-43a5-9a75-9fe358a8770c-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.198285 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c299418-f575-43a5-9a75-9fe358a8770c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.732240 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kfw4n" event={"ID":"9c299418-f575-43a5-9a75-9fe358a8770c","Type":"ContainerDied","Data":"74a408fb7c0255e30c82c55148b178cad7dd95357a1b273bafb3764fbb554ac3"} Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.732517 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74a408fb7c0255e30c82c55148b178cad7dd95357a1b273bafb3764fbb554ac3" Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.732570 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kfw4n" Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.903528 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 08 14:44:07 crc kubenswrapper[4624]: E1008 14:44:07.904702 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80" containerName="heat-api" Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.904837 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80" containerName="heat-api" Oct 08 14:44:07 crc kubenswrapper[4624]: E1008 14:44:07.904978 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce" containerName="heat-cfnapi" Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.905057 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce" containerName="heat-cfnapi" Oct 08 14:44:07 crc kubenswrapper[4624]: E1008 14:44:07.905183 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c299418-f575-43a5-9a75-9fe358a8770c" containerName="nova-cell0-conductor-db-sync" Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.905317 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c299418-f575-43a5-9a75-9fe358a8770c" containerName="nova-cell0-conductor-db-sync" Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.906001 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="bca6d5cb-cd22-4c21-a1d3-7ad4cadf6e80" containerName="heat-api" Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.906290 4624 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e2fd4815-5e5f-4f1c-bce6-72a6fd0b18ce" containerName="heat-cfnapi" Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.906448 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c299418-f575-43a5-9a75-9fe358a8770c" containerName="nova-cell0-conductor-db-sync" Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.911113 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.919758 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.919937 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-26sdn" Oct 08 14:44:07 crc kubenswrapper[4624]: I1008 14:44:07.924292 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 08 14:44:08 crc kubenswrapper[4624]: I1008 14:44:08.034346 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd75368d-b8a0-41e9-8e92-f5bcc7e91fb6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cd75368d-b8a0-41e9-8e92-f5bcc7e91fb6\") " pod="openstack/nova-cell0-conductor-0" Oct 08 14:44:08 crc kubenswrapper[4624]: I1008 14:44:08.034584 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwz5n\" (UniqueName: \"kubernetes.io/projected/cd75368d-b8a0-41e9-8e92-f5bcc7e91fb6-kube-api-access-kwz5n\") pod \"nova-cell0-conductor-0\" (UID: \"cd75368d-b8a0-41e9-8e92-f5bcc7e91fb6\") " pod="openstack/nova-cell0-conductor-0" Oct 08 14:44:08 crc kubenswrapper[4624]: I1008 14:44:08.034710 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd75368d-b8a0-41e9-8e92-f5bcc7e91fb6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cd75368d-b8a0-41e9-8e92-f5bcc7e91fb6\") " pod="openstack/nova-cell0-conductor-0" Oct 08 14:44:08 crc kubenswrapper[4624]: I1008 14:44:08.137204 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwz5n\" (UniqueName: \"kubernetes.io/projected/cd75368d-b8a0-41e9-8e92-f5bcc7e91fb6-kube-api-access-kwz5n\") pod \"nova-cell0-conductor-0\" (UID: \"cd75368d-b8a0-41e9-8e92-f5bcc7e91fb6\") " pod="openstack/nova-cell0-conductor-0" Oct 08 14:44:08 crc kubenswrapper[4624]: I1008 14:44:08.137314 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd75368d-b8a0-41e9-8e92-f5bcc7e91fb6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cd75368d-b8a0-41e9-8e92-f5bcc7e91fb6\") " pod="openstack/nova-cell0-conductor-0" Oct 08 14:44:08 crc kubenswrapper[4624]: I1008 14:44:08.137362 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd75368d-b8a0-41e9-8e92-f5bcc7e91fb6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cd75368d-b8a0-41e9-8e92-f5bcc7e91fb6\") " pod="openstack/nova-cell0-conductor-0" Oct 08 14:44:08 crc kubenswrapper[4624]: I1008 14:44:08.143434 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd75368d-b8a0-41e9-8e92-f5bcc7e91fb6-config-data\") 
pod \"nova-cell0-conductor-0\" (UID: \"cd75368d-b8a0-41e9-8e92-f5bcc7e91fb6\") " pod="openstack/nova-cell0-conductor-0" Oct 08 14:44:08 crc kubenswrapper[4624]: I1008 14:44:08.160140 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd75368d-b8a0-41e9-8e92-f5bcc7e91fb6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cd75368d-b8a0-41e9-8e92-f5bcc7e91fb6\") " pod="openstack/nova-cell0-conductor-0" Oct 08 14:44:08 crc kubenswrapper[4624]: I1008 14:44:08.168105 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwz5n\" (UniqueName: \"kubernetes.io/projected/cd75368d-b8a0-41e9-8e92-f5bcc7e91fb6-kube-api-access-kwz5n\") pod \"nova-cell0-conductor-0\" (UID: \"cd75368d-b8a0-41e9-8e92-f5bcc7e91fb6\") " pod="openstack/nova-cell0-conductor-0" Oct 08 14:44:08 crc kubenswrapper[4624]: I1008 14:44:08.253133 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 08 14:44:08 crc kubenswrapper[4624]: I1008 14:44:08.772351 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 08 14:44:08 crc kubenswrapper[4624]: W1008 14:44:08.786509 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd75368d_b8a0_41e9_8e92_f5bcc7e91fb6.slice/crio-21b05066ae6498473266ad8c9b6370883a7a0cb3d69f70ca037dfd113895a46d WatchSource:0}: Error finding container 21b05066ae6498473266ad8c9b6370883a7a0cb3d69f70ca037dfd113895a46d: Status 404 returned error can't find the container with id 21b05066ae6498473266ad8c9b6370883a7a0cb3d69f70ca037dfd113895a46d Oct 08 14:44:09 crc kubenswrapper[4624]: I1008 14:44:09.748700 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cd75368d-b8a0-41e9-8e92-f5bcc7e91fb6","Type":"ContainerStarted","Data":"51e15059007a4f4d6d2f8c2e635c193be59b050f976078668a41bc98af3014e4"} Oct 08 14:44:09 crc kubenswrapper[4624]: I1008 14:44:09.749023 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cd75368d-b8a0-41e9-8e92-f5bcc7e91fb6","Type":"ContainerStarted","Data":"21b05066ae6498473266ad8c9b6370883a7a0cb3d69f70ca037dfd113895a46d"} Oct 08 14:44:09 crc kubenswrapper[4624]: I1008 14:44:09.749041 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Oct 08 14:44:09 crc kubenswrapper[4624]: I1008 14:44:09.767678 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.767661794 podStartE2EDuration="2.767661794s" podCreationTimestamp="2025-10-08 14:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:44:09.761071808 +0000 UTC m=+1274.912006885" watchObservedRunningTime="2025-10-08 14:44:09.767661794 +0000 UTC m=+1274.918596871" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.375653 4624 util.go:48] "No ready sandbox for pod can be found. 
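
Two things worth noting in the block above. The cAdvisor warning ("Failed to process watch event ... Status 404") appears to be a benign startup race: the cgroup shows up before CRI-O has registered the container, and the ContainerStarted events confirm the pod comes up normally. And the pod_startup_latency_tracker entry is internally consistent: podStartSLOduration (2.767661794s) is exactly watchObservedRunningTime (14:44:09.767661794) minus podCreationTimestamp (14:44:07), and it equals podStartE2EDuration because both pull timestamps are the zero time, i.e. the image was already present. Checking the arithmetic in Go:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339Nano, "2025-10-08T14:44:07Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-10-08T14:44:09.767661794Z")
	fmt.Println(running.Sub(created).Seconds()) // 2.767661794, matching podStartSLOduration
}
```
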
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.486179 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-sg-core-conf-yaml\") pod \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.486215 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-log-httpd\") pod \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.486255 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-combined-ca-bundle\") pod \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.486272 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-run-httpd\") pod \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.486303 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-config-data\") pod \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.486348 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-ceilometer-tls-certs\") pod \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.486428 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vcmz\" (UniqueName: \"kubernetes.io/projected/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-kube-api-access-6vcmz\") pod \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.486478 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-scripts\") pod \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\" (UID: \"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70\") " Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.487311 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" (UID: "66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.490104 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" (UID: "66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.556855 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-scripts" (OuterVolumeSpecName: "scripts") pod "66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" (UID: "66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.556978 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-kube-api-access-6vcmz" (OuterVolumeSpecName: "kube-api-access-6vcmz") pod "66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" (UID: "66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70"). InnerVolumeSpecName "kube-api-access-6vcmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.590327 4624 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.590367 4624 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.590380 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vcmz\" (UniqueName: \"kubernetes.io/projected/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-kube-api-access-6vcmz\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.590397 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.606748 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" (UID: "66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.635777 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" (UID: "66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70"). InnerVolumeSpecName "ceilometer-tls-certs". 
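
The teardown above also shows the plugin split for ceilometer-0's volumes: run-httpd and log-httpd are kubernetes.io/empty-dir scratch space, while scripts, config-data, sg-core-conf-yaml, ceilometer-tls-certs, and combined-ca-bundle are kubernetes.io/secret tmpfs mounts. A sketch of those volumes as API objects; the secret names are taken from the reflector cache entries further below, and the volume-to-secret mapping is an inference, not the operator's actual manifest:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	emptyDir := func(name string) corev1.Volume {
		return corev1.Volume{Name: name,
			VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}}
	}
	secret := func(name, secretName string) corev1.Volume {
		return corev1.Volume{Name: name,
			VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: secretName}}}
	}
	vols := []corev1.Volume{
		emptyDir("run-httpd"),
		emptyDir("log-httpd"),
		secret("scripts", "ceilometer-scripts"),
		secret("config-data", "ceilometer-config-data"),
		secret("ceilometer-tls-certs", "cert-ceilometer-internal-svc"),
	}
	for _, v := range vols {
		fmt.Println(v.Name)
	}
}
```
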
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.657685 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" (UID: "66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.692103 4624 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.692145 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.692162 4624 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.697184 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-config-data" (OuterVolumeSpecName: "config-data") pod "66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" (UID: "66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.763799 4624 generic.go:334] "Generic (PLEG): container finished" podID="66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" containerID="21fd734c97324a588d63a260b011444f31cf77724bd1f9b9ef909ee4bfbfc84b" exitCode=0 Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.763952 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70","Type":"ContainerDied","Data":"21fd734c97324a588d63a260b011444f31cf77724bd1f9b9ef909ee4bfbfc84b"} Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.764186 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70","Type":"ContainerDied","Data":"c8d00fc19fbff6f1e187a86e260c50286bc3c6ce37c19d9ff1d8bbe973c91912"} Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.764215 4624 scope.go:117] "RemoveContainer" containerID="427e256e9c0e718eb40dd73039ce992125367541e12bf9a2f9ecb9a86cf04c81" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.764041 4624 util.go:48] "No ready sandbox for pod can be found. 
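
In the entries that follow, SyncLoop DELETE and REMOVE for the old ceilometer-0 are immediately chased by a SyncLoop ADD for a replacement with the same name but a new UID (66eac5cd-... gives way to bbc13ba3-...). Every per-pod path, volume, and resource-manager state in this log is keyed by UID, which is why the full unmount/remount cycle repeats; the replacement itself receives another DELETE ten seconds later (14:44:14, below), starting the cycle again. A client-go sketch for checking which UID currently backs the pod, assuming a standard kubeconfig location:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("openstack").Get(context.TODO(),
		"ceilometer-0", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(pod.UID) // bbc13ba3-... once the replacement above is admitted
}
```
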
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.804061 4624 scope.go:117] "RemoveContainer" containerID="7f33f93eacae21a145a5a408d0cb182b05aa4359644b09f35bc7c1a3ce2b14f3" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.806986 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.808088 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.818416 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.844947 4624 scope.go:117] "RemoveContainer" containerID="21fd734c97324a588d63a260b011444f31cf77724bd1f9b9ef909ee4bfbfc84b" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.863710 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:44:10 crc kubenswrapper[4624]: E1008 14:44:10.864236 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" containerName="proxy-httpd" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.864252 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" containerName="proxy-httpd" Oct 08 14:44:10 crc kubenswrapper[4624]: E1008 14:44:10.864274 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" containerName="ceilometer-central-agent" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.864285 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" containerName="ceilometer-central-agent" Oct 08 14:44:10 crc kubenswrapper[4624]: E1008 14:44:10.864316 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" containerName="ceilometer-notification-agent" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.864326 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" containerName="ceilometer-notification-agent" Oct 08 14:44:10 crc kubenswrapper[4624]: E1008 14:44:10.864340 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" containerName="sg-core" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.864347 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" containerName="sg-core" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.864558 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" containerName="ceilometer-central-agent" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.864587 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" containerName="sg-core" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.864608 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" containerName="ceilometer-notification-agent" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.864621 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" containerName="proxy-httpd" Oct 08 
14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.866658 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.870355 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.870583 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.870718 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.882551 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:44:10 crc kubenswrapper[4624]: I1008 14:44:10.915970 4624 scope.go:117] "RemoveContainer" containerID="a350497398efe8c88f768c14d0d905db2b366c9f08bb542e911590a213593fbc" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.007783 4624 scope.go:117] "RemoveContainer" containerID="427e256e9c0e718eb40dd73039ce992125367541e12bf9a2f9ecb9a86cf04c81" Oct 08 14:44:11 crc kubenswrapper[4624]: E1008 14:44:11.008206 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"427e256e9c0e718eb40dd73039ce992125367541e12bf9a2f9ecb9a86cf04c81\": container with ID starting with 427e256e9c0e718eb40dd73039ce992125367541e12bf9a2f9ecb9a86cf04c81 not found: ID does not exist" containerID="427e256e9c0e718eb40dd73039ce992125367541e12bf9a2f9ecb9a86cf04c81" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.008247 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"427e256e9c0e718eb40dd73039ce992125367541e12bf9a2f9ecb9a86cf04c81"} err="failed to get container status \"427e256e9c0e718eb40dd73039ce992125367541e12bf9a2f9ecb9a86cf04c81\": rpc error: code = NotFound desc = could not find container \"427e256e9c0e718eb40dd73039ce992125367541e12bf9a2f9ecb9a86cf04c81\": container with ID starting with 427e256e9c0e718eb40dd73039ce992125367541e12bf9a2f9ecb9a86cf04c81 not found: ID does not exist" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.008280 4624 scope.go:117] "RemoveContainer" containerID="7f33f93eacae21a145a5a408d0cb182b05aa4359644b09f35bc7c1a3ce2b14f3" Oct 08 14:44:11 crc kubenswrapper[4624]: E1008 14:44:11.009030 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f33f93eacae21a145a5a408d0cb182b05aa4359644b09f35bc7c1a3ce2b14f3\": container with ID starting with 7f33f93eacae21a145a5a408d0cb182b05aa4359644b09f35bc7c1a3ce2b14f3 not found: ID does not exist" containerID="7f33f93eacae21a145a5a408d0cb182b05aa4359644b09f35bc7c1a3ce2b14f3" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.009067 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f33f93eacae21a145a5a408d0cb182b05aa4359644b09f35bc7c1a3ce2b14f3"} err="failed to get container status \"7f33f93eacae21a145a5a408d0cb182b05aa4359644b09f35bc7c1a3ce2b14f3\": rpc error: code = NotFound desc = could not find container \"7f33f93eacae21a145a5a408d0cb182b05aa4359644b09f35bc7c1a3ce2b14f3\": container with ID starting with 7f33f93eacae21a145a5a408d0cb182b05aa4359644b09f35bc7c1a3ce2b14f3 not found: ID does not exist" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.009091 4624 scope.go:117] 
"RemoveContainer" containerID="21fd734c97324a588d63a260b011444f31cf77724bd1f9b9ef909ee4bfbfc84b" Oct 08 14:44:11 crc kubenswrapper[4624]: E1008 14:44:11.009425 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21fd734c97324a588d63a260b011444f31cf77724bd1f9b9ef909ee4bfbfc84b\": container with ID starting with 21fd734c97324a588d63a260b011444f31cf77724bd1f9b9ef909ee4bfbfc84b not found: ID does not exist" containerID="21fd734c97324a588d63a260b011444f31cf77724bd1f9b9ef909ee4bfbfc84b" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.009468 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21fd734c97324a588d63a260b011444f31cf77724bd1f9b9ef909ee4bfbfc84b"} err="failed to get container status \"21fd734c97324a588d63a260b011444f31cf77724bd1f9b9ef909ee4bfbfc84b\": rpc error: code = NotFound desc = could not find container \"21fd734c97324a588d63a260b011444f31cf77724bd1f9b9ef909ee4bfbfc84b\": container with ID starting with 21fd734c97324a588d63a260b011444f31cf77724bd1f9b9ef909ee4bfbfc84b not found: ID does not exist" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.009510 4624 scope.go:117] "RemoveContainer" containerID="a350497398efe8c88f768c14d0d905db2b366c9f08bb542e911590a213593fbc" Oct 08 14:44:11 crc kubenswrapper[4624]: E1008 14:44:11.009865 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a350497398efe8c88f768c14d0d905db2b366c9f08bb542e911590a213593fbc\": container with ID starting with a350497398efe8c88f768c14d0d905db2b366c9f08bb542e911590a213593fbc not found: ID does not exist" containerID="a350497398efe8c88f768c14d0d905db2b366c9f08bb542e911590a213593fbc" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.009893 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a350497398efe8c88f768c14d0d905db2b366c9f08bb542e911590a213593fbc"} err="failed to get container status \"a350497398efe8c88f768c14d0d905db2b366c9f08bb542e911590a213593fbc\": rpc error: code = NotFound desc = could not find container \"a350497398efe8c88f768c14d0d905db2b366c9f08bb542e911590a213593fbc\": container with ID starting with a350497398efe8c88f768c14d0d905db2b366c9f08bb542e911590a213593fbc not found: ID does not exist" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.010497 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc13ba3-3410-4f96-be55-0424e8e74e2c-run-httpd\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.010567 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-scripts\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.010597 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 
14:44:11.010687 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-config-data\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.010722 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg89s\" (UniqueName: \"kubernetes.io/projected/bbc13ba3-3410-4f96-be55-0424e8e74e2c-kube-api-access-vg89s\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.010757 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.011064 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.011122 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc13ba3-3410-4f96-be55-0424e8e74e2c-log-httpd\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.113267 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-scripts\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.113316 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.113357 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-config-data\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.113396 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg89s\" (UniqueName: \"kubernetes.io/projected/bbc13ba3-3410-4f96-be55-0424e8e74e2c-kube-api-access-vg89s\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.113440 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.113566 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.113596 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc13ba3-3410-4f96-be55-0424e8e74e2c-log-httpd\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.113697 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc13ba3-3410-4f96-be55-0424e8e74e2c-run-httpd\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.114067 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc13ba3-3410-4f96-be55-0424e8e74e2c-log-httpd\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.114285 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc13ba3-3410-4f96-be55-0424e8e74e2c-run-httpd\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.119676 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.124138 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.124397 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.124438 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-config-data\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.138357 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-scripts\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 
14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.151383 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg89s\" (UniqueName: \"kubernetes.io/projected/bbc13ba3-3410-4f96-be55-0424e8e74e2c-kube-api-access-vg89s\") pod \"ceilometer-0\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") " pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.208336 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.482767 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70" path="/var/lib/kubelet/pods/66eac5cd-4f0a-44cf-b7cb-87eba2a6ba70/volumes" Oct 08 14:44:11 crc kubenswrapper[4624]: I1008 14:44:11.804353 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:44:11 crc kubenswrapper[4624]: W1008 14:44:11.812041 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbc13ba3_3410_4f96_be55_0424e8e74e2c.slice/crio-5a40941d04525fce2c7350bf1b635d6110f13ae4260b8e987e3b163aece80899 WatchSource:0}: Error finding container 5a40941d04525fce2c7350bf1b635d6110f13ae4260b8e987e3b163aece80899: Status 404 returned error can't find the container with id 5a40941d04525fce2c7350bf1b635d6110f13ae4260b8e987e3b163aece80899 Oct 08 14:44:12 crc kubenswrapper[4624]: I1008 14:44:12.795849 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc13ba3-3410-4f96-be55-0424e8e74e2c","Type":"ContainerStarted","Data":"24f6fb667eb68c82e78b3977f15a548cae94696fabe85307b40b8a42b3bc08de"} Oct 08 14:44:12 crc kubenswrapper[4624]: I1008 14:44:12.796360 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc13ba3-3410-4f96-be55-0424e8e74e2c","Type":"ContainerStarted","Data":"b358a498261f5b79dc5d98dd1785ca7e90933b57d8f0e195276f5f3f1867a56c"} Oct 08 14:44:12 crc kubenswrapper[4624]: I1008 14:44:12.796370 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc13ba3-3410-4f96-be55-0424e8e74e2c","Type":"ContainerStarted","Data":"5a40941d04525fce2c7350bf1b635d6110f13ae4260b8e987e3b163aece80899"} Oct 08 14:44:14 crc kubenswrapper[4624]: I1008 14:44:14.048898 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:44:14 crc kubenswrapper[4624]: I1008 14:44:14.611577 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f6cd65c74-7vqb5" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Oct 08 14:44:14 crc kubenswrapper[4624]: I1008 14:44:14.720055 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67f45f8444-g8bbs" podUID="378be2ad-3335-409f-b2eb-60b3997ed4f8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Oct 08 14:44:14 crc kubenswrapper[4624]: I1008 14:44:14.819094 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bbc13ba3-3410-4f96-be55-0424e8e74e2c","Type":"ContainerStarted","Data":"eff2510520bb95847384264d2f520cf7372a12c7a0de764a7508a6ccdffc09b5"} Oct 08 14:44:15 crc kubenswrapper[4624]: I1008 14:44:15.860446 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc13ba3-3410-4f96-be55-0424e8e74e2c","Type":"ContainerStarted","Data":"282237c63eebff1d3e2db7b8eacbd58af6c82f4f290fa612f2f6e3a40a70de5d"} Oct 08 14:44:15 crc kubenswrapper[4624]: I1008 14:44:15.862860 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 08 14:44:15 crc kubenswrapper[4624]: I1008 14:44:15.861490 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbc13ba3-3410-4f96-be55-0424e8e74e2c" containerName="proxy-httpd" containerID="cri-o://282237c63eebff1d3e2db7b8eacbd58af6c82f4f290fa612f2f6e3a40a70de5d" gracePeriod=30 Oct 08 14:44:15 crc kubenswrapper[4624]: I1008 14:44:15.860823 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbc13ba3-3410-4f96-be55-0424e8e74e2c" containerName="ceilometer-central-agent" containerID="cri-o://b358a498261f5b79dc5d98dd1785ca7e90933b57d8f0e195276f5f3f1867a56c" gracePeriod=30 Oct 08 14:44:15 crc kubenswrapper[4624]: I1008 14:44:15.861522 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbc13ba3-3410-4f96-be55-0424e8e74e2c" containerName="ceilometer-notification-agent" containerID="cri-o://24f6fb667eb68c82e78b3977f15a548cae94696fabe85307b40b8a42b3bc08de" gracePeriod=30 Oct 08 14:44:15 crc kubenswrapper[4624]: I1008 14:44:15.861507 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbc13ba3-3410-4f96-be55-0424e8e74e2c" containerName="sg-core" containerID="cri-o://eff2510520bb95847384264d2f520cf7372a12c7a0de764a7508a6ccdffc09b5" gracePeriod=30 Oct 08 14:44:15 crc kubenswrapper[4624]: I1008 14:44:15.902603 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.632159576 podStartE2EDuration="5.902562779s" podCreationTimestamp="2025-10-08 14:44:10 +0000 UTC" firstStartedPulling="2025-10-08 14:44:11.815215534 +0000 UTC m=+1276.966150611" lastFinishedPulling="2025-10-08 14:44:15.085618727 +0000 UTC m=+1280.236553814" observedRunningTime="2025-10-08 14:44:15.897101581 +0000 UTC m=+1281.048036658" watchObservedRunningTime="2025-10-08 14:44:15.902562779 +0000 UTC m=+1281.053497856" Oct 08 14:44:16 crc kubenswrapper[4624]: I1008 14:44:16.870217 4624 generic.go:334] "Generic (PLEG): container finished" podID="bbc13ba3-3410-4f96-be55-0424e8e74e2c" containerID="282237c63eebff1d3e2db7b8eacbd58af6c82f4f290fa612f2f6e3a40a70de5d" exitCode=0 Oct 08 14:44:16 crc kubenswrapper[4624]: I1008 14:44:16.870523 4624 generic.go:334] "Generic (PLEG): container finished" podID="bbc13ba3-3410-4f96-be55-0424e8e74e2c" containerID="eff2510520bb95847384264d2f520cf7372a12c7a0de764a7508a6ccdffc09b5" exitCode=2 Oct 08 14:44:16 crc kubenswrapper[4624]: I1008 14:44:16.870535 4624 generic.go:334] "Generic (PLEG): container finished" podID="bbc13ba3-3410-4f96-be55-0424e8e74e2c" containerID="24f6fb667eb68c82e78b3977f15a548cae94696fabe85307b40b8a42b3bc08de" exitCode=0 Oct 08 14:44:16 crc kubenswrapper[4624]: I1008 14:44:16.870290 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bbc13ba3-3410-4f96-be55-0424e8e74e2c","Type":"ContainerDied","Data":"282237c63eebff1d3e2db7b8eacbd58af6c82f4f290fa612f2f6e3a40a70de5d"} Oct 08 14:44:16 crc kubenswrapper[4624]: I1008 14:44:16.870571 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc13ba3-3410-4f96-be55-0424e8e74e2c","Type":"ContainerDied","Data":"eff2510520bb95847384264d2f520cf7372a12c7a0de764a7508a6ccdffc09b5"} Oct 08 14:44:16 crc kubenswrapper[4624]: I1008 14:44:16.870585 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc13ba3-3410-4f96-be55-0424e8e74e2c","Type":"ContainerDied","Data":"24f6fb667eb68c82e78b3977f15a548cae94696fabe85307b40b8a42b3bc08de"} Oct 08 14:44:18 crc kubenswrapper[4624]: I1008 14:44:18.281093 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 08 14:44:18 crc kubenswrapper[4624]: I1008 14:44:18.799701 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-8rj4w"] Oct 08 14:44:18 crc kubenswrapper[4624]: I1008 14:44:18.801210 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8rj4w" Oct 08 14:44:18 crc kubenswrapper[4624]: I1008 14:44:18.807776 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Oct 08 14:44:18 crc kubenswrapper[4624]: I1008 14:44:18.817396 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Oct 08 14:44:18 crc kubenswrapper[4624]: I1008 14:44:18.867713 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae634848-a6be-4622-96b1-36c4f0427893-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8rj4w\" (UID: \"ae634848-a6be-4622-96b1-36c4f0427893\") " pod="openstack/nova-cell0-cell-mapping-8rj4w" Oct 08 14:44:18 crc kubenswrapper[4624]: I1008 14:44:18.867885 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae634848-a6be-4622-96b1-36c4f0427893-scripts\") pod \"nova-cell0-cell-mapping-8rj4w\" (UID: \"ae634848-a6be-4622-96b1-36c4f0427893\") " pod="openstack/nova-cell0-cell-mapping-8rj4w" Oct 08 14:44:18 crc kubenswrapper[4624]: I1008 14:44:18.867931 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsfl9\" (UniqueName: \"kubernetes.io/projected/ae634848-a6be-4622-96b1-36c4f0427893-kube-api-access-hsfl9\") pod \"nova-cell0-cell-mapping-8rj4w\" (UID: \"ae634848-a6be-4622-96b1-36c4f0427893\") " pod="openstack/nova-cell0-cell-mapping-8rj4w" Oct 08 14:44:18 crc kubenswrapper[4624]: I1008 14:44:18.867954 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae634848-a6be-4622-96b1-36c4f0427893-config-data\") pod \"nova-cell0-cell-mapping-8rj4w\" (UID: \"ae634848-a6be-4622-96b1-36c4f0427893\") " pod="openstack/nova-cell0-cell-mapping-8rj4w" Oct 08 14:44:18 crc kubenswrapper[4624]: I1008 14:44:18.881768 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8rj4w"] Oct 08 14:44:18 crc kubenswrapper[4624]: I1008 14:44:18.969624 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/ae634848-a6be-4622-96b1-36c4f0427893-scripts\") pod \"nova-cell0-cell-mapping-8rj4w\" (UID: \"ae634848-a6be-4622-96b1-36c4f0427893\") " pod="openstack/nova-cell0-cell-mapping-8rj4w" Oct 08 14:44:18 crc kubenswrapper[4624]: I1008 14:44:18.969705 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsfl9\" (UniqueName: \"kubernetes.io/projected/ae634848-a6be-4622-96b1-36c4f0427893-kube-api-access-hsfl9\") pod \"nova-cell0-cell-mapping-8rj4w\" (UID: \"ae634848-a6be-4622-96b1-36c4f0427893\") " pod="openstack/nova-cell0-cell-mapping-8rj4w" Oct 08 14:44:18 crc kubenswrapper[4624]: I1008 14:44:18.969733 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae634848-a6be-4622-96b1-36c4f0427893-config-data\") pod \"nova-cell0-cell-mapping-8rj4w\" (UID: \"ae634848-a6be-4622-96b1-36c4f0427893\") " pod="openstack/nova-cell0-cell-mapping-8rj4w" Oct 08 14:44:18 crc kubenswrapper[4624]: I1008 14:44:18.969874 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae634848-a6be-4622-96b1-36c4f0427893-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8rj4w\" (UID: \"ae634848-a6be-4622-96b1-36c4f0427893\") " pod="openstack/nova-cell0-cell-mapping-8rj4w" Oct 08 14:44:18 crc kubenswrapper[4624]: I1008 14:44:18.976341 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae634848-a6be-4622-96b1-36c4f0427893-scripts\") pod \"nova-cell0-cell-mapping-8rj4w\" (UID: \"ae634848-a6be-4622-96b1-36c4f0427893\") " pod="openstack/nova-cell0-cell-mapping-8rj4w" Oct 08 14:44:18 crc kubenswrapper[4624]: I1008 14:44:18.979694 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae634848-a6be-4622-96b1-36c4f0427893-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8rj4w\" (UID: \"ae634848-a6be-4622-96b1-36c4f0427893\") " pod="openstack/nova-cell0-cell-mapping-8rj4w" Oct 08 14:44:18 crc kubenswrapper[4624]: I1008 14:44:18.980487 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae634848-a6be-4622-96b1-36c4f0427893-config-data\") pod \"nova-cell0-cell-mapping-8rj4w\" (UID: \"ae634848-a6be-4622-96b1-36c4f0427893\") " pod="openstack/nova-cell0-cell-mapping-8rj4w" Oct 08 14:44:18 crc kubenswrapper[4624]: I1008 14:44:18.997650 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsfl9\" (UniqueName: \"kubernetes.io/projected/ae634848-a6be-4622-96b1-36c4f0427893-kube-api-access-hsfl9\") pod \"nova-cell0-cell-mapping-8rj4w\" (UID: \"ae634848-a6be-4622-96b1-36c4f0427893\") " pod="openstack/nova-cell0-cell-mapping-8rj4w" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.067658 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.069595 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.073029 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.118501 4624 util.go:30] "No sandbox for pod can be found. 
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.118501 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8rj4w"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.182975 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4b4833-604d-4011-a7b6-668f7116028b-config-data\") pod \"nova-metadata-0\" (UID: \"bb4b4833-604d-4011-a7b6-668f7116028b\") " pod="openstack/nova-metadata-0"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.183738 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9n7g\" (UniqueName: \"kubernetes.io/projected/bb4b4833-604d-4011-a7b6-668f7116028b-kube-api-access-l9n7g\") pod \"nova-metadata-0\" (UID: \"bb4b4833-604d-4011-a7b6-668f7116028b\") " pod="openstack/nova-metadata-0"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.183808 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4b4833-604d-4011-a7b6-668f7116028b-logs\") pod \"nova-metadata-0\" (UID: \"bb4b4833-604d-4011-a7b6-668f7116028b\") " pod="openstack/nova-metadata-0"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.183881 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4b4833-604d-4011-a7b6-668f7116028b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb4b4833-604d-4011-a7b6-668f7116028b\") " pod="openstack/nova-metadata-0"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.189769 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.289686 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.291178 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9n7g\" (UniqueName: \"kubernetes.io/projected/bb4b4833-604d-4011-a7b6-668f7116028b-kube-api-access-l9n7g\") pod \"nova-metadata-0\" (UID: \"bb4b4833-604d-4011-a7b6-668f7116028b\") " pod="openstack/nova-metadata-0"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.291294 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4b4833-604d-4011-a7b6-668f7116028b-logs\") pod \"nova-metadata-0\" (UID: \"bb4b4833-604d-4011-a7b6-668f7116028b\") " pod="openstack/nova-metadata-0"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.291381 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4b4833-604d-4011-a7b6-668f7116028b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb4b4833-604d-4011-a7b6-668f7116028b\") " pod="openstack/nova-metadata-0"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.291516 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4b4833-604d-4011-a7b6-668f7116028b-config-data\") pod \"nova-metadata-0\" (UID: \"bb4b4833-604d-4011-a7b6-668f7116028b\") " pod="openstack/nova-metadata-0"
pod \"nova-metadata-0\" (UID: \"bb4b4833-604d-4011-a7b6-668f7116028b\") " pod="openstack/nova-metadata-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.292886 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.309373 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.321374 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.335773 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.339028 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.373440 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9n7g\" (UniqueName: \"kubernetes.io/projected/bb4b4833-604d-4011-a7b6-668f7116028b-kube-api-access-l9n7g\") pod \"nova-metadata-0\" (UID: \"bb4b4833-604d-4011-a7b6-668f7116028b\") " pod="openstack/nova-metadata-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.374212 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4b4833-604d-4011-a7b6-668f7116028b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb4b4833-604d-4011-a7b6-668f7116028b\") " pod="openstack/nova-metadata-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.381418 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4b4833-604d-4011-a7b6-668f7116028b-config-data\") pod \"nova-metadata-0\" (UID: \"bb4b4833-604d-4011-a7b6-668f7116028b\") " pod="openstack/nova-metadata-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.404557 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7w9d\" (UniqueName: \"kubernetes.io/projected/1a467f43-58ee-491c-8f01-fcef84e2e3ae-kube-api-access-f7w9d\") pod \"nova-scheduler-0\" (UID: \"1a467f43-58ee-491c-8f01-fcef84e2e3ae\") " pod="openstack/nova-scheduler-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.404946 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45712564-9cf2-4eb6-ad8f-96732e8823a4-logs\") pod \"nova-api-0\" (UID: \"45712564-9cf2-4eb6-ad8f-96732e8823a4\") " pod="openstack/nova-api-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.405030 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45712564-9cf2-4eb6-ad8f-96732e8823a4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"45712564-9cf2-4eb6-ad8f-96732e8823a4\") " pod="openstack/nova-api-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.405160 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvm85\" (UniqueName: \"kubernetes.io/projected/45712564-9cf2-4eb6-ad8f-96732e8823a4-kube-api-access-wvm85\") pod \"nova-api-0\" (UID: \"45712564-9cf2-4eb6-ad8f-96732e8823a4\") " pod="openstack/nova-api-0" Oct 08 14:44:19 crc 
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.405232 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a467f43-58ee-491c-8f01-fcef84e2e3ae-config-data\") pod \"nova-scheduler-0\" (UID: \"1a467f43-58ee-491c-8f01-fcef84e2e3ae\") " pod="openstack/nova-scheduler-0"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.407025 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45712564-9cf2-4eb6-ad8f-96732e8823a4-config-data\") pod \"nova-api-0\" (UID: \"45712564-9cf2-4eb6-ad8f-96732e8823a4\") " pod="openstack/nova-api-0"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.407123 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a467f43-58ee-491c-8f01-fcef84e2e3ae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1a467f43-58ee-491c-8f01-fcef84e2e3ae\") " pod="openstack/nova-scheduler-0"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.426905 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.445205 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.457235 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.458562 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.494330 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.520384 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7w9d\" (UniqueName: \"kubernetes.io/projected/1a467f43-58ee-491c-8f01-fcef84e2e3ae-kube-api-access-f7w9d\") pod \"nova-scheduler-0\" (UID: \"1a467f43-58ee-491c-8f01-fcef84e2e3ae\") " pod="openstack/nova-scheduler-0"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.520462 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45712564-9cf2-4eb6-ad8f-96732e8823a4-logs\") pod \"nova-api-0\" (UID: \"45712564-9cf2-4eb6-ad8f-96732e8823a4\") " pod="openstack/nova-api-0"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.520499 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5217c8fb-0a56-4f45-8c0e-5465edc9e9ab-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5217c8fb-0a56-4f45-8c0e-5465edc9e9ab\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.520544 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45712564-9cf2-4eb6-ad8f-96732e8823a4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"45712564-9cf2-4eb6-ad8f-96732e8823a4\") " pod="openstack/nova-api-0"
\"kube-api-access-wvm85\" (UniqueName: \"kubernetes.io/projected/45712564-9cf2-4eb6-ad8f-96732e8823a4-kube-api-access-wvm85\") pod \"nova-api-0\" (UID: \"45712564-9cf2-4eb6-ad8f-96732e8823a4\") " pod="openstack/nova-api-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.520656 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr2vc\" (UniqueName: \"kubernetes.io/projected/5217c8fb-0a56-4f45-8c0e-5465edc9e9ab-kube-api-access-kr2vc\") pod \"nova-cell1-novncproxy-0\" (UID: \"5217c8fb-0a56-4f45-8c0e-5465edc9e9ab\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.520702 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a467f43-58ee-491c-8f01-fcef84e2e3ae-config-data\") pod \"nova-scheduler-0\" (UID: \"1a467f43-58ee-491c-8f01-fcef84e2e3ae\") " pod="openstack/nova-scheduler-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.520722 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45712564-9cf2-4eb6-ad8f-96732e8823a4-config-data\") pod \"nova-api-0\" (UID: \"45712564-9cf2-4eb6-ad8f-96732e8823a4\") " pod="openstack/nova-api-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.520753 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a467f43-58ee-491c-8f01-fcef84e2e3ae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1a467f43-58ee-491c-8f01-fcef84e2e3ae\") " pod="openstack/nova-scheduler-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.520791 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5217c8fb-0a56-4f45-8c0e-5465edc9e9ab-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5217c8fb-0a56-4f45-8c0e-5465edc9e9ab\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.537938 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45712564-9cf2-4eb6-ad8f-96732e8823a4-logs\") pod \"nova-api-0\" (UID: \"45712564-9cf2-4eb6-ad8f-96732e8823a4\") " pod="openstack/nova-api-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.560299 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45712564-9cf2-4eb6-ad8f-96732e8823a4-config-data\") pod \"nova-api-0\" (UID: \"45712564-9cf2-4eb6-ad8f-96732e8823a4\") " pod="openstack/nova-api-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.562514 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a467f43-58ee-491c-8f01-fcef84e2e3ae-config-data\") pod \"nova-scheduler-0\" (UID: \"1a467f43-58ee-491c-8f01-fcef84e2e3ae\") " pod="openstack/nova-scheduler-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.563128 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45712564-9cf2-4eb6-ad8f-96732e8823a4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"45712564-9cf2-4eb6-ad8f-96732e8823a4\") " pod="openstack/nova-api-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.567491 4624 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a467f43-58ee-491c-8f01-fcef84e2e3ae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1a467f43-58ee-491c-8f01-fcef84e2e3ae\") " pod="openstack/nova-scheduler-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.580409 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.580855 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.599349 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvm85\" (UniqueName: \"kubernetes.io/projected/45712564-9cf2-4eb6-ad8f-96732e8823a4-kube-api-access-wvm85\") pod \"nova-api-0\" (UID: \"45712564-9cf2-4eb6-ad8f-96732e8823a4\") " pod="openstack/nova-api-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.608366 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7w9d\" (UniqueName: \"kubernetes.io/projected/1a467f43-58ee-491c-8f01-fcef84e2e3ae-kube-api-access-f7w9d\") pod \"nova-scheduler-0\" (UID: \"1a467f43-58ee-491c-8f01-fcef84e2e3ae\") " pod="openstack/nova-scheduler-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.623093 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5217c8fb-0a56-4f45-8c0e-5465edc9e9ab-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5217c8fb-0a56-4f45-8c0e-5465edc9e9ab\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.623182 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr2vc\" (UniqueName: \"kubernetes.io/projected/5217c8fb-0a56-4f45-8c0e-5465edc9e9ab-kube-api-access-kr2vc\") pod \"nova-cell1-novncproxy-0\" (UID: \"5217c8fb-0a56-4f45-8c0e-5465edc9e9ab\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.623226 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5217c8fb-0a56-4f45-8c0e-5465edc9e9ab-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5217c8fb-0a56-4f45-8c0e-5465edc9e9ab\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.637787 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b7bb65955-qb2vf"] Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.639939 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5217c8fb-0a56-4f45-8c0e-5465edc9e9ab-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5217c8fb-0a56-4f45-8c0e-5465edc9e9ab\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.652013 4624 util.go:30] "No sandbox for pod can be found. 
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.652013 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b7bb65955-qb2vf"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.653894 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b7bb65955-qb2vf"]
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.656028 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5217c8fb-0a56-4f45-8c0e-5465edc9e9ab-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5217c8fb-0a56-4f45-8c0e-5465edc9e9ab\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.667107 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr2vc\" (UniqueName: \"kubernetes.io/projected/5217c8fb-0a56-4f45-8c0e-5465edc9e9ab-kube-api-access-kr2vc\") pod \"nova-cell1-novncproxy-0\" (UID: \"5217c8fb-0a56-4f45-8c0e-5465edc9e9ab\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.677555 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8rj4w"]
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.706558 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.725143 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-ovsdbserver-sb\") pod \"dnsmasq-dns-b7bb65955-qb2vf\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") " pod="openstack/dnsmasq-dns-b7bb65955-qb2vf"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.725609 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-dns-svc\") pod \"dnsmasq-dns-b7bb65955-qb2vf\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") " pod="openstack/dnsmasq-dns-b7bb65955-qb2vf"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.725734 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8444\" (UniqueName: \"kubernetes.io/projected/300a4df0-470a-4677-ba91-c9676e430849-kube-api-access-m8444\") pod \"dnsmasq-dns-b7bb65955-qb2vf\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") " pod="openstack/dnsmasq-dns-b7bb65955-qb2vf"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.726128 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-dns-swift-storage-0\") pod \"dnsmasq-dns-b7bb65955-qb2vf\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") " pod="openstack/dnsmasq-dns-b7bb65955-qb2vf"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.726220 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-config\") pod \"dnsmasq-dns-b7bb65955-qb2vf\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") " pod="openstack/dnsmasq-dns-b7bb65955-qb2vf"
\"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-ovsdbserver-nb\") pod \"dnsmasq-dns-b7bb65955-qb2vf\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") " pod="openstack/dnsmasq-dns-b7bb65955-qb2vf" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.725546 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.837679 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-dns-swift-storage-0\") pod \"dnsmasq-dns-b7bb65955-qb2vf\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") " pod="openstack/dnsmasq-dns-b7bb65955-qb2vf" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.838028 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-config\") pod \"dnsmasq-dns-b7bb65955-qb2vf\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") " pod="openstack/dnsmasq-dns-b7bb65955-qb2vf" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.838081 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-ovsdbserver-nb\") pod \"dnsmasq-dns-b7bb65955-qb2vf\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") " pod="openstack/dnsmasq-dns-b7bb65955-qb2vf" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.838227 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-ovsdbserver-sb\") pod \"dnsmasq-dns-b7bb65955-qb2vf\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") " pod="openstack/dnsmasq-dns-b7bb65955-qb2vf" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.838278 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-dns-svc\") pod \"dnsmasq-dns-b7bb65955-qb2vf\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") " pod="openstack/dnsmasq-dns-b7bb65955-qb2vf" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.838316 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8444\" (UniqueName: \"kubernetes.io/projected/300a4df0-470a-4677-ba91-c9676e430849-kube-api-access-m8444\") pod \"dnsmasq-dns-b7bb65955-qb2vf\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") " pod="openstack/dnsmasq-dns-b7bb65955-qb2vf" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.840074 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-ovsdbserver-nb\") pod \"dnsmasq-dns-b7bb65955-qb2vf\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") " pod="openstack/dnsmasq-dns-b7bb65955-qb2vf" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.840773 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-dns-swift-storage-0\") pod \"dnsmasq-dns-b7bb65955-qb2vf\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") " pod="openstack/dnsmasq-dns-b7bb65955-qb2vf" Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.841018 4624 
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.841018 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-ovsdbserver-sb\") pod \"dnsmasq-dns-b7bb65955-qb2vf\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") " pod="openstack/dnsmasq-dns-b7bb65955-qb2vf"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.841697 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-config\") pod \"dnsmasq-dns-b7bb65955-qb2vf\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") " pod="openstack/dnsmasq-dns-b7bb65955-qb2vf"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.841722 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-dns-svc\") pod \"dnsmasq-dns-b7bb65955-qb2vf\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") " pod="openstack/dnsmasq-dns-b7bb65955-qb2vf"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.870257 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.870605 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8444\" (UniqueName: \"kubernetes.io/projected/300a4df0-470a-4677-ba91-c9676e430849-kube-api-access-m8444\") pod \"dnsmasq-dns-b7bb65955-qb2vf\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") " pod="openstack/dnsmasq-dns-b7bb65955-qb2vf"
Oct 08 14:44:19 crc kubenswrapper[4624]: I1008 14:44:19.910539 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8rj4w" event={"ID":"ae634848-a6be-4622-96b1-36c4f0427893","Type":"ContainerStarted","Data":"3a32808aff38ad3e8eea6b1a0ba332d8c67c80e31c43dd7deec119093afe6f86"}
Oct 08 14:44:20 crc kubenswrapper[4624]: I1008 14:44:20.007401 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b7bb65955-qb2vf"
Oct 08 14:44:20 crc kubenswrapper[4624]: I1008 14:44:20.370478 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Oct 08 14:44:20 crc kubenswrapper[4624]: I1008 14:44:20.779728 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 08 14:44:20 crc kubenswrapper[4624]: I1008 14:44:20.818061 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 08 14:44:20 crc kubenswrapper[4624]: I1008 14:44:20.941844 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45712564-9cf2-4eb6-ad8f-96732e8823a4","Type":"ContainerStarted","Data":"75645983e6a8b34195ebc78a1d2193ab1f8fa1ed0e1051e33d9e9db039ae6a65"}
Oct 08 14:44:20 crc kubenswrapper[4624]: I1008 14:44:20.948732 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8rj4w" event={"ID":"ae634848-a6be-4622-96b1-36c4f0427893","Type":"ContainerStarted","Data":"f2328f4147a97e2e03d95b2e678a127ef5b569676927a37b5563be8b91393291"}
Oct 08 14:44:20 crc kubenswrapper[4624]: I1008 14:44:20.954771 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1a467f43-58ee-491c-8f01-fcef84e2e3ae","Type":"ContainerStarted","Data":"42919b81178245a15d7a61fc4594a0047d55d117773c0c1db1291d3d5df1428c"}
Oct 08 14:44:20 crc kubenswrapper[4624]: I1008 14:44:20.964153 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb4b4833-604d-4011-a7b6-668f7116028b","Type":"ContainerStarted","Data":"0b46bd251841817b2205adc11c34a69273df0dde47b1cd834637d45d2d277085"}
Oct 08 14:44:20 crc kubenswrapper[4624]: I1008 14:44:20.999145 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-8rj4w" podStartSLOduration=2.999129232 podStartE2EDuration="2.999129232s" podCreationTimestamp="2025-10-08 14:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:44:20.97998062 +0000 UTC m=+1286.130915697" watchObservedRunningTime="2025-10-08 14:44:20.999129232 +0000 UTC m=+1286.150064309"
Oct 08 14:44:21 crc kubenswrapper[4624]: I1008 14:44:21.001692 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b7bb65955-qb2vf"]
Oct 08 14:44:21 crc kubenswrapper[4624]: I1008 14:44:21.107848 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Oct 08 14:44:21 crc kubenswrapper[4624]: I1008 14:44:21.977253 4624 generic.go:334] "Generic (PLEG): container finished" podID="300a4df0-470a-4677-ba91-c9676e430849" containerID="27a2a9616f19d555481fe0cb4dc59c5f029a11373cf0dda125f275ce14e6effc" exitCode=0
Oct 08 14:44:21 crc kubenswrapper[4624]: I1008 14:44:21.977461 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b7bb65955-qb2vf" event={"ID":"300a4df0-470a-4677-ba91-c9676e430849","Type":"ContainerDied","Data":"27a2a9616f19d555481fe0cb4dc59c5f029a11373cf0dda125f275ce14e6effc"}
Oct 08 14:44:21 crc kubenswrapper[4624]: I1008 14:44:21.977620 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b7bb65955-qb2vf" event={"ID":"300a4df0-470a-4677-ba91-c9676e430849","Type":"ContainerStarted","Data":"ba67f3371f534420fa3aaaf9d914c15cd91b54ec86096f45987a06566f046934"}
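nova-cell0-cell-mapping-8rj4w above reports podStartSLOduration as a bare float with the same value as podStartE2EDuration because nothing had to be pulled (both pull timestamps are the zero time 0001-01-01). A sketch that tabulates these latency-tracker records; input path assumed as before:

```python
import re

LAT = re.compile(
    r'pod="(?P<pod>[^"]+)" podStartSLOduration=(?P<slo>[\d.]+) '
    r'podStartE2EDuration="(?P<e2e>[^"]+)"'
)

with open("kubelet.log") as f:
    for line in f:
        if "Observed pod startup duration" not in line:
            continue
        m = LAT.search(line)
        if m:
            print(f'{m["pod"]:45s} SLO={m["slo"]:>12s} E2E={m["e2e"]}')
```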
Oct 08 14:44:21 crc kubenswrapper[4624]: I1008 14:44:21.981363 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5217c8fb-0a56-4f45-8c0e-5465edc9e9ab","Type":"ContainerStarted","Data":"fc86d8cda0d3f338064b26978077162d6e49205fc54d24d6b219bef215e555d4"}
Oct 08 14:44:22 crc kubenswrapper[4624]: I1008 14:44:22.016510 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t62tf"]
Oct 08 14:44:22 crc kubenswrapper[4624]: I1008 14:44:22.017820 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t62tf"
Oct 08 14:44:22 crc kubenswrapper[4624]: I1008 14:44:22.041315 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Oct 08 14:44:22 crc kubenswrapper[4624]: I1008 14:44:22.041962 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Oct 08 14:44:22 crc kubenswrapper[4624]: I1008 14:44:22.059479 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t62tf"]
Oct 08 14:44:22 crc kubenswrapper[4624]: I1008 14:44:22.102604 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggz5h\" (UniqueName: \"kubernetes.io/projected/5e779391-b6f0-47f2-b670-0267f571c9ff-kube-api-access-ggz5h\") pod \"nova-cell1-conductor-db-sync-t62tf\" (UID: \"5e779391-b6f0-47f2-b670-0267f571c9ff\") " pod="openstack/nova-cell1-conductor-db-sync-t62tf"
Oct 08 14:44:22 crc kubenswrapper[4624]: I1008 14:44:22.102764 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e779391-b6f0-47f2-b670-0267f571c9ff-config-data\") pod \"nova-cell1-conductor-db-sync-t62tf\" (UID: \"5e779391-b6f0-47f2-b670-0267f571c9ff\") " pod="openstack/nova-cell1-conductor-db-sync-t62tf"
Oct 08 14:44:22 crc kubenswrapper[4624]: I1008 14:44:22.102997 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e779391-b6f0-47f2-b670-0267f571c9ff-scripts\") pod \"nova-cell1-conductor-db-sync-t62tf\" (UID: \"5e779391-b6f0-47f2-b670-0267f571c9ff\") " pod="openstack/nova-cell1-conductor-db-sync-t62tf"
Oct 08 14:44:22 crc kubenswrapper[4624]: I1008 14:44:22.103343 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e779391-b6f0-47f2-b670-0267f571c9ff-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t62tf\" (UID: \"5e779391-b6f0-47f2-b670-0267f571c9ff\") " pod="openstack/nova-cell1-conductor-db-sync-t62tf"
Oct 08 14:44:22 crc kubenswrapper[4624]: I1008 14:44:22.204835 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e779391-b6f0-47f2-b670-0267f571c9ff-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t62tf\" (UID: \"5e779391-b6f0-47f2-b670-0267f571c9ff\") " pod="openstack/nova-cell1-conductor-db-sync-t62tf"
\"5e779391-b6f0-47f2-b670-0267f571c9ff\") " pod="openstack/nova-cell1-conductor-db-sync-t62tf" Oct 08 14:44:22 crc kubenswrapper[4624]: I1008 14:44:22.204961 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e779391-b6f0-47f2-b670-0267f571c9ff-config-data\") pod \"nova-cell1-conductor-db-sync-t62tf\" (UID: \"5e779391-b6f0-47f2-b670-0267f571c9ff\") " pod="openstack/nova-cell1-conductor-db-sync-t62tf" Oct 08 14:44:22 crc kubenswrapper[4624]: I1008 14:44:22.205049 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e779391-b6f0-47f2-b670-0267f571c9ff-scripts\") pod \"nova-cell1-conductor-db-sync-t62tf\" (UID: \"5e779391-b6f0-47f2-b670-0267f571c9ff\") " pod="openstack/nova-cell1-conductor-db-sync-t62tf" Oct 08 14:44:22 crc kubenswrapper[4624]: I1008 14:44:22.230473 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggz5h\" (UniqueName: \"kubernetes.io/projected/5e779391-b6f0-47f2-b670-0267f571c9ff-kube-api-access-ggz5h\") pod \"nova-cell1-conductor-db-sync-t62tf\" (UID: \"5e779391-b6f0-47f2-b670-0267f571c9ff\") " pod="openstack/nova-cell1-conductor-db-sync-t62tf" Oct 08 14:44:22 crc kubenswrapper[4624]: I1008 14:44:22.236198 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e779391-b6f0-47f2-b670-0267f571c9ff-config-data\") pod \"nova-cell1-conductor-db-sync-t62tf\" (UID: \"5e779391-b6f0-47f2-b670-0267f571c9ff\") " pod="openstack/nova-cell1-conductor-db-sync-t62tf" Oct 08 14:44:22 crc kubenswrapper[4624]: I1008 14:44:22.236574 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e779391-b6f0-47f2-b670-0267f571c9ff-scripts\") pod \"nova-cell1-conductor-db-sync-t62tf\" (UID: \"5e779391-b6f0-47f2-b670-0267f571c9ff\") " pod="openstack/nova-cell1-conductor-db-sync-t62tf" Oct 08 14:44:22 crc kubenswrapper[4624]: I1008 14:44:22.237411 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e779391-b6f0-47f2-b670-0267f571c9ff-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t62tf\" (UID: \"5e779391-b6f0-47f2-b670-0267f571c9ff\") " pod="openstack/nova-cell1-conductor-db-sync-t62tf" Oct 08 14:44:22 crc kubenswrapper[4624]: I1008 14:44:22.265373 4624 util.go:30] "No sandbox for pod can be found. 
Oct 08 14:44:22 crc kubenswrapper[4624]: I1008 14:44:22.265373 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t62tf"
Oct 08 14:44:22 crc kubenswrapper[4624]: I1008 14:44:22.982214 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t62tf"]
Oct 08 14:44:23 crc kubenswrapper[4624]: I1008 14:44:23.013736 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b7bb65955-qb2vf" event={"ID":"300a4df0-470a-4677-ba91-c9676e430849","Type":"ContainerStarted","Data":"e68036154b1488ba6487d9deb2209f8c5693b4a6080fcccfb37c64b5ac8a408c"}
Oct 08 14:44:23 crc kubenswrapper[4624]: I1008 14:44:23.013834 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b7bb65955-qb2vf"
Oct 08 14:44:23 crc kubenswrapper[4624]: I1008 14:44:23.053277 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b7bb65955-qb2vf" podStartSLOduration=4.053260098 podStartE2EDuration="4.053260098s" podCreationTimestamp="2025-10-08 14:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:44:23.036705131 +0000 UTC m=+1288.187640208" watchObservedRunningTime="2025-10-08 14:44:23.053260098 +0000 UTC m=+1288.204195175"
Oct 08 14:44:24 crc kubenswrapper[4624]: I1008 14:44:24.020593 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t62tf" event={"ID":"5e779391-b6f0-47f2-b670-0267f571c9ff","Type":"ContainerStarted","Data":"8f6ad6979918e11ca1e86ecaa882272e4f3f46d238baa7f7527d905bbe21c2fb"}
Oct 08 14:44:24 crc kubenswrapper[4624]: I1008 14:44:24.215904 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Oct 08 14:44:24 crc kubenswrapper[4624]: I1008 14:44:24.272217 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Oct 08 14:44:27 crc kubenswrapper[4624]: I1008 14:44:27.070908 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45712564-9cf2-4eb6-ad8f-96732e8823a4","Type":"ContainerStarted","Data":"0f1c4d28c0bf3b83fa0b46f7f08582d92bd4b49d18f9016df2b601fd6e56d856"}
Oct 08 14:44:27 crc kubenswrapper[4624]: I1008 14:44:27.071237 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45712564-9cf2-4eb6-ad8f-96732e8823a4","Type":"ContainerStarted","Data":"7918a58a91ddde7ea36488e755cc3e8973664c8c63ef024f04f93535a6d2c1db"}
Oct 08 14:44:27 crc kubenswrapper[4624]: I1008 14:44:27.073436 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t62tf" event={"ID":"5e779391-b6f0-47f2-b670-0267f571c9ff","Type":"ContainerStarted","Data":"b4a70bc451a76630b0c6fe0461307069449dc7dcd8f6128bb1b83fc8dfa501b7"}
Oct 08 14:44:27 crc kubenswrapper[4624]: I1008 14:44:27.078748 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1a467f43-58ee-491c-8f01-fcef84e2e3ae","Type":"ContainerStarted","Data":"a5a6b9f9f632948acfbd0b6413783628e5a3d02be1de2c45fb639e394400807f"}
Oct 08 14:44:27 crc kubenswrapper[4624]: I1008 14:44:27.082205 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb4b4833-604d-4011-a7b6-668f7116028b","Type":"ContainerStarted","Data":"6d2f4c7d2b8a32f9f1833ccfbdb7a4303ff22a0b460ddd02f96b52971474f918"}
(PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb4b4833-604d-4011-a7b6-668f7116028b","Type":"ContainerStarted","Data":"3943b4f502bc853825d22458db108b1df4c28ac6a45bfc875922595e263d248b"} Oct 08 14:44:27 crc kubenswrapper[4624]: I1008 14:44:27.082338 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bb4b4833-604d-4011-a7b6-668f7116028b" containerName="nova-metadata-log" containerID="cri-o://3943b4f502bc853825d22458db108b1df4c28ac6a45bfc875922595e263d248b" gracePeriod=30 Oct 08 14:44:27 crc kubenswrapper[4624]: I1008 14:44:27.082620 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bb4b4833-604d-4011-a7b6-668f7116028b" containerName="nova-metadata-metadata" containerID="cri-o://6d2f4c7d2b8a32f9f1833ccfbdb7a4303ff22a0b460ddd02f96b52971474f918" gracePeriod=30 Oct 08 14:44:27 crc kubenswrapper[4624]: I1008 14:44:27.090058 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5217c8fb-0a56-4f45-8c0e-5465edc9e9ab","Type":"ContainerStarted","Data":"7d67088a8e70e39025ec2fcd5781ed070057561f93ba99d763c917c19f199f39"} Oct 08 14:44:27 crc kubenswrapper[4624]: I1008 14:44:27.090195 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="5217c8fb-0a56-4f45-8c0e-5465edc9e9ab" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://7d67088a8e70e39025ec2fcd5781ed070057561f93ba99d763c917c19f199f39" gracePeriod=30 Oct 08 14:44:27 crc kubenswrapper[4624]: I1008 14:44:27.122883 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.655779564 podStartE2EDuration="8.122859805s" podCreationTimestamp="2025-10-08 14:44:19 +0000 UTC" firstStartedPulling="2025-10-08 14:44:20.408458184 +0000 UTC m=+1285.559393261" lastFinishedPulling="2025-10-08 14:44:25.875538425 +0000 UTC m=+1291.026473502" observedRunningTime="2025-10-08 14:44:27.118971677 +0000 UTC m=+1292.269906764" watchObservedRunningTime="2025-10-08 14:44:27.122859805 +0000 UTC m=+1292.273794902" Oct 08 14:44:27 crc kubenswrapper[4624]: I1008 14:44:27.123723 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.108012693 podStartE2EDuration="8.123713927s" podCreationTimestamp="2025-10-08 14:44:19 +0000 UTC" firstStartedPulling="2025-10-08 14:44:20.858963869 +0000 UTC m=+1286.009898946" lastFinishedPulling="2025-10-08 14:44:25.874665103 +0000 UTC m=+1291.025600180" observedRunningTime="2025-10-08 14:44:27.099966968 +0000 UTC m=+1292.250902045" watchObservedRunningTime="2025-10-08 14:44:27.123713927 +0000 UTC m=+1292.274648994" Oct 08 14:44:27 crc kubenswrapper[4624]: I1008 14:44:27.145568 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-t62tf" podStartSLOduration=6.145548667 podStartE2EDuration="6.145548667s" podCreationTimestamp="2025-10-08 14:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:44:27.135992546 +0000 UTC m=+1292.286927623" watchObservedRunningTime="2025-10-08 14:44:27.145548667 +0000 UTC m=+1292.296483744" Oct 08 14:44:27 crc kubenswrapper[4624]: I1008 14:44:27.167507 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-scheduler-0" podStartSLOduration=3.168724843 podStartE2EDuration="8.16749127s" podCreationTimestamp="2025-10-08 14:44:19 +0000 UTC" firstStartedPulling="2025-10-08 14:44:20.875851965 +0000 UTC m=+1286.026787042" lastFinishedPulling="2025-10-08 14:44:25.874618392 +0000 UTC m=+1291.025553469" observedRunningTime="2025-10-08 14:44:27.163957671 +0000 UTC m=+1292.314892748" watchObservedRunningTime="2025-10-08 14:44:27.16749127 +0000 UTC m=+1292.318426347" Oct 08 14:44:27 crc kubenswrapper[4624]: I1008 14:44:27.195862 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.45068943 podStartE2EDuration="8.195840215s" podCreationTimestamp="2025-10-08 14:44:19 +0000 UTC" firstStartedPulling="2025-10-08 14:44:21.130572805 +0000 UTC m=+1286.281507882" lastFinishedPulling="2025-10-08 14:44:25.87572359 +0000 UTC m=+1291.026658667" observedRunningTime="2025-10-08 14:44:27.194832539 +0000 UTC m=+1292.345767616" watchObservedRunningTime="2025-10-08 14:44:27.195840215 +0000 UTC m=+1292.346775292" Oct 08 14:44:28 crc kubenswrapper[4624]: I1008 14:44:28.107256 4624 generic.go:334] "Generic (PLEG): container finished" podID="bb4b4833-604d-4011-a7b6-668f7116028b" containerID="3943b4f502bc853825d22458db108b1df4c28ac6a45bfc875922595e263d248b" exitCode=143 Oct 08 14:44:28 crc kubenswrapper[4624]: I1008 14:44:28.107396 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb4b4833-604d-4011-a7b6-668f7116028b","Type":"ContainerDied","Data":"3943b4f502bc853825d22458db108b1df4c28ac6a45bfc875922595e263d248b"} Oct 08 14:44:28 crc kubenswrapper[4624]: I1008 14:44:28.128040 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:44:28 crc kubenswrapper[4624]: I1008 14:44:28.263894 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.164378 4624 generic.go:334] "Generic (PLEG): container finished" podID="bbc13ba3-3410-4f96-be55-0424e8e74e2c" containerID="b358a498261f5b79dc5d98dd1785ca7e90933b57d8f0e195276f5f3f1867a56c" exitCode=0 Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.164418 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc13ba3-3410-4f96-be55-0424e8e74e2c","Type":"ContainerDied","Data":"b358a498261f5b79dc5d98dd1785ca7e90933b57d8f0e195276f5f3f1867a56c"} Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.545144 4624 util.go:48] "No ready sandbox for pod can be found. 
Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.545144 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.582737 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.582805 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.648600 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-config-data\") pod \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") "
Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.648695 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-scripts\") pod \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") "
Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.648788 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg89s\" (UniqueName: \"kubernetes.io/projected/bbc13ba3-3410-4f96-be55-0424e8e74e2c-kube-api-access-vg89s\") pod \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") "
Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.648824 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-sg-core-conf-yaml\") pod \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") "
Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.648897 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-combined-ca-bundle\") pod \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") "
Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.648925 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc13ba3-3410-4f96-be55-0424e8e74e2c-run-httpd\") pod \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") "
Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.649060 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc13ba3-3410-4f96-be55-0424e8e74e2c-log-httpd\") pod \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") "
Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.649160 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-ceilometer-tls-certs\") pod \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\" (UID: \"bbc13ba3-3410-4f96-be55-0424e8e74e2c\") "
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.650188 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbc13ba3-3410-4f96-be55-0424e8e74e2c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bbc13ba3-3410-4f96-be55-0424e8e74e2c" (UID: "bbc13ba3-3410-4f96-be55-0424e8e74e2c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.653921 4624 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc13ba3-3410-4f96-be55-0424e8e74e2c-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.654875 4624 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbc13ba3-3410-4f96-be55-0424e8e74e2c-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.666872 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbc13ba3-3410-4f96-be55-0424e8e74e2c-kube-api-access-vg89s" (OuterVolumeSpecName: "kube-api-access-vg89s") pod "bbc13ba3-3410-4f96-be55-0424e8e74e2c" (UID: "bbc13ba3-3410-4f96-be55-0424e8e74e2c"). InnerVolumeSpecName "kube-api-access-vg89s". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.688816 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-scripts" (OuterVolumeSpecName: "scripts") pod "bbc13ba3-3410-4f96-be55-0424e8e74e2c" (UID: "bbc13ba3-3410-4f96-be55-0424e8e74e2c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.709196 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.709237 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.728983 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.730055 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.756029 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vg89s\" (UniqueName: \"kubernetes.io/projected/bbc13ba3-3410-4f96-be55-0424e8e74e2c-kube-api-access-vg89s\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.756069 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.768853 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "bbc13ba3-3410-4f96-be55-0424e8e74e2c" (UID: "bbc13ba3-3410-4f96-be55-0424e8e74e2c"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.784954 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.790786 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bbc13ba3-3410-4f96-be55-0424e8e74e2c" (UID: "bbc13ba3-3410-4f96-be55-0424e8e74e2c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.858098 4624 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.858339 4624 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.871791 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbc13ba3-3410-4f96-be55-0424e8e74e2c" (UID: "bbc13ba3-3410-4f96-be55-0424e8e74e2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.871829 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.875806 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-config-data" (OuterVolumeSpecName: "config-data") pod "bbc13ba3-3410-4f96-be55-0424e8e74e2c" (UID: "bbc13ba3-3410-4f96-be55-0424e8e74e2c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.960416 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:29 crc kubenswrapper[4624]: I1008 14:44:29.960812 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc13ba3-3410-4f96-be55-0424e8e74e2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.008821 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b7bb65955-qb2vf" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.131570 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-545dc78c-24gzk"] Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.132060 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-545dc78c-24gzk" podUID="47b6d75e-5f52-46b7-ad81-0efc8ae08807" containerName="dnsmasq-dns" containerID="cri-o://808dc982a8094fb2eaf3059e7f6204fb3f2d70021c7a1d6e341e12fa6d10d77c" gracePeriod=10 Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.195539 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbc13ba3-3410-4f96-be55-0424e8e74e2c","Type":"ContainerDied","Data":"5a40941d04525fce2c7350bf1b635d6110f13ae4260b8e987e3b163aece80899"} Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.195602 4624 scope.go:117] "RemoveContainer" containerID="282237c63eebff1d3e2db7b8eacbd58af6c82f4f290fa612f2f6e3a40a70de5d" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.196242 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.286414 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.313234 4624 scope.go:117] "RemoveContainer" containerID="eff2510520bb95847384264d2f520cf7372a12c7a0de764a7508a6ccdffc09b5" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.392841 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.399402 4624 scope.go:117] "RemoveContainer" containerID="24f6fb667eb68c82e78b3977f15a548cae94696fabe85307b40b8a42b3bc08de" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.428734 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.458171 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:44:30 crc kubenswrapper[4624]: E1008 14:44:30.458627 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc13ba3-3410-4f96-be55-0424e8e74e2c" containerName="sg-core" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.458682 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbc13ba3-3410-4f96-be55-0424e8e74e2c" containerName="sg-core" Oct 08 14:44:30 crc kubenswrapper[4624]: E1008 14:44:30.458721 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc13ba3-3410-4f96-be55-0424e8e74e2c" containerName="ceilometer-notification-agent" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.458728 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbc13ba3-3410-4f96-be55-0424e8e74e2c" containerName="ceilometer-notification-agent" Oct 08 14:44:30 crc kubenswrapper[4624]: E1008 14:44:30.458747 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc13ba3-3410-4f96-be55-0424e8e74e2c" containerName="proxy-httpd" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.458752 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbc13ba3-3410-4f96-be55-0424e8e74e2c" containerName="proxy-httpd" Oct 08 14:44:30 crc kubenswrapper[4624]: E1008 14:44:30.458767 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc13ba3-3410-4f96-be55-0424e8e74e2c" containerName="ceilometer-central-agent" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.458773 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbc13ba3-3410-4f96-be55-0424e8e74e2c" containerName="ceilometer-central-agent" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.458979 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbc13ba3-3410-4f96-be55-0424e8e74e2c" containerName="ceilometer-central-agent" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.458994 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbc13ba3-3410-4f96-be55-0424e8e74e2c" containerName="sg-core" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.459006 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbc13ba3-3410-4f96-be55-0424e8e74e2c" containerName="proxy-httpd" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.459016 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbc13ba3-3410-4f96-be55-0424e8e74e2c" containerName="ceilometer-notification-agent" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.460818 4624 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.469099 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.469898 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.474943 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.489678 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.516868 4624 scope.go:117] "RemoveContainer" containerID="b358a498261f5b79dc5d98dd1785ca7e90933b57d8f0e195276f5f3f1867a56c" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.586579 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mpnj\" (UniqueName: \"kubernetes.io/projected/9484ff6c-2ae2-437a-825c-dc4253039982-kube-api-access-4mpnj\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.586976 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-scripts\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.587007 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.587090 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.587129 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9484ff6c-2ae2-437a-825c-dc4253039982-log-httpd\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.587169 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0" Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.587219 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9484ff6c-2ae2-437a-825c-dc4253039982-run-httpd\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0" Oct 08 14:44:30 crc 
Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.691797 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mpnj\" (UniqueName: \"kubernetes.io/projected/9484ff6c-2ae2-437a-825c-dc4253039982-kube-api-access-4mpnj\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0"
Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.691844 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-scripts\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0"
Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.691864 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0"
Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.691891 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0"
Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.691913 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9484ff6c-2ae2-437a-825c-dc4253039982-log-httpd\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0"
Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.691941 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0"
Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.691966 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9484ff6c-2ae2-437a-825c-dc4253039982-run-httpd\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0"
Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.692002 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-config-data\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0"
Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.693040 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9484ff6c-2ae2-437a-825c-dc4253039982-log-httpd\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0"
Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.693353 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9484ff6c-2ae2-437a-825c-dc4253039982-run-httpd\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0"
Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.706908 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-config-data\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0"
Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.726289 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-scripts\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0"
Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.736737 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0"
Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.740263 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0"
Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.761295 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0"
Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.790572 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mpnj\" (UniqueName: \"kubernetes.io/projected/9484ff6c-2ae2-437a-825c-dc4253039982-kube-api-access-4mpnj\") pod \"ceilometer-0\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " pod="openstack/ceilometer-0"
Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.796063 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="45712564-9cf2-4eb6-ad8f-96732e8823a4" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.796818 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="45712564-9cf2-4eb6-ad8f-96732e8823a4" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Oct 08 14:44:30 crc kubenswrapper[4624]: I1008 14:44:30.815736 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.063477 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545dc78c-24gzk"
Need to start a new one" pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.211823 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqt2w\" (UniqueName: \"kubernetes.io/projected/47b6d75e-5f52-46b7-ad81-0efc8ae08807-kube-api-access-fqt2w\") pod \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.211951 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-config\") pod \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.212019 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-ovsdbserver-sb\") pod \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.212062 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-dns-swift-storage-0\") pod \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.212103 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-dns-svc\") pod \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.212145 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-ovsdbserver-nb\") pod \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\" (UID: \"47b6d75e-5f52-46b7-ad81-0efc8ae08807\") " Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.235167 4624 generic.go:334] "Generic (PLEG): container finished" podID="47b6d75e-5f52-46b7-ad81-0efc8ae08807" containerID="808dc982a8094fb2eaf3059e7f6204fb3f2d70021c7a1d6e341e12fa6d10d77c" exitCode=0 Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.235253 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-545dc78c-24gzk" Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.235276 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545dc78c-24gzk" event={"ID":"47b6d75e-5f52-46b7-ad81-0efc8ae08807","Type":"ContainerDied","Data":"808dc982a8094fb2eaf3059e7f6204fb3f2d70021c7a1d6e341e12fa6d10d77c"} Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.235906 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545dc78c-24gzk" event={"ID":"47b6d75e-5f52-46b7-ad81-0efc8ae08807","Type":"ContainerDied","Data":"728ecab953dc8e1975eb909e2666e8409694453c0455c030f52dff45bf2cb060"} Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.235926 4624 scope.go:117] "RemoveContainer" containerID="808dc982a8094fb2eaf3059e7f6204fb3f2d70021c7a1d6e341e12fa6d10d77c" Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.252325 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47b6d75e-5f52-46b7-ad81-0efc8ae08807-kube-api-access-fqt2w" (OuterVolumeSpecName: "kube-api-access-fqt2w") pod "47b6d75e-5f52-46b7-ad81-0efc8ae08807" (UID: "47b6d75e-5f52-46b7-ad81-0efc8ae08807"). InnerVolumeSpecName "kube-api-access-fqt2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.296952 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "47b6d75e-5f52-46b7-ad81-0efc8ae08807" (UID: "47b6d75e-5f52-46b7-ad81-0efc8ae08807"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.320072 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.320101 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqt2w\" (UniqueName: \"kubernetes.io/projected/47b6d75e-5f52-46b7-ad81-0efc8ae08807-kube-api-access-fqt2w\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.333531 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "47b6d75e-5f52-46b7-ad81-0efc8ae08807" (UID: "47b6d75e-5f52-46b7-ad81-0efc8ae08807"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.339345 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "47b6d75e-5f52-46b7-ad81-0efc8ae08807" (UID: "47b6d75e-5f52-46b7-ad81-0efc8ae08807"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.368464 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-67f45f8444-g8bbs" Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.390936 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-config" (OuterVolumeSpecName: "config") pod "47b6d75e-5f52-46b7-ad81-0efc8ae08807" (UID: "47b6d75e-5f52-46b7-ad81-0efc8ae08807"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.394223 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "47b6d75e-5f52-46b7-ad81-0efc8ae08807" (UID: "47b6d75e-5f52-46b7-ad81-0efc8ae08807"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.413814 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.421623 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.421669 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.421681 4624 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.421692 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47b6d75e-5f52-46b7-ad81-0efc8ae08807-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.501535 4624 scope.go:117] "RemoveContainer" containerID="3770c773277619224ad2fb196cebc40641a6445d859832e277661ddd7c03b540" Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.513627 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbc13ba3-3410-4f96-be55-0424e8e74e2c" path="/var/lib/kubelet/pods/bbc13ba3-3410-4f96-be55-0424e8e74e2c/volumes" Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.514856 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f6cd65c74-7vqb5"] Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.543373 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.573418 4624 scope.go:117] "RemoveContainer" containerID="808dc982a8094fb2eaf3059e7f6204fb3f2d70021c7a1d6e341e12fa6d10d77c" Oct 08 14:44:31 crc kubenswrapper[4624]: E1008 14:44:31.577891 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"808dc982a8094fb2eaf3059e7f6204fb3f2d70021c7a1d6e341e12fa6d10d77c\": container with ID starting with 
Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.577946 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"808dc982a8094fb2eaf3059e7f6204fb3f2d70021c7a1d6e341e12fa6d10d77c"} err="failed to get container status \"808dc982a8094fb2eaf3059e7f6204fb3f2d70021c7a1d6e341e12fa6d10d77c\": rpc error: code = NotFound desc = could not find container \"808dc982a8094fb2eaf3059e7f6204fb3f2d70021c7a1d6e341e12fa6d10d77c\": container with ID starting with 808dc982a8094fb2eaf3059e7f6204fb3f2d70021c7a1d6e341e12fa6d10d77c not found: ID does not exist"
Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.577982 4624 scope.go:117] "RemoveContainer" containerID="3770c773277619224ad2fb196cebc40641a6445d859832e277661ddd7c03b540"
Oct 08 14:44:31 crc kubenswrapper[4624]: E1008 14:44:31.579146 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3770c773277619224ad2fb196cebc40641a6445d859832e277661ddd7c03b540\": container with ID starting with 3770c773277619224ad2fb196cebc40641a6445d859832e277661ddd7c03b540 not found: ID does not exist" containerID="3770c773277619224ad2fb196cebc40641a6445d859832e277661ddd7c03b540"
Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.579211 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3770c773277619224ad2fb196cebc40641a6445d859832e277661ddd7c03b540"} err="failed to get container status \"3770c773277619224ad2fb196cebc40641a6445d859832e277661ddd7c03b540\": rpc error: code = NotFound desc = could not find container \"3770c773277619224ad2fb196cebc40641a6445d859832e277661ddd7c03b540\": container with ID starting with 3770c773277619224ad2fb196cebc40641a6445d859832e277661ddd7c03b540 not found: ID does not exist"
Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.634524 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-545dc78c-24gzk"]
Oct 08 14:44:31 crc kubenswrapper[4624]: I1008 14:44:31.645718 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-545dc78c-24gzk"]
Oct 08 14:44:32 crc kubenswrapper[4624]: I1008 14:44:32.280921 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9484ff6c-2ae2-437a-825c-dc4253039982","Type":"ContainerStarted","Data":"a0b94ce9d0a50bd01558ab4183b028d4dffb86507f4c9933c73c9fa22e6f4011"}
Oct 08 14:44:32 crc kubenswrapper[4624]: I1008 14:44:32.282158 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9484ff6c-2ae2-437a-825c-dc4253039982","Type":"ContainerStarted","Data":"4e8b7cddc68ee3fca2d7d0e7e51bd54d7f54a0d99ad318639514d9bff53eb263"}
Oct 08 14:44:32 crc kubenswrapper[4624]: I1008 14:44:32.282260 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9484ff6c-2ae2-437a-825c-dc4253039982","Type":"ContainerStarted","Data":"14d4a0efbc96a958ac23d415afd01c9b7eea35e2d945a4d82fcb215b335d8ccb"}
Oct 08 14:44:32 crc kubenswrapper[4624]: I1008 14:44:32.284314 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f6cd65c74-7vqb5" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon-log" containerID="cri-o://da719f698a783f03a9aeca3f8634dbc83f874d6792863ed431947ce50e36881e" gracePeriod=30
containerID="cri-o://da719f698a783f03a9aeca3f8634dbc83f874d6792863ed431947ce50e36881e" gracePeriod=30 Oct 08 14:44:32 crc kubenswrapper[4624]: I1008 14:44:32.284469 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f6cd65c74-7vqb5" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" containerID="cri-o://d14385dae5a640de6514fa96bdea43d5442497114f44ba80cb5f671931df8294" gracePeriod=30 Oct 08 14:44:33 crc kubenswrapper[4624]: I1008 14:44:33.296721 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9484ff6c-2ae2-437a-825c-dc4253039982","Type":"ContainerStarted","Data":"2495fd34b0213170b89e241d486d40e77ae669909a8993ffbdcff6b5e06cd0af"} Oct 08 14:44:33 crc kubenswrapper[4624]: I1008 14:44:33.477473 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47b6d75e-5f52-46b7-ad81-0efc8ae08807" path="/var/lib/kubelet/pods/47b6d75e-5f52-46b7-ad81-0efc8ae08807/volumes" Oct 08 14:44:34 crc kubenswrapper[4624]: I1008 14:44:34.311488 4624 generic.go:334] "Generic (PLEG): container finished" podID="ae634848-a6be-4622-96b1-36c4f0427893" containerID="f2328f4147a97e2e03d95b2e678a127ef5b569676927a37b5563be8b91393291" exitCode=0 Oct 08 14:44:34 crc kubenswrapper[4624]: I1008 14:44:34.311675 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8rj4w" event={"ID":"ae634848-a6be-4622-96b1-36c4f0427893","Type":"ContainerDied","Data":"f2328f4147a97e2e03d95b2e678a127ef5b569676927a37b5563be8b91393291"} Oct 08 14:44:35 crc kubenswrapper[4624]: I1008 14:44:35.327144 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9484ff6c-2ae2-437a-825c-dc4253039982","Type":"ContainerStarted","Data":"3f4971fb053c838a1ca055851ee20abb1b80dd17076e1ebfb366fa4d21473cf6"} Oct 08 14:44:35 crc kubenswrapper[4624]: I1008 14:44:35.327608 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 08 14:44:35 crc kubenswrapper[4624]: I1008 14:44:35.362442 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.98817736 podStartE2EDuration="5.36242225s" podCreationTimestamp="2025-10-08 14:44:30 +0000 UTC" firstStartedPulling="2025-10-08 14:44:31.594983649 +0000 UTC m=+1296.745918726" lastFinishedPulling="2025-10-08 14:44:34.969228529 +0000 UTC m=+1300.120163616" observedRunningTime="2025-10-08 14:44:35.358196424 +0000 UTC m=+1300.509131501" watchObservedRunningTime="2025-10-08 14:44:35.36242225 +0000 UTC m=+1300.513357327" Oct 08 14:44:35 crc kubenswrapper[4624]: I1008 14:44:35.452324 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6f6cd65c74-7vqb5" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:44666->10.217.0.149:8443: read: connection reset by peer" Oct 08 14:44:35 crc kubenswrapper[4624]: I1008 14:44:35.871898 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8rj4w" Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.033346 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae634848-a6be-4622-96b1-36c4f0427893-config-data\") pod \"ae634848-a6be-4622-96b1-36c4f0427893\" (UID: \"ae634848-a6be-4622-96b1-36c4f0427893\") " Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.033537 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsfl9\" (UniqueName: \"kubernetes.io/projected/ae634848-a6be-4622-96b1-36c4f0427893-kube-api-access-hsfl9\") pod \"ae634848-a6be-4622-96b1-36c4f0427893\" (UID: \"ae634848-a6be-4622-96b1-36c4f0427893\") " Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.033562 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae634848-a6be-4622-96b1-36c4f0427893-scripts\") pod \"ae634848-a6be-4622-96b1-36c4f0427893\" (UID: \"ae634848-a6be-4622-96b1-36c4f0427893\") " Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.033686 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae634848-a6be-4622-96b1-36c4f0427893-combined-ca-bundle\") pod \"ae634848-a6be-4622-96b1-36c4f0427893\" (UID: \"ae634848-a6be-4622-96b1-36c4f0427893\") " Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.043823 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae634848-a6be-4622-96b1-36c4f0427893-scripts" (OuterVolumeSpecName: "scripts") pod "ae634848-a6be-4622-96b1-36c4f0427893" (UID: "ae634848-a6be-4622-96b1-36c4f0427893"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.043852 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae634848-a6be-4622-96b1-36c4f0427893-kube-api-access-hsfl9" (OuterVolumeSpecName: "kube-api-access-hsfl9") pod "ae634848-a6be-4622-96b1-36c4f0427893" (UID: "ae634848-a6be-4622-96b1-36c4f0427893"). InnerVolumeSpecName "kube-api-access-hsfl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.067343 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae634848-a6be-4622-96b1-36c4f0427893-config-data" (OuterVolumeSpecName: "config-data") pod "ae634848-a6be-4622-96b1-36c4f0427893" (UID: "ae634848-a6be-4622-96b1-36c4f0427893"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.082424 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae634848-a6be-4622-96b1-36c4f0427893-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae634848-a6be-4622-96b1-36c4f0427893" (UID: "ae634848-a6be-4622-96b1-36c4f0427893"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.135460 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsfl9\" (UniqueName: \"kubernetes.io/projected/ae634848-a6be-4622-96b1-36c4f0427893-kube-api-access-hsfl9\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.135839 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae634848-a6be-4622-96b1-36c4f0427893-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.135849 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae634848-a6be-4622-96b1-36c4f0427893-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.135859 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae634848-a6be-4622-96b1-36c4f0427893-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.337612 4624 generic.go:334] "Generic (PLEG): container finished" podID="4b203558-1aea-4672-871f-d2dca324a585" containerID="d14385dae5a640de6514fa96bdea43d5442497114f44ba80cb5f671931df8294" exitCode=0 Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.337686 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6cd65c74-7vqb5" event={"ID":"4b203558-1aea-4672-871f-d2dca324a585","Type":"ContainerDied","Data":"d14385dae5a640de6514fa96bdea43d5442497114f44ba80cb5f671931df8294"} Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.337727 4624 scope.go:117] "RemoveContainer" containerID="2adb37d7e1e3ae4257d4557e08b6dab44a21981eeeec3209e1f7922febff5e54" Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.344357 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8rj4w" Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.349774 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8rj4w" event={"ID":"ae634848-a6be-4622-96b1-36c4f0427893","Type":"ContainerDied","Data":"3a32808aff38ad3e8eea6b1a0ba332d8c67c80e31c43dd7deec119093afe6f86"} Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.349811 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a32808aff38ad3e8eea6b1a0ba332d8c67c80e31c43dd7deec119093afe6f86" Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.519251 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.519909 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="45712564-9cf2-4eb6-ad8f-96732e8823a4" containerName="nova-api-log" containerID="cri-o://7918a58a91ddde7ea36488e755cc3e8973664c8c63ef024f04f93535a6d2c1db" gracePeriod=30 Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.520512 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="45712564-9cf2-4eb6-ad8f-96732e8823a4" containerName="nova-api-api" containerID="cri-o://0f1c4d28c0bf3b83fa0b46f7f08582d92bd4b49d18f9016df2b601fd6e56d856" gracePeriod=30 Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.565777 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 14:44:36 crc kubenswrapper[4624]: I1008 14:44:36.566003 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1a467f43-58ee-491c-8f01-fcef84e2e3ae" containerName="nova-scheduler-scheduler" containerID="cri-o://a5a6b9f9f632948acfbd0b6413783628e5a3d02be1de2c45fb639e394400807f" gracePeriod=30 Oct 08 14:44:37 crc kubenswrapper[4624]: I1008 14:44:37.353940 4624 generic.go:334] "Generic (PLEG): container finished" podID="45712564-9cf2-4eb6-ad8f-96732e8823a4" containerID="7918a58a91ddde7ea36488e755cc3e8973664c8c63ef024f04f93535a6d2c1db" exitCode=143 Oct 08 14:44:37 crc kubenswrapper[4624]: I1008 14:44:37.353995 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45712564-9cf2-4eb6-ad8f-96732e8823a4","Type":"ContainerDied","Data":"7918a58a91ddde7ea36488e755cc3e8973664c8c63ef024f04f93535a6d2c1db"} Oct 08 14:44:38 crc kubenswrapper[4624]: E1008 14:44:38.097296 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Oct 08 14:44:39 crc kubenswrapper[4624]: I1008 14:44:39.375674 4624 generic.go:334] "Generic (PLEG): container finished" podID="1a467f43-58ee-491c-8f01-fcef84e2e3ae" containerID="a5a6b9f9f632948acfbd0b6413783628e5a3d02be1de2c45fb639e394400807f" exitCode=0 Oct 08 14:44:39 crc kubenswrapper[4624]: I1008 14:44:39.375855 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1a467f43-58ee-491c-8f01-fcef84e2e3ae","Type":"ContainerDied","Data":"a5a6b9f9f632948acfbd0b6413783628e5a3d02be1de2c45fb639e394400807f"} Oct 08 14:44:39 crc kubenswrapper[4624]: E1008 14:44:39.726676 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
Oct 08 14:44:39 crc kubenswrapper[4624]: E1008 14:44:39.727160 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a5a6b9f9f632948acfbd0b6413783628e5a3d02be1de2c45fb639e394400807f is running failed: container process not found" containerID="a5a6b9f9f632948acfbd0b6413783628e5a3d02be1de2c45fb639e394400807f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Oct 08 14:44:39 crc kubenswrapper[4624]: E1008 14:44:39.727603 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a5a6b9f9f632948acfbd0b6413783628e5a3d02be1de2c45fb639e394400807f is running failed: container process not found" containerID="a5a6b9f9f632948acfbd0b6413783628e5a3d02be1de2c45fb639e394400807f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Oct 08 14:44:39 crc kubenswrapper[4624]: E1008 14:44:39.727666 4624 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a5a6b9f9f632948acfbd0b6413783628e5a3d02be1de2c45fb639e394400807f is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1a467f43-58ee-491c-8f01-fcef84e2e3ae" containerName="nova-scheduler-scheduler"
Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.149283 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.217072 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a467f43-58ee-491c-8f01-fcef84e2e3ae-config-data\") pod \"1a467f43-58ee-491c-8f01-fcef84e2e3ae\" (UID: \"1a467f43-58ee-491c-8f01-fcef84e2e3ae\") "
Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.217159 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7w9d\" (UniqueName: \"kubernetes.io/projected/1a467f43-58ee-491c-8f01-fcef84e2e3ae-kube-api-access-f7w9d\") pod \"1a467f43-58ee-491c-8f01-fcef84e2e3ae\" (UID: \"1a467f43-58ee-491c-8f01-fcef84e2e3ae\") "
Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.217332 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a467f43-58ee-491c-8f01-fcef84e2e3ae-combined-ca-bundle\") pod \"1a467f43-58ee-491c-8f01-fcef84e2e3ae\" (UID: \"1a467f43-58ee-491c-8f01-fcef84e2e3ae\") "
Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.239881 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a467f43-58ee-491c-8f01-fcef84e2e3ae-kube-api-access-f7w9d" (OuterVolumeSpecName: "kube-api-access-f7w9d") pod "1a467f43-58ee-491c-8f01-fcef84e2e3ae" (UID: "1a467f43-58ee-491c-8f01-fcef84e2e3ae"). InnerVolumeSpecName "kube-api-access-f7w9d". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.260868 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a467f43-58ee-491c-8f01-fcef84e2e3ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a467f43-58ee-491c-8f01-fcef84e2e3ae" (UID: "1a467f43-58ee-491c-8f01-fcef84e2e3ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.271353 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a467f43-58ee-491c-8f01-fcef84e2e3ae-config-data" (OuterVolumeSpecName: "config-data") pod "1a467f43-58ee-491c-8f01-fcef84e2e3ae" (UID: "1a467f43-58ee-491c-8f01-fcef84e2e3ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.320426 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a467f43-58ee-491c-8f01-fcef84e2e3ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.320459 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a467f43-58ee-491c-8f01-fcef84e2e3ae-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.320469 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7w9d\" (UniqueName: \"kubernetes.io/projected/1a467f43-58ee-491c-8f01-fcef84e2e3ae-kube-api-access-f7w9d\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.388656 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.388660 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1a467f43-58ee-491c-8f01-fcef84e2e3ae","Type":"ContainerDied","Data":"42919b81178245a15d7a61fc4594a0047d55d117773c0c1db1291d3d5df1428c"} Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.388794 4624 scope.go:117] "RemoveContainer" containerID="a5a6b9f9f632948acfbd0b6413783628e5a3d02be1de2c45fb639e394400807f" Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.391919 4624 generic.go:334] "Generic (PLEG): container finished" podID="45712564-9cf2-4eb6-ad8f-96732e8823a4" containerID="0f1c4d28c0bf3b83fa0b46f7f08582d92bd4b49d18f9016df2b601fd6e56d856" exitCode=0 Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.392232 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45712564-9cf2-4eb6-ad8f-96732e8823a4","Type":"ContainerDied","Data":"0f1c4d28c0bf3b83fa0b46f7f08582d92bd4b49d18f9016df2b601fd6e56d856"} Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.443775 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.453765 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.473856 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 14:44:40 crc kubenswrapper[4624]: E1008 14:44:40.476551 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b6d75e-5f52-46b7-ad81-0efc8ae08807" containerName="init" Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.476580 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b6d75e-5f52-46b7-ad81-0efc8ae08807" containerName="init" Oct 08 14:44:40 crc kubenswrapper[4624]: E1008 14:44:40.476627 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b6d75e-5f52-46b7-ad81-0efc8ae08807" containerName="dnsmasq-dns" Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.476654 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b6d75e-5f52-46b7-ad81-0efc8ae08807" containerName="dnsmasq-dns" Oct 08 14:44:40 crc kubenswrapper[4624]: E1008 14:44:40.476687 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a467f43-58ee-491c-8f01-fcef84e2e3ae" containerName="nova-scheduler-scheduler" Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.476694 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a467f43-58ee-491c-8f01-fcef84e2e3ae" containerName="nova-scheduler-scheduler" Oct 08 14:44:40 crc kubenswrapper[4624]: E1008 14:44:40.476718 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae634848-a6be-4622-96b1-36c4f0427893" containerName="nova-manage" Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.476735 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae634848-a6be-4622-96b1-36c4f0427893" containerName="nova-manage" Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.478488 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a467f43-58ee-491c-8f01-fcef84e2e3ae" containerName="nova-scheduler-scheduler" Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.478528 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae634848-a6be-4622-96b1-36c4f0427893" containerName="nova-manage" Oct 08 14:44:40 crc 
Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.487262 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.490823 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.502065 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.653492 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31486d82-7d6f-47c1-9cbd-7e7f9fc23638-config-data\") pod \"nova-scheduler-0\" (UID: \"31486d82-7d6f-47c1-9cbd-7e7f9fc23638\") " pod="openstack/nova-scheduler-0"
Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.653735 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31486d82-7d6f-47c1-9cbd-7e7f9fc23638-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"31486d82-7d6f-47c1-9cbd-7e7f9fc23638\") " pod="openstack/nova-scheduler-0"
Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.653869 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g8cv\" (UniqueName: \"kubernetes.io/projected/31486d82-7d6f-47c1-9cbd-7e7f9fc23638-kube-api-access-5g8cv\") pod \"nova-scheduler-0\" (UID: \"31486d82-7d6f-47c1-9cbd-7e7f9fc23638\") " pod="openstack/nova-scheduler-0"
Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.758000 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31486d82-7d6f-47c1-9cbd-7e7f9fc23638-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"31486d82-7d6f-47c1-9cbd-7e7f9fc23638\") " pod="openstack/nova-scheduler-0"
Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.758162 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g8cv\" (UniqueName: \"kubernetes.io/projected/31486d82-7d6f-47c1-9cbd-7e7f9fc23638-kube-api-access-5g8cv\") pod \"nova-scheduler-0\" (UID: \"31486d82-7d6f-47c1-9cbd-7e7f9fc23638\") " pod="openstack/nova-scheduler-0"
Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.758351 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31486d82-7d6f-47c1-9cbd-7e7f9fc23638-config-data\") pod \"nova-scheduler-0\" (UID: \"31486d82-7d6f-47c1-9cbd-7e7f9fc23638\") " pod="openstack/nova-scheduler-0"
Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.766375 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31486d82-7d6f-47c1-9cbd-7e7f9fc23638-config-data\") pod \"nova-scheduler-0\" (UID: \"31486d82-7d6f-47c1-9cbd-7e7f9fc23638\") " pod="openstack/nova-scheduler-0"
Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.767719 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31486d82-7d6f-47c1-9cbd-7e7f9fc23638-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"31486d82-7d6f-47c1-9cbd-7e7f9fc23638\") " pod="openstack/nova-scheduler-0"
\"nova-scheduler-0\" (UID: \"31486d82-7d6f-47c1-9cbd-7e7f9fc23638\") " pod="openstack/nova-scheduler-0" Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.781476 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g8cv\" (UniqueName: \"kubernetes.io/projected/31486d82-7d6f-47c1-9cbd-7e7f9fc23638-kube-api-access-5g8cv\") pod \"nova-scheduler-0\" (UID: \"31486d82-7d6f-47c1-9cbd-7e7f9fc23638\") " pod="openstack/nova-scheduler-0" Oct 08 14:44:40 crc kubenswrapper[4624]: I1008 14:44:40.819031 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.162535 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.276458 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvm85\" (UniqueName: \"kubernetes.io/projected/45712564-9cf2-4eb6-ad8f-96732e8823a4-kube-api-access-wvm85\") pod \"45712564-9cf2-4eb6-ad8f-96732e8823a4\" (UID: \"45712564-9cf2-4eb6-ad8f-96732e8823a4\") " Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.276532 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45712564-9cf2-4eb6-ad8f-96732e8823a4-config-data\") pod \"45712564-9cf2-4eb6-ad8f-96732e8823a4\" (UID: \"45712564-9cf2-4eb6-ad8f-96732e8823a4\") " Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.276764 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45712564-9cf2-4eb6-ad8f-96732e8823a4-logs\") pod \"45712564-9cf2-4eb6-ad8f-96732e8823a4\" (UID: \"45712564-9cf2-4eb6-ad8f-96732e8823a4\") " Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.276829 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45712564-9cf2-4eb6-ad8f-96732e8823a4-combined-ca-bundle\") pod \"45712564-9cf2-4eb6-ad8f-96732e8823a4\" (UID: \"45712564-9cf2-4eb6-ad8f-96732e8823a4\") " Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.278805 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45712564-9cf2-4eb6-ad8f-96732e8823a4-logs" (OuterVolumeSpecName: "logs") pod "45712564-9cf2-4eb6-ad8f-96732e8823a4" (UID: "45712564-9cf2-4eb6-ad8f-96732e8823a4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.279492 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45712564-9cf2-4eb6-ad8f-96732e8823a4-logs\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.284674 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45712564-9cf2-4eb6-ad8f-96732e8823a4-kube-api-access-wvm85" (OuterVolumeSpecName: "kube-api-access-wvm85") pod "45712564-9cf2-4eb6-ad8f-96732e8823a4" (UID: "45712564-9cf2-4eb6-ad8f-96732e8823a4"). InnerVolumeSpecName "kube-api-access-wvm85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.307549 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45712564-9cf2-4eb6-ad8f-96732e8823a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45712564-9cf2-4eb6-ad8f-96732e8823a4" (UID: "45712564-9cf2-4eb6-ad8f-96732e8823a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.317484 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45712564-9cf2-4eb6-ad8f-96732e8823a4-config-data" (OuterVolumeSpecName: "config-data") pod "45712564-9cf2-4eb6-ad8f-96732e8823a4" (UID: "45712564-9cf2-4eb6-ad8f-96732e8823a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.380971 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45712564-9cf2-4eb6-ad8f-96732e8823a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.381006 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvm85\" (UniqueName: \"kubernetes.io/projected/45712564-9cf2-4eb6-ad8f-96732e8823a4-kube-api-access-wvm85\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.381017 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45712564-9cf2-4eb6-ad8f-96732e8823a4-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.403839 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45712564-9cf2-4eb6-ad8f-96732e8823a4","Type":"ContainerDied","Data":"75645983e6a8b34195ebc78a1d2193ab1f8fa1ed0e1051e33d9e9db039ae6a65"} Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.403888 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.404215 4624 scope.go:117] "RemoveContainer" containerID="0f1c4d28c0bf3b83fa0b46f7f08582d92bd4b49d18f9016df2b601fd6e56d856" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.428516 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.448421 4624 scope.go:117] "RemoveContainer" containerID="7918a58a91ddde7ea36488e755cc3e8973664c8c63ef024f04f93535a6d2c1db" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.453587 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.517178 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a467f43-58ee-491c-8f01-fcef84e2e3ae" path="/var/lib/kubelet/pods/1a467f43-58ee-491c-8f01-fcef84e2e3ae/volumes" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.518041 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.519838 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 08 14:44:41 crc kubenswrapper[4624]: E1008 14:44:41.521321 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45712564-9cf2-4eb6-ad8f-96732e8823a4" containerName="nova-api-log" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.521350 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="45712564-9cf2-4eb6-ad8f-96732e8823a4" containerName="nova-api-log" Oct 08 14:44:41 crc kubenswrapper[4624]: E1008 14:44:41.521386 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45712564-9cf2-4eb6-ad8f-96732e8823a4" containerName="nova-api-api" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.521395 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="45712564-9cf2-4eb6-ad8f-96732e8823a4" containerName="nova-api-api" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.522574 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="45712564-9cf2-4eb6-ad8f-96732e8823a4" containerName="nova-api-api" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.522610 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="45712564-9cf2-4eb6-ad8f-96732e8823a4" containerName="nova-api-log" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.523715 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.527381 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.552494 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.686691 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8213205-980a-47d7-8940-a4ecef1b3007-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8213205-980a-47d7-8940-a4ecef1b3007\") " pod="openstack/nova-api-0" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.686957 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8213205-980a-47d7-8940-a4ecef1b3007-logs\") pod \"nova-api-0\" (UID: \"d8213205-980a-47d7-8940-a4ecef1b3007\") " pod="openstack/nova-api-0" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.687036 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8213205-980a-47d7-8940-a4ecef1b3007-config-data\") pod \"nova-api-0\" (UID: \"d8213205-980a-47d7-8940-a4ecef1b3007\") " pod="openstack/nova-api-0" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.687102 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg4t6\" (UniqueName: \"kubernetes.io/projected/d8213205-980a-47d7-8940-a4ecef1b3007-kube-api-access-pg4t6\") pod \"nova-api-0\" (UID: \"d8213205-980a-47d7-8940-a4ecef1b3007\") " pod="openstack/nova-api-0" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.788611 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8213205-980a-47d7-8940-a4ecef1b3007-config-data\") pod \"nova-api-0\" (UID: \"d8213205-980a-47d7-8940-a4ecef1b3007\") " pod="openstack/nova-api-0" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.788708 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg4t6\" (UniqueName: \"kubernetes.io/projected/d8213205-980a-47d7-8940-a4ecef1b3007-kube-api-access-pg4t6\") pod \"nova-api-0\" (UID: \"d8213205-980a-47d7-8940-a4ecef1b3007\") " pod="openstack/nova-api-0" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.788752 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8213205-980a-47d7-8940-a4ecef1b3007-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8213205-980a-47d7-8940-a4ecef1b3007\") " pod="openstack/nova-api-0" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.788872 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8213205-980a-47d7-8940-a4ecef1b3007-logs\") pod \"nova-api-0\" (UID: \"d8213205-980a-47d7-8940-a4ecef1b3007\") " pod="openstack/nova-api-0" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.789317 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8213205-980a-47d7-8940-a4ecef1b3007-logs\") pod \"nova-api-0\" (UID: \"d8213205-980a-47d7-8940-a4ecef1b3007\") " 
pod="openstack/nova-api-0" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.792476 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8213205-980a-47d7-8940-a4ecef1b3007-config-data\") pod \"nova-api-0\" (UID: \"d8213205-980a-47d7-8940-a4ecef1b3007\") " pod="openstack/nova-api-0" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.793991 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8213205-980a-47d7-8940-a4ecef1b3007-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8213205-980a-47d7-8940-a4ecef1b3007\") " pod="openstack/nova-api-0" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.808259 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg4t6\" (UniqueName: \"kubernetes.io/projected/d8213205-980a-47d7-8940-a4ecef1b3007-kube-api-access-pg4t6\") pod \"nova-api-0\" (UID: \"d8213205-980a-47d7-8940-a4ecef1b3007\") " pod="openstack/nova-api-0" Oct 08 14:44:41 crc kubenswrapper[4624]: I1008 14:44:41.855264 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 14:44:42 crc kubenswrapper[4624]: I1008 14:44:42.349303 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 08 14:44:42 crc kubenswrapper[4624]: W1008 14:44:42.353163 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8213205_980a_47d7_8940_a4ecef1b3007.slice/crio-d83cb888ad25dc2a9683e1db4cecd2fb17cf824546d6d3bf2a91e3653cc90732 WatchSource:0}: Error finding container d83cb888ad25dc2a9683e1db4cecd2fb17cf824546d6d3bf2a91e3653cc90732: Status 404 returned error can't find the container with id d83cb888ad25dc2a9683e1db4cecd2fb17cf824546d6d3bf2a91e3653cc90732 Oct 08 14:44:42 crc kubenswrapper[4624]: I1008 14:44:42.413436 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8213205-980a-47d7-8940-a4ecef1b3007","Type":"ContainerStarted","Data":"d83cb888ad25dc2a9683e1db4cecd2fb17cf824546d6d3bf2a91e3653cc90732"} Oct 08 14:44:42 crc kubenswrapper[4624]: I1008 14:44:42.415595 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"31486d82-7d6f-47c1-9cbd-7e7f9fc23638","Type":"ContainerStarted","Data":"8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c"} Oct 08 14:44:42 crc kubenswrapper[4624]: I1008 14:44:42.415671 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"31486d82-7d6f-47c1-9cbd-7e7f9fc23638","Type":"ContainerStarted","Data":"a714080266758233507b8f7d3751e7b505aa383730fda3a1d21fe0a4f99ee0ac"} Oct 08 14:44:42 crc kubenswrapper[4624]: I1008 14:44:42.441965 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.441941285 podStartE2EDuration="2.441941285s" podCreationTimestamp="2025-10-08 14:44:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:44:42.43499479 +0000 UTC m=+1307.585929867" watchObservedRunningTime="2025-10-08 14:44:42.441941285 +0000 UTC m=+1307.592876362" Oct 08 14:44:43 crc kubenswrapper[4624]: I1008 14:44:43.427907 4624 generic.go:334] "Generic (PLEG): container finished" podID="5e779391-b6f0-47f2-b670-0267f571c9ff" 
containerID="b4a70bc451a76630b0c6fe0461307069449dc7dcd8f6128bb1b83fc8dfa501b7" exitCode=0 Oct 08 14:44:43 crc kubenswrapper[4624]: I1008 14:44:43.428181 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t62tf" event={"ID":"5e779391-b6f0-47f2-b670-0267f571c9ff","Type":"ContainerDied","Data":"b4a70bc451a76630b0c6fe0461307069449dc7dcd8f6128bb1b83fc8dfa501b7"} Oct 08 14:44:43 crc kubenswrapper[4624]: I1008 14:44:43.438576 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8213205-980a-47d7-8940-a4ecef1b3007","Type":"ContainerStarted","Data":"e5c4a05da9ebb60444033c2bfa31a2f306843880933c1d6c5e680b06047ba351"} Oct 08 14:44:43 crc kubenswrapper[4624]: I1008 14:44:43.438781 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8213205-980a-47d7-8940-a4ecef1b3007","Type":"ContainerStarted","Data":"240ee102fe3cd8eae9e18023c1c5943036d050cec8274dc32ad60722ce99238a"} Oct 08 14:44:43 crc kubenswrapper[4624]: I1008 14:44:43.469294 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.46927697 podStartE2EDuration="2.46927697s" podCreationTimestamp="2025-10-08 14:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:44:43.460617732 +0000 UTC m=+1308.611552809" watchObservedRunningTime="2025-10-08 14:44:43.46927697 +0000 UTC m=+1308.620212047" Oct 08 14:44:43 crc kubenswrapper[4624]: I1008 14:44:43.475836 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45712564-9cf2-4eb6-ad8f-96732e8823a4" path="/var/lib/kubelet/pods/45712564-9cf2-4eb6-ad8f-96732e8823a4/volumes" Oct 08 14:44:44 crc kubenswrapper[4624]: I1008 14:44:44.612069 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6f6cd65c74-7vqb5" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Oct 08 14:44:44 crc kubenswrapper[4624]: I1008 14:44:44.839946 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t62tf" Oct 08 14:44:44 crc kubenswrapper[4624]: I1008 14:44:44.963022 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e779391-b6f0-47f2-b670-0267f571c9ff-combined-ca-bundle\") pod \"5e779391-b6f0-47f2-b670-0267f571c9ff\" (UID: \"5e779391-b6f0-47f2-b670-0267f571c9ff\") " Oct 08 14:44:44 crc kubenswrapper[4624]: I1008 14:44:44.963083 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e779391-b6f0-47f2-b670-0267f571c9ff-scripts\") pod \"5e779391-b6f0-47f2-b670-0267f571c9ff\" (UID: \"5e779391-b6f0-47f2-b670-0267f571c9ff\") " Oct 08 14:44:44 crc kubenswrapper[4624]: I1008 14:44:44.963108 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e779391-b6f0-47f2-b670-0267f571c9ff-config-data\") pod \"5e779391-b6f0-47f2-b670-0267f571c9ff\" (UID: \"5e779391-b6f0-47f2-b670-0267f571c9ff\") " Oct 08 14:44:44 crc kubenswrapper[4624]: I1008 14:44:44.963129 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggz5h\" (UniqueName: \"kubernetes.io/projected/5e779391-b6f0-47f2-b670-0267f571c9ff-kube-api-access-ggz5h\") pod \"5e779391-b6f0-47f2-b670-0267f571c9ff\" (UID: \"5e779391-b6f0-47f2-b670-0267f571c9ff\") " Oct 08 14:44:44 crc kubenswrapper[4624]: I1008 14:44:44.968191 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e779391-b6f0-47f2-b670-0267f571c9ff-scripts" (OuterVolumeSpecName: "scripts") pod "5e779391-b6f0-47f2-b670-0267f571c9ff" (UID: "5e779391-b6f0-47f2-b670-0267f571c9ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:44 crc kubenswrapper[4624]: I1008 14:44:44.968560 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e779391-b6f0-47f2-b670-0267f571c9ff-kube-api-access-ggz5h" (OuterVolumeSpecName: "kube-api-access-ggz5h") pod "5e779391-b6f0-47f2-b670-0267f571c9ff" (UID: "5e779391-b6f0-47f2-b670-0267f571c9ff"). InnerVolumeSpecName "kube-api-access-ggz5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:44:44 crc kubenswrapper[4624]: I1008 14:44:44.994835 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e779391-b6f0-47f2-b670-0267f571c9ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e779391-b6f0-47f2-b670-0267f571c9ff" (UID: "5e779391-b6f0-47f2-b670-0267f571c9ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:44 crc kubenswrapper[4624]: I1008 14:44:44.997324 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e779391-b6f0-47f2-b670-0267f571c9ff-config-data" (OuterVolumeSpecName: "config-data") pod "5e779391-b6f0-47f2-b670-0267f571c9ff" (UID: "5e779391-b6f0-47f2-b670-0267f571c9ff"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.066538 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e779391-b6f0-47f2-b670-0267f571c9ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.068221 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e779391-b6f0-47f2-b670-0267f571c9ff-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.068233 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e779391-b6f0-47f2-b670-0267f571c9ff-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.068242 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggz5h\" (UniqueName: \"kubernetes.io/projected/5e779391-b6f0-47f2-b670-0267f571c9ff-kube-api-access-ggz5h\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.456995 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t62tf" event={"ID":"5e779391-b6f0-47f2-b670-0267f571c9ff","Type":"ContainerDied","Data":"8f6ad6979918e11ca1e86ecaa882272e4f3f46d238baa7f7527d905bbe21c2fb"} Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.457354 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f6ad6979918e11ca1e86ecaa882272e4f3f46d238baa7f7527d905bbe21c2fb" Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.457061 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t62tf" Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.536527 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 14:44:45 crc kubenswrapper[4624]: E1008 14:44:45.537044 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e779391-b6f0-47f2-b670-0267f571c9ff" containerName="nova-cell1-conductor-db-sync" Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.537070 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e779391-b6f0-47f2-b670-0267f571c9ff" containerName="nova-cell1-conductor-db-sync" Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.537289 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e779391-b6f0-47f2-b670-0267f571c9ff" containerName="nova-cell1-conductor-db-sync" Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.538458 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.542398 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.549162 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.680546 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q7jh\" (UniqueName: \"kubernetes.io/projected/5b3bb7e0-38ab-4767-98ed-c1a79a46851f-kube-api-access-9q7jh\") pod \"nova-cell1-conductor-0\" (UID: \"5b3bb7e0-38ab-4767-98ed-c1a79a46851f\") " pod="openstack/nova-cell1-conductor-0" Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.680626 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3bb7e0-38ab-4767-98ed-c1a79a46851f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5b3bb7e0-38ab-4767-98ed-c1a79a46851f\") " pod="openstack/nova-cell1-conductor-0" Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.680676 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3bb7e0-38ab-4767-98ed-c1a79a46851f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5b3bb7e0-38ab-4767-98ed-c1a79a46851f\") " pod="openstack/nova-cell1-conductor-0" Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.784297 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3bb7e0-38ab-4767-98ed-c1a79a46851f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5b3bb7e0-38ab-4767-98ed-c1a79a46851f\") " pod="openstack/nova-cell1-conductor-0" Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.784488 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q7jh\" (UniqueName: \"kubernetes.io/projected/5b3bb7e0-38ab-4767-98ed-c1a79a46851f-kube-api-access-9q7jh\") pod \"nova-cell1-conductor-0\" (UID: \"5b3bb7e0-38ab-4767-98ed-c1a79a46851f\") " pod="openstack/nova-cell1-conductor-0" Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.784578 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3bb7e0-38ab-4767-98ed-c1a79a46851f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5b3bb7e0-38ab-4767-98ed-c1a79a46851f\") " pod="openstack/nova-cell1-conductor-0" Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.788188 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3bb7e0-38ab-4767-98ed-c1a79a46851f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5b3bb7e0-38ab-4767-98ed-c1a79a46851f\") " pod="openstack/nova-cell1-conductor-0" Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.788439 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3bb7e0-38ab-4767-98ed-c1a79a46851f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5b3bb7e0-38ab-4767-98ed-c1a79a46851f\") " pod="openstack/nova-cell1-conductor-0" Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.810170 4624 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q7jh\" (UniqueName: \"kubernetes.io/projected/5b3bb7e0-38ab-4767-98ed-c1a79a46851f-kube-api-access-9q7jh\") pod \"nova-cell1-conductor-0\" (UID: \"5b3bb7e0-38ab-4767-98ed-c1a79a46851f\") " pod="openstack/nova-cell1-conductor-0" Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.820577 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 08 14:44:45 crc kubenswrapper[4624]: I1008 14:44:45.861978 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 08 14:44:46 crc kubenswrapper[4624]: I1008 14:44:46.360747 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 14:44:46 crc kubenswrapper[4624]: I1008 14:44:46.470199 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5b3bb7e0-38ab-4767-98ed-c1a79a46851f","Type":"ContainerStarted","Data":"8badd06940ca420d1c043ed02cb447498547db85653b2b8bc855639187b647c0"} Oct 08 14:44:47 crc kubenswrapper[4624]: I1008 14:44:47.478842 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5b3bb7e0-38ab-4767-98ed-c1a79a46851f","Type":"ContainerStarted","Data":"5e283af37abe8cc1b903e34d552f71ecef2a54e10503be92a75be975b3ef67e2"} Oct 08 14:44:47 crc kubenswrapper[4624]: I1008 14:44:47.478934 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 08 14:44:47 crc kubenswrapper[4624]: I1008 14:44:47.499879 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.499842354 podStartE2EDuration="2.499842354s" podCreationTimestamp="2025-10-08 14:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:44:47.492303814 +0000 UTC m=+1312.643238891" watchObservedRunningTime="2025-10-08 14:44:47.499842354 +0000 UTC m=+1312.650777431" Oct 08 14:44:50 crc kubenswrapper[4624]: I1008 14:44:50.820164 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 08 14:44:50 crc kubenswrapper[4624]: I1008 14:44:50.849284 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 08 14:44:51 crc kubenswrapper[4624]: I1008 14:44:51.543489 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 08 14:44:51 crc kubenswrapper[4624]: I1008 14:44:51.857186 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 14:44:51 crc kubenswrapper[4624]: I1008 14:44:51.857232 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 14:44:52 crc kubenswrapper[4624]: I1008 14:44:52.939928 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d8213205-980a-47d7-8940-a4ecef1b3007" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.209:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 08 14:44:52 crc kubenswrapper[4624]: I1008 14:44:52.940751 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="d8213205-980a-47d7-8940-a4ecef1b3007" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.209:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 08 14:44:54 crc kubenswrapper[4624]: I1008 14:44:54.612196 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6f6cd65c74-7vqb5" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Oct 08 14:44:54 crc kubenswrapper[4624]: I1008 14:44:54.612307 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:44:55 crc kubenswrapper[4624]: I1008 14:44:55.891867 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.557805 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.570504 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.575677 4624 generic.go:334] "Generic (PLEG): container finished" podID="bb4b4833-604d-4011-a7b6-668f7116028b" containerID="6d2f4c7d2b8a32f9f1833ccfbdb7a4303ff22a0b460ddd02f96b52971474f918" exitCode=137 Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.575751 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb4b4833-604d-4011-a7b6-668f7116028b","Type":"ContainerDied","Data":"6d2f4c7d2b8a32f9f1833ccfbdb7a4303ff22a0b460ddd02f96b52971474f918"} Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.575782 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb4b4833-604d-4011-a7b6-668f7116028b","Type":"ContainerDied","Data":"0b46bd251841817b2205adc11c34a69273df0dde47b1cd834637d45d2d277085"} Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.575815 4624 scope.go:117] "RemoveContainer" containerID="6d2f4c7d2b8a32f9f1833ccfbdb7a4303ff22a0b460ddd02f96b52971474f918" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.575948 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.585536 4624 generic.go:334] "Generic (PLEG): container finished" podID="5217c8fb-0a56-4f45-8c0e-5465edc9e9ab" containerID="7d67088a8e70e39025ec2fcd5781ed070057561f93ba99d763c917c19f199f39" exitCode=137 Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.585581 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5217c8fb-0a56-4f45-8c0e-5465edc9e9ab","Type":"ContainerDied","Data":"7d67088a8e70e39025ec2fcd5781ed070057561f93ba99d763c917c19f199f39"} Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.585607 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5217c8fb-0a56-4f45-8c0e-5465edc9e9ab","Type":"ContainerDied","Data":"fc86d8cda0d3f338064b26978077162d6e49205fc54d24d6b219bef215e555d4"} Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.585690 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.621325 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr2vc\" (UniqueName: \"kubernetes.io/projected/5217c8fb-0a56-4f45-8c0e-5465edc9e9ab-kube-api-access-kr2vc\") pod \"5217c8fb-0a56-4f45-8c0e-5465edc9e9ab\" (UID: \"5217c8fb-0a56-4f45-8c0e-5465edc9e9ab\") " Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.621406 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5217c8fb-0a56-4f45-8c0e-5465edc9e9ab-combined-ca-bundle\") pod \"5217c8fb-0a56-4f45-8c0e-5465edc9e9ab\" (UID: \"5217c8fb-0a56-4f45-8c0e-5465edc9e9ab\") " Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.621476 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5217c8fb-0a56-4f45-8c0e-5465edc9e9ab-config-data\") pod \"5217c8fb-0a56-4f45-8c0e-5465edc9e9ab\" (UID: \"5217c8fb-0a56-4f45-8c0e-5465edc9e9ab\") " Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.626620 4624 scope.go:117] "RemoveContainer" containerID="3943b4f502bc853825d22458db108b1df4c28ac6a45bfc875922595e263d248b" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.654380 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5217c8fb-0a56-4f45-8c0e-5465edc9e9ab-config-data" (OuterVolumeSpecName: "config-data") pod "5217c8fb-0a56-4f45-8c0e-5465edc9e9ab" (UID: "5217c8fb-0a56-4f45-8c0e-5465edc9e9ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.656967 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5217c8fb-0a56-4f45-8c0e-5465edc9e9ab-kube-api-access-kr2vc" (OuterVolumeSpecName: "kube-api-access-kr2vc") pod "5217c8fb-0a56-4f45-8c0e-5465edc9e9ab" (UID: "5217c8fb-0a56-4f45-8c0e-5465edc9e9ab"). InnerVolumeSpecName "kube-api-access-kr2vc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.686994 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5217c8fb-0a56-4f45-8c0e-5465edc9e9ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5217c8fb-0a56-4f45-8c0e-5465edc9e9ab" (UID: "5217c8fb-0a56-4f45-8c0e-5465edc9e9ab"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.726811 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4b4833-604d-4011-a7b6-668f7116028b-combined-ca-bundle\") pod \"bb4b4833-604d-4011-a7b6-668f7116028b\" (UID: \"bb4b4833-604d-4011-a7b6-668f7116028b\") " Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.726940 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9n7g\" (UniqueName: \"kubernetes.io/projected/bb4b4833-604d-4011-a7b6-668f7116028b-kube-api-access-l9n7g\") pod \"bb4b4833-604d-4011-a7b6-668f7116028b\" (UID: \"bb4b4833-604d-4011-a7b6-668f7116028b\") " Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.727049 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4b4833-604d-4011-a7b6-668f7116028b-config-data\") pod \"bb4b4833-604d-4011-a7b6-668f7116028b\" (UID: \"bb4b4833-604d-4011-a7b6-668f7116028b\") " Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.727196 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4b4833-604d-4011-a7b6-668f7116028b-logs\") pod \"bb4b4833-604d-4011-a7b6-668f7116028b\" (UID: \"bb4b4833-604d-4011-a7b6-668f7116028b\") " Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.727695 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr2vc\" (UniqueName: \"kubernetes.io/projected/5217c8fb-0a56-4f45-8c0e-5465edc9e9ab-kube-api-access-kr2vc\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.727716 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5217c8fb-0a56-4f45-8c0e-5465edc9e9ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.727728 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5217c8fb-0a56-4f45-8c0e-5465edc9e9ab-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.728348 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb4b4833-604d-4011-a7b6-668f7116028b-logs" (OuterVolumeSpecName: "logs") pod "bb4b4833-604d-4011-a7b6-668f7116028b" (UID: "bb4b4833-604d-4011-a7b6-668f7116028b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.730595 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb4b4833-604d-4011-a7b6-668f7116028b-kube-api-access-l9n7g" (OuterVolumeSpecName: "kube-api-access-l9n7g") pod "bb4b4833-604d-4011-a7b6-668f7116028b" (UID: "bb4b4833-604d-4011-a7b6-668f7116028b"). InnerVolumeSpecName "kube-api-access-l9n7g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.741658 4624 scope.go:117] "RemoveContainer" containerID="6d2f4c7d2b8a32f9f1833ccfbdb7a4303ff22a0b460ddd02f96b52971474f918" Oct 08 14:44:57 crc kubenswrapper[4624]: E1008 14:44:57.742160 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d2f4c7d2b8a32f9f1833ccfbdb7a4303ff22a0b460ddd02f96b52971474f918\": container with ID starting with 6d2f4c7d2b8a32f9f1833ccfbdb7a4303ff22a0b460ddd02f96b52971474f918 not found: ID does not exist" containerID="6d2f4c7d2b8a32f9f1833ccfbdb7a4303ff22a0b460ddd02f96b52971474f918" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.742218 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d2f4c7d2b8a32f9f1833ccfbdb7a4303ff22a0b460ddd02f96b52971474f918"} err="failed to get container status \"6d2f4c7d2b8a32f9f1833ccfbdb7a4303ff22a0b460ddd02f96b52971474f918\": rpc error: code = NotFound desc = could not find container \"6d2f4c7d2b8a32f9f1833ccfbdb7a4303ff22a0b460ddd02f96b52971474f918\": container with ID starting with 6d2f4c7d2b8a32f9f1833ccfbdb7a4303ff22a0b460ddd02f96b52971474f918 not found: ID does not exist" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.742253 4624 scope.go:117] "RemoveContainer" containerID="3943b4f502bc853825d22458db108b1df4c28ac6a45bfc875922595e263d248b" Oct 08 14:44:57 crc kubenswrapper[4624]: E1008 14:44:57.742894 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3943b4f502bc853825d22458db108b1df4c28ac6a45bfc875922595e263d248b\": container with ID starting with 3943b4f502bc853825d22458db108b1df4c28ac6a45bfc875922595e263d248b not found: ID does not exist" containerID="3943b4f502bc853825d22458db108b1df4c28ac6a45bfc875922595e263d248b" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.742933 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3943b4f502bc853825d22458db108b1df4c28ac6a45bfc875922595e263d248b"} err="failed to get container status \"3943b4f502bc853825d22458db108b1df4c28ac6a45bfc875922595e263d248b\": rpc error: code = NotFound desc = could not find container \"3943b4f502bc853825d22458db108b1df4c28ac6a45bfc875922595e263d248b\": container with ID starting with 3943b4f502bc853825d22458db108b1df4c28ac6a45bfc875922595e263d248b not found: ID does not exist" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.742962 4624 scope.go:117] "RemoveContainer" containerID="7d67088a8e70e39025ec2fcd5781ed070057561f93ba99d763c917c19f199f39" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.754139 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb4b4833-604d-4011-a7b6-668f7116028b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb4b4833-604d-4011-a7b6-668f7116028b" (UID: "bb4b4833-604d-4011-a7b6-668f7116028b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.758351 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb4b4833-604d-4011-a7b6-668f7116028b-config-data" (OuterVolumeSpecName: "config-data") pod "bb4b4833-604d-4011-a7b6-668f7116028b" (UID: "bb4b4833-604d-4011-a7b6-668f7116028b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.764132 4624 scope.go:117] "RemoveContainer" containerID="7d67088a8e70e39025ec2fcd5781ed070057561f93ba99d763c917c19f199f39" Oct 08 14:44:57 crc kubenswrapper[4624]: E1008 14:44:57.764834 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d67088a8e70e39025ec2fcd5781ed070057561f93ba99d763c917c19f199f39\": container with ID starting with 7d67088a8e70e39025ec2fcd5781ed070057561f93ba99d763c917c19f199f39 not found: ID does not exist" containerID="7d67088a8e70e39025ec2fcd5781ed070057561f93ba99d763c917c19f199f39" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.764873 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d67088a8e70e39025ec2fcd5781ed070057561f93ba99d763c917c19f199f39"} err="failed to get container status \"7d67088a8e70e39025ec2fcd5781ed070057561f93ba99d763c917c19f199f39\": rpc error: code = NotFound desc = could not find container \"7d67088a8e70e39025ec2fcd5781ed070057561f93ba99d763c917c19f199f39\": container with ID starting with 7d67088a8e70e39025ec2fcd5781ed070057561f93ba99d763c917c19f199f39 not found: ID does not exist" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.829335 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4b4833-604d-4011-a7b6-668f7116028b-logs\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.829614 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4b4833-604d-4011-a7b6-668f7116028b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.829707 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9n7g\" (UniqueName: \"kubernetes.io/projected/bb4b4833-604d-4011-a7b6-668f7116028b-kube-api-access-l9n7g\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.829775 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4b4833-604d-4011-a7b6-668f7116028b-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.913481 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.925360 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.934806 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.947390 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.955969 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 14:44:57 crc kubenswrapper[4624]: E1008 14:44:57.956587 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4b4833-604d-4011-a7b6-668f7116028b" containerName="nova-metadata-log" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.956700 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4b4833-604d-4011-a7b6-668f7116028b" containerName="nova-metadata-log" Oct 08 14:44:57 crc 
Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.956846 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4b4833-604d-4011-a7b6-668f7116028b" containerName="nova-metadata-metadata" Oct 08 14:44:57 crc kubenswrapper[4624]: E1008 14:44:57.956908 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5217c8fb-0a56-4f45-8c0e-5465edc9e9ab" containerName="nova-cell1-novncproxy-novncproxy" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.956965 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="5217c8fb-0a56-4f45-8c0e-5465edc9e9ab" containerName="nova-cell1-novncproxy-novncproxy" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.957227 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="5217c8fb-0a56-4f45-8c0e-5465edc9e9ab" containerName="nova-cell1-novncproxy-novncproxy" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.957301 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb4b4833-604d-4011-a7b6-668f7116028b" containerName="nova-metadata-log" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.957382 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb4b4833-604d-4011-a7b6-668f7116028b" containerName="nova-metadata-metadata" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.958060 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.960447 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.962796 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.963005 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.964008 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.965923 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.967534 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.968579 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.974373 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 14:44:57 crc kubenswrapper[4624]: I1008 14:44:57.982276 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.034722 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/025409f2-bc81-4fb0-880c-8bb405abfdc4-logs\") pod \"nova-metadata-0\" (UID: \"025409f2-bc81-4fb0-880c-8bb405abfdc4\") " pod="openstack/nova-metadata-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.034777 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/166fd0ae-7c08-4abf-aad9-ec8c11629078-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"166fd0ae-7c08-4abf-aad9-ec8c11629078\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.034812 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025409f2-bc81-4fb0-880c-8bb405abfdc4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"025409f2-bc81-4fb0-880c-8bb405abfdc4\") " pod="openstack/nova-metadata-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.034892 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/166fd0ae-7c08-4abf-aad9-ec8c11629078-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"166fd0ae-7c08-4abf-aad9-ec8c11629078\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.034934 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/025409f2-bc81-4fb0-880c-8bb405abfdc4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"025409f2-bc81-4fb0-880c-8bb405abfdc4\") " pod="openstack/nova-metadata-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.034991 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/166fd0ae-7c08-4abf-aad9-ec8c11629078-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"166fd0ae-7c08-4abf-aad9-ec8c11629078\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.035035 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/166fd0ae-7c08-4abf-aad9-ec8c11629078-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"166fd0ae-7c08-4abf-aad9-ec8c11629078\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.035056 4624 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klsqx\" (UniqueName: \"kubernetes.io/projected/025409f2-bc81-4fb0-880c-8bb405abfdc4-kube-api-access-klsqx\") pod \"nova-metadata-0\" (UID: \"025409f2-bc81-4fb0-880c-8bb405abfdc4\") " pod="openstack/nova-metadata-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.035165 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntnhf\" (UniqueName: \"kubernetes.io/projected/166fd0ae-7c08-4abf-aad9-ec8c11629078-kube-api-access-ntnhf\") pod \"nova-cell1-novncproxy-0\" (UID: \"166fd0ae-7c08-4abf-aad9-ec8c11629078\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.035214 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/025409f2-bc81-4fb0-880c-8bb405abfdc4-config-data\") pod \"nova-metadata-0\" (UID: \"025409f2-bc81-4fb0-880c-8bb405abfdc4\") " pod="openstack/nova-metadata-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.137005 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/166fd0ae-7c08-4abf-aad9-ec8c11629078-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"166fd0ae-7c08-4abf-aad9-ec8c11629078\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.137132 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/166fd0ae-7c08-4abf-aad9-ec8c11629078-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"166fd0ae-7c08-4abf-aad9-ec8c11629078\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.137156 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klsqx\" (UniqueName: \"kubernetes.io/projected/025409f2-bc81-4fb0-880c-8bb405abfdc4-kube-api-access-klsqx\") pod \"nova-metadata-0\" (UID: \"025409f2-bc81-4fb0-880c-8bb405abfdc4\") " pod="openstack/nova-metadata-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.137204 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntnhf\" (UniqueName: \"kubernetes.io/projected/166fd0ae-7c08-4abf-aad9-ec8c11629078-kube-api-access-ntnhf\") pod \"nova-cell1-novncproxy-0\" (UID: \"166fd0ae-7c08-4abf-aad9-ec8c11629078\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.137254 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/025409f2-bc81-4fb0-880c-8bb405abfdc4-config-data\") pod \"nova-metadata-0\" (UID: \"025409f2-bc81-4fb0-880c-8bb405abfdc4\") " pod="openstack/nova-metadata-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.137300 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/025409f2-bc81-4fb0-880c-8bb405abfdc4-logs\") pod \"nova-metadata-0\" (UID: \"025409f2-bc81-4fb0-880c-8bb405abfdc4\") " pod="openstack/nova-metadata-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.137836 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/025409f2-bc81-4fb0-880c-8bb405abfdc4-logs\") pod \"nova-metadata-0\" (UID: \"025409f2-bc81-4fb0-880c-8bb405abfdc4\") " pod="openstack/nova-metadata-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.140005 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/166fd0ae-7c08-4abf-aad9-ec8c11629078-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"166fd0ae-7c08-4abf-aad9-ec8c11629078\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.140071 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025409f2-bc81-4fb0-880c-8bb405abfdc4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"025409f2-bc81-4fb0-880c-8bb405abfdc4\") " pod="openstack/nova-metadata-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.140229 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/166fd0ae-7c08-4abf-aad9-ec8c11629078-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"166fd0ae-7c08-4abf-aad9-ec8c11629078\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.140271 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/025409f2-bc81-4fb0-880c-8bb405abfdc4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"025409f2-bc81-4fb0-880c-8bb405abfdc4\") " pod="openstack/nova-metadata-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.141690 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/166fd0ae-7c08-4abf-aad9-ec8c11629078-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"166fd0ae-7c08-4abf-aad9-ec8c11629078\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.142012 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/166fd0ae-7c08-4abf-aad9-ec8c11629078-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"166fd0ae-7c08-4abf-aad9-ec8c11629078\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.143404 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/166fd0ae-7c08-4abf-aad9-ec8c11629078-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"166fd0ae-7c08-4abf-aad9-ec8c11629078\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.143681 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/025409f2-bc81-4fb0-880c-8bb405abfdc4-config-data\") pod \"nova-metadata-0\" (UID: \"025409f2-bc81-4fb0-880c-8bb405abfdc4\") " pod="openstack/nova-metadata-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.144331 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/025409f2-bc81-4fb0-880c-8bb405abfdc4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"025409f2-bc81-4fb0-880c-8bb405abfdc4\") " pod="openstack/nova-metadata-0" Oct 
08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.145016 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/166fd0ae-7c08-4abf-aad9-ec8c11629078-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"166fd0ae-7c08-4abf-aad9-ec8c11629078\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.146296 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025409f2-bc81-4fb0-880c-8bb405abfdc4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"025409f2-bc81-4fb0-880c-8bb405abfdc4\") " pod="openstack/nova-metadata-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.157479 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntnhf\" (UniqueName: \"kubernetes.io/projected/166fd0ae-7c08-4abf-aad9-ec8c11629078-kube-api-access-ntnhf\") pod \"nova-cell1-novncproxy-0\" (UID: \"166fd0ae-7c08-4abf-aad9-ec8c11629078\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.160375 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klsqx\" (UniqueName: \"kubernetes.io/projected/025409f2-bc81-4fb0-880c-8bb405abfdc4-kube-api-access-klsqx\") pod \"nova-metadata-0\" (UID: \"025409f2-bc81-4fb0-880c-8bb405abfdc4\") " pod="openstack/nova-metadata-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.290994 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.304344 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
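
Once a deleted pod's UID disappears from the API server, the kubelet garbage-collects /var/lib/kubelet/pods/<podUID>/volumes and logs "Cleaned up orphaned pod volumes dir", as it did above at 14:44:41 and 14:44:43 and does again below for the replaced nova-cell1-novncproxy-0 and nova-metadata-0 UIDs. A read-only stdlib sketch that lists the candidate directories on the node; deciding which UIDs are actually orphaned, and deleting anything, is intentionally left out.

```go
// List /var/lib/kubelet/pods/<podUID>/volumes directories so they can be
// compared against the pod UIDs the API server still knows. Run as root on
// the node; purely read-only.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	base := "/var/lib/kubelet/pods"
	entries, err := os.ReadDir(base)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		vols := filepath.Join(base, e.Name(), "volumes")
		if _, err := os.Stat(vols); err == nil {
			// Candidate dir; orphaned only if the pod UID no longer exists.
			fmt.Println(vols)
		}
	}
}
```
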
Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.773723 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 14:44:58 crc kubenswrapper[4624]: I1008 14:44:58.847045 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 14:44:58 crc kubenswrapper[4624]: W1008 14:44:58.850247 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod025409f2_bc81_4fb0_880c_8bb405abfdc4.slice/crio-afbaf2bb5bf168c0af485c9f4daf4f63c608469499da9a2725133a1645f1b425 WatchSource:0}: Error finding container afbaf2bb5bf168c0af485c9f4daf4f63c608469499da9a2725133a1645f1b425: Status 404 returned error can't find the container with id afbaf2bb5bf168c0af485c9f4daf4f63c608469499da9a2725133a1645f1b425 Oct 08 14:44:59 crc kubenswrapper[4624]: I1008 14:44:59.477830 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5217c8fb-0a56-4f45-8c0e-5465edc9e9ab" path="/var/lib/kubelet/pods/5217c8fb-0a56-4f45-8c0e-5465edc9e9ab/volumes" Oct 08 14:44:59 crc kubenswrapper[4624]: I1008 14:44:59.479695 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb4b4833-604d-4011-a7b6-668f7116028b" path="/var/lib/kubelet/pods/bb4b4833-604d-4011-a7b6-668f7116028b/volumes" Oct 08 14:44:59 crc kubenswrapper[4624]: I1008 14:44:59.615296 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"025409f2-bc81-4fb0-880c-8bb405abfdc4","Type":"ContainerStarted","Data":"f6ed0d5eae4385e77f35fe4fee0903a219e0772aa21b362512b52f9488b7f6da"} Oct 08 14:44:59 crc kubenswrapper[4624]: I1008 14:44:59.615345 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"025409f2-bc81-4fb0-880c-8bb405abfdc4","Type":"ContainerStarted","Data":"0879020797ad83aa105e30ef9cdede5310517920443e0bdc439df9ffc04b871d"} Oct 08 14:44:59 crc kubenswrapper[4624]: I1008 14:44:59.615356 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"025409f2-bc81-4fb0-880c-8bb405abfdc4","Type":"ContainerStarted","Data":"afbaf2bb5bf168c0af485c9f4daf4f63c608469499da9a2725133a1645f1b425"} Oct 08 14:44:59 crc kubenswrapper[4624]: I1008 14:44:59.619411 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"166fd0ae-7c08-4abf-aad9-ec8c11629078","Type":"ContainerStarted","Data":"c69a7a4793f7a96989ae229c3c95bdc06a567aa5404c8958916c6c89b0fd10e7"} Oct 08 14:44:59 crc kubenswrapper[4624]: I1008 14:44:59.619448 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"166fd0ae-7c08-4abf-aad9-ec8c11629078","Type":"ContainerStarted","Data":"b90ceb433a53dc97458fbf5e84c05c49f761ce3dc2c5f13f3ee76a8b04eff5ac"} Oct 08 14:44:59 crc kubenswrapper[4624]: I1008 14:44:59.648318 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.648295516 podStartE2EDuration="2.648295516s" podCreationTimestamp="2025-10-08 14:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:44:59.637455053 +0000 UTC m=+1324.788390130" watchObservedRunningTime="2025-10-08 14:44:59.648295516 +0000 UTC m=+1324.799230583" Oct 08 14:44:59 crc kubenswrapper[4624]: I1008 14:44:59.662249 
4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.662225077 podStartE2EDuration="2.662225077s" podCreationTimestamp="2025-10-08 14:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:44:59.654747819 +0000 UTC m=+1324.805682916" watchObservedRunningTime="2025-10-08 14:44:59.662225077 +0000 UTC m=+1324.813160154" Oct 08 14:45:00 crc kubenswrapper[4624]: I1008 14:45:00.076258 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:45:00 crc kubenswrapper[4624]: I1008 14:45:00.076354 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:45:00 crc kubenswrapper[4624]: I1008 14:45:00.152128 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw"] Oct 08 14:45:00 crc kubenswrapper[4624]: I1008 14:45:00.154003 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw" Oct 08 14:45:00 crc kubenswrapper[4624]: I1008 14:45:00.156113 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 14:45:00 crc kubenswrapper[4624]: I1008 14:45:00.156789 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 14:45:00 crc kubenswrapper[4624]: I1008 14:45:00.172197 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw"] Oct 08 14:45:00 crc kubenswrapper[4624]: I1008 14:45:00.281786 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f42483b-c604-404c-8921-4157c19584bb-secret-volume\") pod \"collect-profiles-29332245-prrxw\" (UID: \"8f42483b-c604-404c-8921-4157c19584bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw" Oct 08 14:45:00 crc kubenswrapper[4624]: I1008 14:45:00.282140 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f42483b-c604-404c-8921-4157c19584bb-config-volume\") pod \"collect-profiles-29332245-prrxw\" (UID: \"8f42483b-c604-404c-8921-4157c19584bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw" Oct 08 14:45:00 crc kubenswrapper[4624]: I1008 14:45:00.282226 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm825\" (UniqueName: \"kubernetes.io/projected/8f42483b-c604-404c-8921-4157c19584bb-kube-api-access-rm825\") pod \"collect-profiles-29332245-prrxw\" (UID: \"8f42483b-c604-404c-8921-4157c19584bb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw" Oct 08 14:45:00 crc kubenswrapper[4624]: I1008 14:45:00.383761 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f42483b-c604-404c-8921-4157c19584bb-secret-volume\") pod \"collect-profiles-29332245-prrxw\" (UID: \"8f42483b-c604-404c-8921-4157c19584bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw" Oct 08 14:45:00 crc kubenswrapper[4624]: I1008 14:45:00.383808 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f42483b-c604-404c-8921-4157c19584bb-config-volume\") pod \"collect-profiles-29332245-prrxw\" (UID: \"8f42483b-c604-404c-8921-4157c19584bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw" Oct 08 14:45:00 crc kubenswrapper[4624]: I1008 14:45:00.383871 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm825\" (UniqueName: \"kubernetes.io/projected/8f42483b-c604-404c-8921-4157c19584bb-kube-api-access-rm825\") pod \"collect-profiles-29332245-prrxw\" (UID: \"8f42483b-c604-404c-8921-4157c19584bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw" Oct 08 14:45:00 crc kubenswrapper[4624]: I1008 14:45:00.385717 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f42483b-c604-404c-8921-4157c19584bb-config-volume\") pod \"collect-profiles-29332245-prrxw\" (UID: \"8f42483b-c604-404c-8921-4157c19584bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw" Oct 08 14:45:00 crc kubenswrapper[4624]: I1008 14:45:00.389713 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f42483b-c604-404c-8921-4157c19584bb-secret-volume\") pod \"collect-profiles-29332245-prrxw\" (UID: \"8f42483b-c604-404c-8921-4157c19584bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw" Oct 08 14:45:00 crc kubenswrapper[4624]: I1008 14:45:00.408185 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm825\" (UniqueName: \"kubernetes.io/projected/8f42483b-c604-404c-8921-4157c19584bb-kube-api-access-rm825\") pod \"collect-profiles-29332245-prrxw\" (UID: \"8f42483b-c604-404c-8921-4157c19584bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw" Oct 08 14:45:00 crc kubenswrapper[4624]: I1008 14:45:00.502165 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw" Oct 08 14:45:00 crc kubenswrapper[4624]: I1008 14:45:00.856114 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 08 14:45:01 crc kubenswrapper[4624]: I1008 14:45:01.027495 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw"] Oct 08 14:45:01 crc kubenswrapper[4624]: I1008 14:45:01.676757 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw" event={"ID":"8f42483b-c604-404c-8921-4157c19584bb","Type":"ContainerStarted","Data":"891f491334ccca991cd1a055ee26dd97857de25b8f6d243758140e97dd0f5825"} Oct 08 14:45:01 crc kubenswrapper[4624]: I1008 14:45:01.677146 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw" event={"ID":"8f42483b-c604-404c-8921-4157c19584bb","Type":"ContainerStarted","Data":"97dfa05ee642710cd6ac4bceeff85ecfc89a6ad5316110a4e9ab92c411fb4bf1"} Oct 08 14:45:01 crc kubenswrapper[4624]: I1008 14:45:01.699285 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw" podStartSLOduration=1.699264732 podStartE2EDuration="1.699264732s" podCreationTimestamp="2025-10-08 14:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:45:01.692275166 +0000 UTC m=+1326.843210253" watchObservedRunningTime="2025-10-08 14:45:01.699264732 +0000 UTC m=+1326.850199799" Oct 08 14:45:01 crc kubenswrapper[4624]: I1008 14:45:01.860047 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 08 14:45:01 crc kubenswrapper[4624]: I1008 14:45:01.860530 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 08 14:45:01 crc kubenswrapper[4624]: I1008 14:45:01.860559 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 08 14:45:01 crc kubenswrapper[4624]: I1008 14:45:01.862901 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.686610 4624 generic.go:334] "Generic (PLEG): container finished" podID="8f42483b-c604-404c-8921-4157c19584bb" containerID="891f491334ccca991cd1a055ee26dd97857de25b8f6d243758140e97dd0f5825" exitCode=0 Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.687990 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw" event={"ID":"8f42483b-c604-404c-8921-4157c19584bb","Type":"ContainerDied","Data":"891f491334ccca991cd1a055ee26dd97857de25b8f6d243758140e97dd0f5825"} Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.690581 4624 generic.go:334] "Generic (PLEG): container finished" podID="4b203558-1aea-4672-871f-d2dca324a585" containerID="da719f698a783f03a9aeca3f8634dbc83f874d6792863ed431947ce50e36881e" exitCode=137 Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.691876 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6cd65c74-7vqb5" 
event={"ID":"4b203558-1aea-4672-871f-d2dca324a585","Type":"ContainerDied","Data":"da719f698a783f03a9aeca3f8634dbc83f874d6792863ed431947ce50e36881e"} Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.692143 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6cd65c74-7vqb5" event={"ID":"4b203558-1aea-4672-871f-d2dca324a585","Type":"ContainerDied","Data":"25f94ae97b15469d5ded389dc2d8affffb353cd234649fb67ffa2e9b39041e98"} Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.692221 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25f94ae97b15469d5ded389dc2d8affffb353cd234649fb67ffa2e9b39041e98" Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.692300 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.713436 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.717208 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.839241 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4b203558-1aea-4672-871f-d2dca324a585-horizon-secret-key\") pod \"4b203558-1aea-4672-871f-d2dca324a585\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.839788 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b203558-1aea-4672-871f-d2dca324a585-scripts\") pod \"4b203558-1aea-4672-871f-d2dca324a585\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.839851 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b203558-1aea-4672-871f-d2dca324a585-config-data\") pod \"4b203558-1aea-4672-871f-d2dca324a585\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.839965 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b203558-1aea-4672-871f-d2dca324a585-horizon-tls-certs\") pod \"4b203558-1aea-4672-871f-d2dca324a585\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.840038 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddkc4\" (UniqueName: \"kubernetes.io/projected/4b203558-1aea-4672-871f-d2dca324a585-kube-api-access-ddkc4\") pod \"4b203558-1aea-4672-871f-d2dca324a585\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.840093 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b203558-1aea-4672-871f-d2dca324a585-logs\") pod \"4b203558-1aea-4672-871f-d2dca324a585\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.840151 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b203558-1aea-4672-871f-d2dca324a585-combined-ca-bundle\") 
pod \"4b203558-1aea-4672-871f-d2dca324a585\" (UID: \"4b203558-1aea-4672-871f-d2dca324a585\") " Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.843456 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b203558-1aea-4672-871f-d2dca324a585-logs" (OuterVolumeSpecName: "logs") pod "4b203558-1aea-4672-871f-d2dca324a585" (UID: "4b203558-1aea-4672-871f-d2dca324a585"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.866305 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b203558-1aea-4672-871f-d2dca324a585-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4b203558-1aea-4672-871f-d2dca324a585" (UID: "4b203558-1aea-4672-871f-d2dca324a585"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.868725 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b203558-1aea-4672-871f-d2dca324a585-kube-api-access-ddkc4" (OuterVolumeSpecName: "kube-api-access-ddkc4") pod "4b203558-1aea-4672-871f-d2dca324a585" (UID: "4b203558-1aea-4672-871f-d2dca324a585"). InnerVolumeSpecName "kube-api-access-ddkc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.901794 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b203558-1aea-4672-871f-d2dca324a585-config-data" (OuterVolumeSpecName: "config-data") pod "4b203558-1aea-4672-871f-d2dca324a585" (UID: "4b203558-1aea-4672-871f-d2dca324a585"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.942624 4624 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4b203558-1aea-4672-871f-d2dca324a585-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.942674 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b203558-1aea-4672-871f-d2dca324a585-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.942684 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddkc4\" (UniqueName: \"kubernetes.io/projected/4b203558-1aea-4672-871f-d2dca324a585-kube-api-access-ddkc4\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.942694 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b203558-1aea-4672-871f-d2dca324a585-logs\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.977260 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b203558-1aea-4672-871f-d2dca324a585-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b203558-1aea-4672-871f-d2dca324a585" (UID: "4b203558-1aea-4672-871f-d2dca324a585"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:02 crc kubenswrapper[4624]: I1008 14:45:02.979773 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b203558-1aea-4672-871f-d2dca324a585-scripts" (OuterVolumeSpecName: "scripts") pod "4b203558-1aea-4672-871f-d2dca324a585" (UID: "4b203558-1aea-4672-871f-d2dca324a585"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.046594 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59c4cbc559-vt9mm"] Oct 08 14:45:03 crc kubenswrapper[4624]: E1008 14:45:03.047617 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.047670 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" Oct 08 14:45:03 crc kubenswrapper[4624]: E1008 14:45:03.047685 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon-log" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.047694 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon-log" Oct 08 14:45:03 crc kubenswrapper[4624]: E1008 14:45:03.047774 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.047788 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.050031 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.050060 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.050656 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon-log" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.050711 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" Oct 08 14:45:03 crc kubenswrapper[4624]: E1008 14:45:03.051332 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.051355 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b203558-1aea-4672-871f-d2dca324a585" containerName="horizon" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.064467 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.073801 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b203558-1aea-4672-871f-d2dca324a585-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.074320 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b203558-1aea-4672-871f-d2dca324a585-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.075687 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59c4cbc559-vt9mm"] Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.134927 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b203558-1aea-4672-871f-d2dca324a585-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "4b203558-1aea-4672-871f-d2dca324a585" (UID: "4b203558-1aea-4672-871f-d2dca324a585"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.175869 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-config\") pod \"dnsmasq-dns-59c4cbc559-vt9mm\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.175936 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-ovsdbserver-nb\") pod \"dnsmasq-dns-59c4cbc559-vt9mm\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.176001 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-ovsdbserver-sb\") pod \"dnsmasq-dns-59c4cbc559-vt9mm\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.176035 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-dns-svc\") pod \"dnsmasq-dns-59c4cbc559-vt9mm\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.176111 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqh78\" (UniqueName: \"kubernetes.io/projected/c0519d2a-5bda-421c-88fc-9319f3bc7a29-kube-api-access-tqh78\") pod \"dnsmasq-dns-59c4cbc559-vt9mm\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.176164 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-dns-swift-storage-0\") pod \"dnsmasq-dns-59c4cbc559-vt9mm\" 
(UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.176281 4624 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b203558-1aea-4672-871f-d2dca324a585-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.277892 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-config\") pod \"dnsmasq-dns-59c4cbc559-vt9mm\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.277980 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-ovsdbserver-nb\") pod \"dnsmasq-dns-59c4cbc559-vt9mm\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.278049 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-ovsdbserver-sb\") pod \"dnsmasq-dns-59c4cbc559-vt9mm\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.278101 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-dns-svc\") pod \"dnsmasq-dns-59c4cbc559-vt9mm\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.279001 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-config\") pod \"dnsmasq-dns-59c4cbc559-vt9mm\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.279170 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-ovsdbserver-sb\") pod \"dnsmasq-dns-59c4cbc559-vt9mm\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.279265 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-ovsdbserver-nb\") pod \"dnsmasq-dns-59c4cbc559-vt9mm\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.279350 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqh78\" (UniqueName: \"kubernetes.io/projected/c0519d2a-5bda-421c-88fc-9319f3bc7a29-kube-api-access-tqh78\") pod \"dnsmasq-dns-59c4cbc559-vt9mm\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.279496 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-dns-svc\") pod \"dnsmasq-dns-59c4cbc559-vt9mm\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.280705 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-dns-swift-storage-0\") pod \"dnsmasq-dns-59c4cbc559-vt9mm\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.280746 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-dns-swift-storage-0\") pod \"dnsmasq-dns-59c4cbc559-vt9mm\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.291088 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.300362 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqh78\" (UniqueName: \"kubernetes.io/projected/c0519d2a-5bda-421c-88fc-9319f3bc7a29-kube-api-access-tqh78\") pod \"dnsmasq-dns-59c4cbc559-vt9mm\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.305774 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.305823 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.420478 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.698478 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f6cd65c74-7vqb5" Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.753433 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f6cd65c74-7vqb5"] Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.780984 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f6cd65c74-7vqb5"] Oct 08 14:45:03 crc kubenswrapper[4624]: I1008 14:45:03.961004 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59c4cbc559-vt9mm"] Oct 08 14:45:04 crc kubenswrapper[4624]: I1008 14:45:04.226731 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw" Oct 08 14:45:04 crc kubenswrapper[4624]: I1008 14:45:04.317054 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f42483b-c604-404c-8921-4157c19584bb-config-volume\") pod \"8f42483b-c604-404c-8921-4157c19584bb\" (UID: \"8f42483b-c604-404c-8921-4157c19584bb\") " Oct 08 14:45:04 crc kubenswrapper[4624]: I1008 14:45:04.317150 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f42483b-c604-404c-8921-4157c19584bb-secret-volume\") pod \"8f42483b-c604-404c-8921-4157c19584bb\" (UID: \"8f42483b-c604-404c-8921-4157c19584bb\") " Oct 08 14:45:04 crc kubenswrapper[4624]: I1008 14:45:04.317399 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm825\" (UniqueName: \"kubernetes.io/projected/8f42483b-c604-404c-8921-4157c19584bb-kube-api-access-rm825\") pod \"8f42483b-c604-404c-8921-4157c19584bb\" (UID: \"8f42483b-c604-404c-8921-4157c19584bb\") " Oct 08 14:45:04 crc kubenswrapper[4624]: I1008 14:45:04.317927 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f42483b-c604-404c-8921-4157c19584bb-config-volume" (OuterVolumeSpecName: "config-volume") pod "8f42483b-c604-404c-8921-4157c19584bb" (UID: "8f42483b-c604-404c-8921-4157c19584bb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:45:04 crc kubenswrapper[4624]: I1008 14:45:04.323501 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f42483b-c604-404c-8921-4157c19584bb-kube-api-access-rm825" (OuterVolumeSpecName: "kube-api-access-rm825") pod "8f42483b-c604-404c-8921-4157c19584bb" (UID: "8f42483b-c604-404c-8921-4157c19584bb"). InnerVolumeSpecName "kube-api-access-rm825". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:45:04 crc kubenswrapper[4624]: I1008 14:45:04.324624 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f42483b-c604-404c-8921-4157c19584bb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8f42483b-c604-404c-8921-4157c19584bb" (UID: "8f42483b-c604-404c-8921-4157c19584bb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:04 crc kubenswrapper[4624]: I1008 14:45:04.418483 4624 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f42483b-c604-404c-8921-4157c19584bb-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:04 crc kubenswrapper[4624]: I1008 14:45:04.418521 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm825\" (UniqueName: \"kubernetes.io/projected/8f42483b-c604-404c-8921-4157c19584bb-kube-api-access-rm825\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:04 crc kubenswrapper[4624]: I1008 14:45:04.418534 4624 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f42483b-c604-404c-8921-4157c19584bb-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:04 crc kubenswrapper[4624]: I1008 14:45:04.709811 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw" Oct 08 14:45:04 crc kubenswrapper[4624]: I1008 14:45:04.709807 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw" event={"ID":"8f42483b-c604-404c-8921-4157c19584bb","Type":"ContainerDied","Data":"97dfa05ee642710cd6ac4bceeff85ecfc89a6ad5316110a4e9ab92c411fb4bf1"} Oct 08 14:45:04 crc kubenswrapper[4624]: I1008 14:45:04.710237 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97dfa05ee642710cd6ac4bceeff85ecfc89a6ad5316110a4e9ab92c411fb4bf1" Oct 08 14:45:04 crc kubenswrapper[4624]: I1008 14:45:04.711758 4624 generic.go:334] "Generic (PLEG): container finished" podID="c0519d2a-5bda-421c-88fc-9319f3bc7a29" containerID="4a981783a37bb350c851b68660731a09fd0a8c5a1b98d4214e06432b767f2c4f" exitCode=0 Oct 08 14:45:04 crc kubenswrapper[4624]: I1008 14:45:04.711843 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" event={"ID":"c0519d2a-5bda-421c-88fc-9319f3bc7a29","Type":"ContainerDied","Data":"4a981783a37bb350c851b68660731a09fd0a8c5a1b98d4214e06432b767f2c4f"} Oct 08 14:45:04 crc kubenswrapper[4624]: I1008 14:45:04.711888 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" event={"ID":"c0519d2a-5bda-421c-88fc-9319f3bc7a29","Type":"ContainerStarted","Data":"8b1f31d966bd97ade870e03c54ff10b827cb5213dcef104f3ea88611b5d850ee"} Oct 08 14:45:05 crc kubenswrapper[4624]: I1008 14:45:05.367152 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:45:05 crc kubenswrapper[4624]: I1008 14:45:05.367421 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9484ff6c-2ae2-437a-825c-dc4253039982" containerName="ceilometer-central-agent" containerID="cri-o://4e8b7cddc68ee3fca2d7d0e7e51bd54d7f54a0d99ad318639514d9bff53eb263" gracePeriod=30 Oct 08 14:45:05 crc kubenswrapper[4624]: I1008 14:45:05.367548 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9484ff6c-2ae2-437a-825c-dc4253039982" containerName="proxy-httpd" containerID="cri-o://3f4971fb053c838a1ca055851ee20abb1b80dd17076e1ebfb366fa4d21473cf6" gracePeriod=30 Oct 08 14:45:05 crc kubenswrapper[4624]: I1008 14:45:05.367588 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9484ff6c-2ae2-437a-825c-dc4253039982" containerName="sg-core" containerID="cri-o://2495fd34b0213170b89e241d486d40e77ae669909a8993ffbdcff6b5e06cd0af" gracePeriod=30 Oct 08 14:45:05 crc kubenswrapper[4624]: I1008 14:45:05.367617 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9484ff6c-2ae2-437a-825c-dc4253039982" containerName="ceilometer-notification-agent" containerID="cri-o://a0b94ce9d0a50bd01558ab4183b028d4dffb86507f4c9933c73c9fa22e6f4011" gracePeriod=30 Oct 08 14:45:05 crc kubenswrapper[4624]: I1008 14:45:05.480563 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b203558-1aea-4672-871f-d2dca324a585" path="/var/lib/kubelet/pods/4b203558-1aea-4672-871f-d2dca324a585/volumes" Oct 08 14:45:05 crc kubenswrapper[4624]: I1008 14:45:05.727403 4624 generic.go:334] "Generic (PLEG): container finished" podID="9484ff6c-2ae2-437a-825c-dc4253039982" 
containerID="3f4971fb053c838a1ca055851ee20abb1b80dd17076e1ebfb366fa4d21473cf6" exitCode=0 Oct 08 14:45:05 crc kubenswrapper[4624]: I1008 14:45:05.727749 4624 generic.go:334] "Generic (PLEG): container finished" podID="9484ff6c-2ae2-437a-825c-dc4253039982" containerID="2495fd34b0213170b89e241d486d40e77ae669909a8993ffbdcff6b5e06cd0af" exitCode=2 Oct 08 14:45:05 crc kubenswrapper[4624]: I1008 14:45:05.727562 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9484ff6c-2ae2-437a-825c-dc4253039982","Type":"ContainerDied","Data":"3f4971fb053c838a1ca055851ee20abb1b80dd17076e1ebfb366fa4d21473cf6"} Oct 08 14:45:05 crc kubenswrapper[4624]: I1008 14:45:05.727840 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9484ff6c-2ae2-437a-825c-dc4253039982","Type":"ContainerDied","Data":"2495fd34b0213170b89e241d486d40e77ae669909a8993ffbdcff6b5e06cd0af"} Oct 08 14:45:05 crc kubenswrapper[4624]: I1008 14:45:05.730018 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" event={"ID":"c0519d2a-5bda-421c-88fc-9319f3bc7a29","Type":"ContainerStarted","Data":"6c3cdb2a488aebcb231c0997c619b39aa4b2f0157d93c7b1f214abcdd407d87b"} Oct 08 14:45:05 crc kubenswrapper[4624]: I1008 14:45:05.730829 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:45:05 crc kubenswrapper[4624]: I1008 14:45:05.756678 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" podStartSLOduration=3.756661632 podStartE2EDuration="3.756661632s" podCreationTimestamp="2025-10-08 14:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:45:05.74866462 +0000 UTC m=+1330.899599717" watchObservedRunningTime="2025-10-08 14:45:05.756661632 +0000 UTC m=+1330.907596709" Oct 08 14:45:05 crc kubenswrapper[4624]: I1008 14:45:05.922848 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 14:45:05 crc kubenswrapper[4624]: I1008 14:45:05.923153 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d8213205-980a-47d7-8940-a4ecef1b3007" containerName="nova-api-log" containerID="cri-o://240ee102fe3cd8eae9e18023c1c5943036d050cec8274dc32ad60722ce99238a" gracePeriod=30 Oct 08 14:45:05 crc kubenswrapper[4624]: I1008 14:45:05.923257 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d8213205-980a-47d7-8940-a4ecef1b3007" containerName="nova-api-api" containerID="cri-o://e5c4a05da9ebb60444033c2bfa31a2f306843880933c1d6c5e680b06047ba351" gracePeriod=30 Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.438037 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.563224 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-combined-ca-bundle\") pod \"9484ff6c-2ae2-437a-825c-dc4253039982\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.563379 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9484ff6c-2ae2-437a-825c-dc4253039982-log-httpd\") pod \"9484ff6c-2ae2-437a-825c-dc4253039982\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.564594 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-config-data\") pod \"9484ff6c-2ae2-437a-825c-dc4253039982\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.564754 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-sg-core-conf-yaml\") pod \"9484ff6c-2ae2-437a-825c-dc4253039982\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.564787 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9484ff6c-2ae2-437a-825c-dc4253039982-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9484ff6c-2ae2-437a-825c-dc4253039982" (UID: "9484ff6c-2ae2-437a-825c-dc4253039982"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.565039 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-scripts\") pod \"9484ff6c-2ae2-437a-825c-dc4253039982\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.565081 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mpnj\" (UniqueName: \"kubernetes.io/projected/9484ff6c-2ae2-437a-825c-dc4253039982-kube-api-access-4mpnj\") pod \"9484ff6c-2ae2-437a-825c-dc4253039982\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.565112 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-ceilometer-tls-certs\") pod \"9484ff6c-2ae2-437a-825c-dc4253039982\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.565141 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9484ff6c-2ae2-437a-825c-dc4253039982-run-httpd\") pod \"9484ff6c-2ae2-437a-825c-dc4253039982\" (UID: \"9484ff6c-2ae2-437a-825c-dc4253039982\") " Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.565911 4624 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9484ff6c-2ae2-437a-825c-dc4253039982-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.567330 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9484ff6c-2ae2-437a-825c-dc4253039982-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9484ff6c-2ae2-437a-825c-dc4253039982" (UID: "9484ff6c-2ae2-437a-825c-dc4253039982"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.574361 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-scripts" (OuterVolumeSpecName: "scripts") pod "9484ff6c-2ae2-437a-825c-dc4253039982" (UID: "9484ff6c-2ae2-437a-825c-dc4253039982"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.576564 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9484ff6c-2ae2-437a-825c-dc4253039982-kube-api-access-4mpnj" (OuterVolumeSpecName: "kube-api-access-4mpnj") pod "9484ff6c-2ae2-437a-825c-dc4253039982" (UID: "9484ff6c-2ae2-437a-825c-dc4253039982"). InnerVolumeSpecName "kube-api-access-4mpnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.648134 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9484ff6c-2ae2-437a-825c-dc4253039982" (UID: "9484ff6c-2ae2-437a-825c-dc4253039982"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.667626 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.667682 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mpnj\" (UniqueName: \"kubernetes.io/projected/9484ff6c-2ae2-437a-825c-dc4253039982-kube-api-access-4mpnj\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.667699 4624 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9484ff6c-2ae2-437a-825c-dc4253039982-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.667715 4624 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.677599 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9484ff6c-2ae2-437a-825c-dc4253039982" (UID: "9484ff6c-2ae2-437a-825c-dc4253039982"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.691555 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9484ff6c-2ae2-437a-825c-dc4253039982" (UID: "9484ff6c-2ae2-437a-825c-dc4253039982"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.752082 4624 generic.go:334] "Generic (PLEG): container finished" podID="d8213205-980a-47d7-8940-a4ecef1b3007" containerID="240ee102fe3cd8eae9e18023c1c5943036d050cec8274dc32ad60722ce99238a" exitCode=143 Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.752186 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8213205-980a-47d7-8940-a4ecef1b3007","Type":"ContainerDied","Data":"240ee102fe3cd8eae9e18023c1c5943036d050cec8274dc32ad60722ce99238a"} Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.755418 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-config-data" (OuterVolumeSpecName: "config-data") pod "9484ff6c-2ae2-437a-825c-dc4253039982" (UID: "9484ff6c-2ae2-437a-825c-dc4253039982"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.755992 4624 generic.go:334] "Generic (PLEG): container finished" podID="9484ff6c-2ae2-437a-825c-dc4253039982" containerID="a0b94ce9d0a50bd01558ab4183b028d4dffb86507f4c9933c73c9fa22e6f4011" exitCode=0 Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.756024 4624 generic.go:334] "Generic (PLEG): container finished" podID="9484ff6c-2ae2-437a-825c-dc4253039982" containerID="4e8b7cddc68ee3fca2d7d0e7e51bd54d7f54a0d99ad318639514d9bff53eb263" exitCode=0 Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.756258 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9484ff6c-2ae2-437a-825c-dc4253039982","Type":"ContainerDied","Data":"a0b94ce9d0a50bd01558ab4183b028d4dffb86507f4c9933c73c9fa22e6f4011"} Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.756312 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9484ff6c-2ae2-437a-825c-dc4253039982","Type":"ContainerDied","Data":"4e8b7cddc68ee3fca2d7d0e7e51bd54d7f54a0d99ad318639514d9bff53eb263"} Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.756329 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9484ff6c-2ae2-437a-825c-dc4253039982","Type":"ContainerDied","Data":"14d4a0efbc96a958ac23d415afd01c9b7eea35e2d945a4d82fcb215b335d8ccb"} Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.756346 4624 scope.go:117] "RemoveContainer" containerID="3f4971fb053c838a1ca055851ee20abb1b80dd17076e1ebfb366fa4d21473cf6" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.756543 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.772623 4624 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.772707 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.772722 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9484ff6c-2ae2-437a-825c-dc4253039982-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.852932 4624 scope.go:117] "RemoveContainer" containerID="2495fd34b0213170b89e241d486d40e77ae669909a8993ffbdcff6b5e06cd0af" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.882815 4624 scope.go:117] "RemoveContainer" containerID="a0b94ce9d0a50bd01558ab4183b028d4dffb86507f4c9933c73c9fa22e6f4011" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.884598 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.897008 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.906782 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:45:06 crc kubenswrapper[4624]: E1008 14:45:06.907342 4624 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8f42483b-c604-404c-8921-4157c19584bb" containerName="collect-profiles" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.907362 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f42483b-c604-404c-8921-4157c19584bb" containerName="collect-profiles" Oct 08 14:45:06 crc kubenswrapper[4624]: E1008 14:45:06.907383 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9484ff6c-2ae2-437a-825c-dc4253039982" containerName="proxy-httpd" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.907389 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9484ff6c-2ae2-437a-825c-dc4253039982" containerName="proxy-httpd" Oct 08 14:45:06 crc kubenswrapper[4624]: E1008 14:45:06.907405 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9484ff6c-2ae2-437a-825c-dc4253039982" containerName="sg-core" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.907412 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9484ff6c-2ae2-437a-825c-dc4253039982" containerName="sg-core" Oct 08 14:45:06 crc kubenswrapper[4624]: E1008 14:45:06.907432 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9484ff6c-2ae2-437a-825c-dc4253039982" containerName="ceilometer-notification-agent" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.907438 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9484ff6c-2ae2-437a-825c-dc4253039982" containerName="ceilometer-notification-agent" Oct 08 14:45:06 crc kubenswrapper[4624]: E1008 14:45:06.907451 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9484ff6c-2ae2-437a-825c-dc4253039982" containerName="ceilometer-central-agent" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.907457 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9484ff6c-2ae2-437a-825c-dc4253039982" containerName="ceilometer-central-agent" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.907654 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9484ff6c-2ae2-437a-825c-dc4253039982" containerName="ceilometer-central-agent" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.907668 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9484ff6c-2ae2-437a-825c-dc4253039982" containerName="proxy-httpd" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.907688 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f42483b-c604-404c-8921-4157c19584bb" containerName="collect-profiles" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.907702 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9484ff6c-2ae2-437a-825c-dc4253039982" containerName="sg-core" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.907715 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9484ff6c-2ae2-437a-825c-dc4253039982" containerName="ceilometer-notification-agent" Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.910380 4624 util.go:30] "No sandbox for pod can be found. 
Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.911871 4624 scope.go:117] "RemoveContainer" containerID="4e8b7cddc68ee3fca2d7d0e7e51bd54d7f54a0d99ad318639514d9bff53eb263"
Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.912460 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.915156 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.916258 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.942759 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.964894 4624 scope.go:117] "RemoveContainer" containerID="3f4971fb053c838a1ca055851ee20abb1b80dd17076e1ebfb366fa4d21473cf6"
Oct 08 14:45:06 crc kubenswrapper[4624]: E1008 14:45:06.965365 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f4971fb053c838a1ca055851ee20abb1b80dd17076e1ebfb366fa4d21473cf6\": container with ID starting with 3f4971fb053c838a1ca055851ee20abb1b80dd17076e1ebfb366fa4d21473cf6 not found: ID does not exist" containerID="3f4971fb053c838a1ca055851ee20abb1b80dd17076e1ebfb366fa4d21473cf6"
Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.965402 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f4971fb053c838a1ca055851ee20abb1b80dd17076e1ebfb366fa4d21473cf6"} err="failed to get container status \"3f4971fb053c838a1ca055851ee20abb1b80dd17076e1ebfb366fa4d21473cf6\": rpc error: code = NotFound desc = could not find container \"3f4971fb053c838a1ca055851ee20abb1b80dd17076e1ebfb366fa4d21473cf6\": container with ID starting with 3f4971fb053c838a1ca055851ee20abb1b80dd17076e1ebfb366fa4d21473cf6 not found: ID does not exist"
Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.965425 4624 scope.go:117] "RemoveContainer" containerID="2495fd34b0213170b89e241d486d40e77ae669909a8993ffbdcff6b5e06cd0af"
Oct 08 14:45:06 crc kubenswrapper[4624]: E1008 14:45:06.965885 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2495fd34b0213170b89e241d486d40e77ae669909a8993ffbdcff6b5e06cd0af\": container with ID starting with 2495fd34b0213170b89e241d486d40e77ae669909a8993ffbdcff6b5e06cd0af not found: ID does not exist" containerID="2495fd34b0213170b89e241d486d40e77ae669909a8993ffbdcff6b5e06cd0af"
Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.965955 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2495fd34b0213170b89e241d486d40e77ae669909a8993ffbdcff6b5e06cd0af"} err="failed to get container status \"2495fd34b0213170b89e241d486d40e77ae669909a8993ffbdcff6b5e06cd0af\": rpc error: code = NotFound desc = could not find container \"2495fd34b0213170b89e241d486d40e77ae669909a8993ffbdcff6b5e06cd0af\": container with ID starting with 2495fd34b0213170b89e241d486d40e77ae669909a8993ffbdcff6b5e06cd0af not found: ID does not exist"
Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.965989 4624 scope.go:117] "RemoveContainer" containerID="a0b94ce9d0a50bd01558ab4183b028d4dffb86507f4c9933c73c9fa22e6f4011"
Oct 08 14:45:06 crc kubenswrapper[4624]: E1008 14:45:06.966328 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0b94ce9d0a50bd01558ab4183b028d4dffb86507f4c9933c73c9fa22e6f4011\": container with ID starting with a0b94ce9d0a50bd01558ab4183b028d4dffb86507f4c9933c73c9fa22e6f4011 not found: ID does not exist" containerID="a0b94ce9d0a50bd01558ab4183b028d4dffb86507f4c9933c73c9fa22e6f4011"
Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.966361 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0b94ce9d0a50bd01558ab4183b028d4dffb86507f4c9933c73c9fa22e6f4011"} err="failed to get container status \"a0b94ce9d0a50bd01558ab4183b028d4dffb86507f4c9933c73c9fa22e6f4011\": rpc error: code = NotFound desc = could not find container \"a0b94ce9d0a50bd01558ab4183b028d4dffb86507f4c9933c73c9fa22e6f4011\": container with ID starting with a0b94ce9d0a50bd01558ab4183b028d4dffb86507f4c9933c73c9fa22e6f4011 not found: ID does not exist"
Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.966380 4624 scope.go:117] "RemoveContainer" containerID="4e8b7cddc68ee3fca2d7d0e7e51bd54d7f54a0d99ad318639514d9bff53eb263"
Oct 08 14:45:06 crc kubenswrapper[4624]: E1008 14:45:06.966880 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e8b7cddc68ee3fca2d7d0e7e51bd54d7f54a0d99ad318639514d9bff53eb263\": container with ID starting with 4e8b7cddc68ee3fca2d7d0e7e51bd54d7f54a0d99ad318639514d9bff53eb263 not found: ID does not exist" containerID="4e8b7cddc68ee3fca2d7d0e7e51bd54d7f54a0d99ad318639514d9bff53eb263"
Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.966911 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e8b7cddc68ee3fca2d7d0e7e51bd54d7f54a0d99ad318639514d9bff53eb263"} err="failed to get container status \"4e8b7cddc68ee3fca2d7d0e7e51bd54d7f54a0d99ad318639514d9bff53eb263\": rpc error: code = NotFound desc = could not find container \"4e8b7cddc68ee3fca2d7d0e7e51bd54d7f54a0d99ad318639514d9bff53eb263\": container with ID starting with 4e8b7cddc68ee3fca2d7d0e7e51bd54d7f54a0d99ad318639514d9bff53eb263 not found: ID does not exist"
Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.966930 4624 scope.go:117] "RemoveContainer" containerID="3f4971fb053c838a1ca055851ee20abb1b80dd17076e1ebfb366fa4d21473cf6"
Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.967190 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f4971fb053c838a1ca055851ee20abb1b80dd17076e1ebfb366fa4d21473cf6"} err="failed to get container status \"3f4971fb053c838a1ca055851ee20abb1b80dd17076e1ebfb366fa4d21473cf6\": rpc error: code = NotFound desc = could not find container \"3f4971fb053c838a1ca055851ee20abb1b80dd17076e1ebfb366fa4d21473cf6\": container with ID starting with 3f4971fb053c838a1ca055851ee20abb1b80dd17076e1ebfb366fa4d21473cf6 not found: ID does not exist"
Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.967215 4624 scope.go:117] "RemoveContainer" containerID="2495fd34b0213170b89e241d486d40e77ae669909a8993ffbdcff6b5e06cd0af"
Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.967389 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2495fd34b0213170b89e241d486d40e77ae669909a8993ffbdcff6b5e06cd0af"} err="failed to get container status \"2495fd34b0213170b89e241d486d40e77ae669909a8993ffbdcff6b5e06cd0af\": rpc error: code = NotFound desc = could not find container \"2495fd34b0213170b89e241d486d40e77ae669909a8993ffbdcff6b5e06cd0af\": container with ID starting with 2495fd34b0213170b89e241d486d40e77ae669909a8993ffbdcff6b5e06cd0af not found: ID does not exist"
Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.967416 4624 scope.go:117] "RemoveContainer" containerID="a0b94ce9d0a50bd01558ab4183b028d4dffb86507f4c9933c73c9fa22e6f4011"
Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.967657 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0b94ce9d0a50bd01558ab4183b028d4dffb86507f4c9933c73c9fa22e6f4011"} err="failed to get container status \"a0b94ce9d0a50bd01558ab4183b028d4dffb86507f4c9933c73c9fa22e6f4011\": rpc error: code = NotFound desc = could not find container \"a0b94ce9d0a50bd01558ab4183b028d4dffb86507f4c9933c73c9fa22e6f4011\": container with ID starting with a0b94ce9d0a50bd01558ab4183b028d4dffb86507f4c9933c73c9fa22e6f4011 not found: ID does not exist"
Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.967683 4624 scope.go:117] "RemoveContainer" containerID="4e8b7cddc68ee3fca2d7d0e7e51bd54d7f54a0d99ad318639514d9bff53eb263"
Oct 08 14:45:06 crc kubenswrapper[4624]: I1008 14:45:06.967932 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e8b7cddc68ee3fca2d7d0e7e51bd54d7f54a0d99ad318639514d9bff53eb263"} err="failed to get container status \"4e8b7cddc68ee3fca2d7d0e7e51bd54d7f54a0d99ad318639514d9bff53eb263\": rpc error: code = NotFound desc = could not find container \"4e8b7cddc68ee3fca2d7d0e7e51bd54d7f54a0d99ad318639514d9bff53eb263\": container with ID starting with 4e8b7cddc68ee3fca2d7d0e7e51bd54d7f54a0d99ad318639514d9bff53eb263 not found: ID does not exist"
Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.081983 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-scripts\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0"
Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.082094 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5ff15c9-ddbc-409e-b605-5e453914798f-log-httpd\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0"
Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.082131 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0"
Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.082250 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5ff15c9-ddbc-409e-b605-5e453914798f-run-httpd\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0"
Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.082351 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0"
\"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0" Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.082390 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0" Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.082435 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-config-data\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0" Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.082537 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm24p\" (UniqueName: \"kubernetes.io/projected/f5ff15c9-ddbc-409e-b605-5e453914798f-kube-api-access-nm24p\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0" Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.184186 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5ff15c9-ddbc-409e-b605-5e453914798f-run-httpd\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0" Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.184278 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0" Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.184313 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0" Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.184337 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-config-data\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0" Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.184365 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm24p\" (UniqueName: \"kubernetes.io/projected/f5ff15c9-ddbc-409e-b605-5e453914798f-kube-api-access-nm24p\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0" Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.184405 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-scripts\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0" Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.184444 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5ff15c9-ddbc-409e-b605-5e453914798f-log-httpd\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0" Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.184465 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0" Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.185649 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5ff15c9-ddbc-409e-b605-5e453914798f-log-httpd\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0" Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.185790 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5ff15c9-ddbc-409e-b605-5e453914798f-run-httpd\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0" Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.190956 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-scripts\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0" Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.191192 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0" Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.191630 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-config-data\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0" Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.192875 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0" Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.193278 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0" Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.205346 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm24p\" (UniqueName: \"kubernetes.io/projected/f5ff15c9-ddbc-409e-b605-5e453914798f-kube-api-access-nm24p\") pod \"ceilometer-0\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " pod="openstack/ceilometer-0" Oct 08 14:45:07 crc kubenswrapper[4624]: I1008 14:45:07.233006 4624 util.go:30] "No sandbox for pod can be found. 
Oct 08 14:45:08 crc kubenswrapper[4624]: I1008 14:45:07.481387 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9484ff6c-2ae2-437a-825c-dc4253039982" path="/var/lib/kubelet/pods/9484ff6c-2ae2-437a-825c-dc4253039982/volumes"
Oct 08 14:45:08 crc kubenswrapper[4624]: I1008 14:45:07.991591 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 08 14:45:08 crc kubenswrapper[4624]: I1008 14:45:08.291276 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Oct 08 14:45:08 crc kubenswrapper[4624]: I1008 14:45:08.304744 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Oct 08 14:45:08 crc kubenswrapper[4624]: I1008 14:45:08.304831 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Oct 08 14:45:08 crc kubenswrapper[4624]: I1008 14:45:08.316914 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Oct 08 14:45:08 crc kubenswrapper[4624]: I1008 14:45:08.393238 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 08 14:45:08 crc kubenswrapper[4624]: I1008 14:45:08.797551 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5ff15c9-ddbc-409e-b605-5e453914798f","Type":"ContainerStarted","Data":"950afcb496a4929e32d8027f33e152661f94ab1988e00805c73d58bd9cf9cb93"}
Oct 08 14:45:08 crc kubenswrapper[4624]: I1008 14:45:08.797831 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5ff15c9-ddbc-409e-b605-5e453914798f","Type":"ContainerStarted","Data":"253bb22101ddc314ec7911970daa129ca94a58fee276a9627c13ff2faa69a8e7"}
Oct 08 14:45:08 crc kubenswrapper[4624]: I1008 14:45:08.839954 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.083088 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-r2jkt"]
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.085940 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-r2jkt"
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.088029 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.088444 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.112018 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-r2jkt"]
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.154947 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e373c0bb-06d9-45b9-ac73-eb55282374f4-config-data\") pod \"nova-cell1-cell-mapping-r2jkt\" (UID: \"e373c0bb-06d9-45b9-ac73-eb55282374f4\") " pod="openstack/nova-cell1-cell-mapping-r2jkt"
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.155079 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e373c0bb-06d9-45b9-ac73-eb55282374f4-scripts\") pod \"nova-cell1-cell-mapping-r2jkt\" (UID: \"e373c0bb-06d9-45b9-ac73-eb55282374f4\") " pod="openstack/nova-cell1-cell-mapping-r2jkt"
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.155225 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e373c0bb-06d9-45b9-ac73-eb55282374f4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-r2jkt\" (UID: \"e373c0bb-06d9-45b9-ac73-eb55282374f4\") " pod="openstack/nova-cell1-cell-mapping-r2jkt"
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.155313 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqf7h\" (UniqueName: \"kubernetes.io/projected/e373c0bb-06d9-45b9-ac73-eb55282374f4-kube-api-access-tqf7h\") pod \"nova-cell1-cell-mapping-r2jkt\" (UID: \"e373c0bb-06d9-45b9-ac73-eb55282374f4\") " pod="openstack/nova-cell1-cell-mapping-r2jkt"
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.256341 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqf7h\" (UniqueName: \"kubernetes.io/projected/e373c0bb-06d9-45b9-ac73-eb55282374f4-kube-api-access-tqf7h\") pod \"nova-cell1-cell-mapping-r2jkt\" (UID: \"e373c0bb-06d9-45b9-ac73-eb55282374f4\") " pod="openstack/nova-cell1-cell-mapping-r2jkt"
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.256399 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e373c0bb-06d9-45b9-ac73-eb55282374f4-config-data\") pod \"nova-cell1-cell-mapping-r2jkt\" (UID: \"e373c0bb-06d9-45b9-ac73-eb55282374f4\") " pod="openstack/nova-cell1-cell-mapping-r2jkt"
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.256450 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e373c0bb-06d9-45b9-ac73-eb55282374f4-scripts\") pod \"nova-cell1-cell-mapping-r2jkt\" (UID: \"e373c0bb-06d9-45b9-ac73-eb55282374f4\") " pod="openstack/nova-cell1-cell-mapping-r2jkt"
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.256523 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e373c0bb-06d9-45b9-ac73-eb55282374f4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-r2jkt\" (UID: \"e373c0bb-06d9-45b9-ac73-eb55282374f4\") " pod="openstack/nova-cell1-cell-mapping-r2jkt"
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.261485 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e373c0bb-06d9-45b9-ac73-eb55282374f4-config-data\") pod \"nova-cell1-cell-mapping-r2jkt\" (UID: \"e373c0bb-06d9-45b9-ac73-eb55282374f4\") " pod="openstack/nova-cell1-cell-mapping-r2jkt"
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.265914 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e373c0bb-06d9-45b9-ac73-eb55282374f4-scripts\") pod \"nova-cell1-cell-mapping-r2jkt\" (UID: \"e373c0bb-06d9-45b9-ac73-eb55282374f4\") " pod="openstack/nova-cell1-cell-mapping-r2jkt"
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.276290 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e373c0bb-06d9-45b9-ac73-eb55282374f4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-r2jkt\" (UID: \"e373c0bb-06d9-45b9-ac73-eb55282374f4\") " pod="openstack/nova-cell1-cell-mapping-r2jkt"
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.282226 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqf7h\" (UniqueName: \"kubernetes.io/projected/e373c0bb-06d9-45b9-ac73-eb55282374f4-kube-api-access-tqf7h\") pod \"nova-cell1-cell-mapping-r2jkt\" (UID: \"e373c0bb-06d9-45b9-ac73-eb55282374f4\") " pod="openstack/nova-cell1-cell-mapping-r2jkt"
Oct 08 14:45:09 crc kubenswrapper[4624]: E1008 14:45:09.287176 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8213205_980a_47d7_8940_a4ecef1b3007.slice/crio-conmon-e5c4a05da9ebb60444033c2bfa31a2f306843880933c1d6c5e680b06047ba351.scope\": RecentStats: unable to find data in memory cache]"
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.304866 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="025409f2-bc81-4fb0-880c-8bb405abfdc4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.311999 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="025409f2-bc81-4fb0-880c-8bb405abfdc4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.515190 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-r2jkt"
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.606851 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.772416 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg4t6\" (UniqueName: \"kubernetes.io/projected/d8213205-980a-47d7-8940-a4ecef1b3007-kube-api-access-pg4t6\") pod \"d8213205-980a-47d7-8940-a4ecef1b3007\" (UID: \"d8213205-980a-47d7-8940-a4ecef1b3007\") "
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.772519 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8213205-980a-47d7-8940-a4ecef1b3007-logs\") pod \"d8213205-980a-47d7-8940-a4ecef1b3007\" (UID: \"d8213205-980a-47d7-8940-a4ecef1b3007\") "
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.772693 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8213205-980a-47d7-8940-a4ecef1b3007-combined-ca-bundle\") pod \"d8213205-980a-47d7-8940-a4ecef1b3007\" (UID: \"d8213205-980a-47d7-8940-a4ecef1b3007\") "
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.772725 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8213205-980a-47d7-8940-a4ecef1b3007-config-data\") pod \"d8213205-980a-47d7-8940-a4ecef1b3007\" (UID: \"d8213205-980a-47d7-8940-a4ecef1b3007\") "
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.791394 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8213205-980a-47d7-8940-a4ecef1b3007-logs" (OuterVolumeSpecName: "logs") pod "d8213205-980a-47d7-8940-a4ecef1b3007" (UID: "d8213205-980a-47d7-8940-a4ecef1b3007"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.810711 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8213205-980a-47d7-8940-a4ecef1b3007-kube-api-access-pg4t6" (OuterVolumeSpecName: "kube-api-access-pg4t6") pod "d8213205-980a-47d7-8940-a4ecef1b3007" (UID: "d8213205-980a-47d7-8940-a4ecef1b3007"). InnerVolumeSpecName "kube-api-access-pg4t6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.876021 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pg4t6\" (UniqueName: \"kubernetes.io/projected/d8213205-980a-47d7-8940-a4ecef1b3007-kube-api-access-pg4t6\") on node \"crc\" DevicePath \"\""
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.876280 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8213205-980a-47d7-8940-a4ecef1b3007-logs\") on node \"crc\" DevicePath \"\""
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.882832 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8213205-980a-47d7-8940-a4ecef1b3007-config-data" (OuterVolumeSpecName: "config-data") pod "d8213205-980a-47d7-8940-a4ecef1b3007" (UID: "d8213205-980a-47d7-8940-a4ecef1b3007"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.889237 4624 generic.go:334] "Generic (PLEG): container finished" podID="d8213205-980a-47d7-8940-a4ecef1b3007" containerID="e5c4a05da9ebb60444033c2bfa31a2f306843880933c1d6c5e680b06047ba351" exitCode=0
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.889408 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.890419 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8213205-980a-47d7-8940-a4ecef1b3007","Type":"ContainerDied","Data":"e5c4a05da9ebb60444033c2bfa31a2f306843880933c1d6c5e680b06047ba351"}
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.890453 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8213205-980a-47d7-8940-a4ecef1b3007","Type":"ContainerDied","Data":"d83cb888ad25dc2a9683e1db4cecd2fb17cf824546d6d3bf2a91e3653cc90732"}
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.890492 4624 scope.go:117] "RemoveContainer" containerID="e5c4a05da9ebb60444033c2bfa31a2f306843880933c1d6c5e680b06047ba351"
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.897833 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8213205-980a-47d7-8940-a4ecef1b3007-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8213205-980a-47d7-8940-a4ecef1b3007" (UID: "d8213205-980a-47d7-8940-a4ecef1b3007"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.906547 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5ff15c9-ddbc-409e-b605-5e453914798f","Type":"ContainerStarted","Data":"d74195a84662b63193da4b8d0b1d6488582263fd09565c6b09b6b8f909adb530"}
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.979060 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8213205-980a-47d7-8940-a4ecef1b3007-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 14:45:09 crc kubenswrapper[4624]: I1008 14:45:09.979098 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8213205-980a-47d7-8940-a4ecef1b3007-config-data\") on node \"crc\" DevicePath \"\""
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.103903 4624 scope.go:117] "RemoveContainer" containerID="240ee102fe3cd8eae9e18023c1c5943036d050cec8274dc32ad60722ce99238a"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.209854 4624 scope.go:117] "RemoveContainer" containerID="e5c4a05da9ebb60444033c2bfa31a2f306843880933c1d6c5e680b06047ba351"
Oct 08 14:45:10 crc kubenswrapper[4624]: E1008 14:45:10.219870 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5c4a05da9ebb60444033c2bfa31a2f306843880933c1d6c5e680b06047ba351\": container with ID starting with e5c4a05da9ebb60444033c2bfa31a2f306843880933c1d6c5e680b06047ba351 not found: ID does not exist" containerID="e5c4a05da9ebb60444033c2bfa31a2f306843880933c1d6c5e680b06047ba351"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.219913 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5c4a05da9ebb60444033c2bfa31a2f306843880933c1d6c5e680b06047ba351"} err="failed to get container status \"e5c4a05da9ebb60444033c2bfa31a2f306843880933c1d6c5e680b06047ba351\": rpc error: code = NotFound desc = could not find container \"e5c4a05da9ebb60444033c2bfa31a2f306843880933c1d6c5e680b06047ba351\": container with ID starting with e5c4a05da9ebb60444033c2bfa31a2f306843880933c1d6c5e680b06047ba351 not found: ID does not exist"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.219939 4624 scope.go:117] "RemoveContainer" containerID="240ee102fe3cd8eae9e18023c1c5943036d050cec8274dc32ad60722ce99238a"
Oct 08 14:45:10 crc kubenswrapper[4624]: E1008 14:45:10.226053 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"240ee102fe3cd8eae9e18023c1c5943036d050cec8274dc32ad60722ce99238a\": container with ID starting with 240ee102fe3cd8eae9e18023c1c5943036d050cec8274dc32ad60722ce99238a not found: ID does not exist" containerID="240ee102fe3cd8eae9e18023c1c5943036d050cec8274dc32ad60722ce99238a"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.226104 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"240ee102fe3cd8eae9e18023c1c5943036d050cec8274dc32ad60722ce99238a"} err="failed to get container status \"240ee102fe3cd8eae9e18023c1c5943036d050cec8274dc32ad60722ce99238a\": rpc error: code = NotFound desc = could not find container \"240ee102fe3cd8eae9e18023c1c5943036d050cec8274dc32ad60722ce99238a\": container with ID starting with 240ee102fe3cd8eae9e18023c1c5943036d050cec8274dc32ad60722ce99238a not found: ID does not exist"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.281803 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.313693 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.331768 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Oct 08 14:45:10 crc kubenswrapper[4624]: E1008 14:45:10.332421 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8213205-980a-47d7-8940-a4ecef1b3007" containerName="nova-api-api"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.332437 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8213205-980a-47d7-8940-a4ecef1b3007" containerName="nova-api-api"
Oct 08 14:45:10 crc kubenswrapper[4624]: E1008 14:45:10.332454 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8213205-980a-47d7-8940-a4ecef1b3007" containerName="nova-api-log"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.332460 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8213205-980a-47d7-8940-a4ecef1b3007" containerName="nova-api-log"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.332659 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8213205-980a-47d7-8940-a4ecef1b3007" containerName="nova-api-api"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.332677 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8213205-980a-47d7-8940-a4ecef1b3007" containerName="nova-api-log"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.333817 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.342933 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.343171 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.346475 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.375693 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.401693 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-r2jkt"]
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.498617 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-public-tls-certs\") pod \"nova-api-0\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " pod="openstack/nova-api-0"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.498910 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " pod="openstack/nova-api-0"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.499151 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-config-data\") pod \"nova-api-0\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " pod="openstack/nova-api-0"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.499267 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " pod="openstack/nova-api-0"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.499378 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgsh5\" (UniqueName: \"kubernetes.io/projected/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-kube-api-access-sgsh5\") pod \"nova-api-0\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " pod="openstack/nova-api-0"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.499515 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-logs\") pod \"nova-api-0\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " pod="openstack/nova-api-0"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.601694 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " pod="openstack/nova-api-0"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.601833 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-config-data\") pod \"nova-api-0\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " pod="openstack/nova-api-0"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.601860 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " pod="openstack/nova-api-0"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.601890 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgsh5\" (UniqueName: \"kubernetes.io/projected/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-kube-api-access-sgsh5\") pod \"nova-api-0\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " pod="openstack/nova-api-0"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.601928 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-logs\") pod \"nova-api-0\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " pod="openstack/nova-api-0"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.602008 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-public-tls-certs\") pod \"nova-api-0\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " pod="openstack/nova-api-0"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.606968 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-logs\") pod \"nova-api-0\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " pod="openstack/nova-api-0"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.607878 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-config-data\") pod \"nova-api-0\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " pod="openstack/nova-api-0"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.608242 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " pod="openstack/nova-api-0"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.610941 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-public-tls-certs\") pod \"nova-api-0\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " pod="openstack/nova-api-0"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.611397 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " pod="openstack/nova-api-0"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.619015 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgsh5\" (UniqueName: \"kubernetes.io/projected/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-kube-api-access-sgsh5\") pod \"nova-api-0\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " pod="openstack/nova-api-0"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.683474 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.924351 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-r2jkt" event={"ID":"e373c0bb-06d9-45b9-ac73-eb55282374f4","Type":"ContainerStarted","Data":"d32422146bc184c711eeaf0da36e9227381abf110459af28f1479dc7033e276a"}
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.924726 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-r2jkt" event={"ID":"e373c0bb-06d9-45b9-ac73-eb55282374f4","Type":"ContainerStarted","Data":"87597e8e285263d493f1405b4ad5a6141fb67d731d89c18de413fb92074b98a3"}
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.938925 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5ff15c9-ddbc-409e-b605-5e453914798f","Type":"ContainerStarted","Data":"27e97760b5d94789861ab68a09394da323fbfe3fc45df52aee39f9b7e8453975"}
Oct 08 14:45:10 crc kubenswrapper[4624]: I1008 14:45:10.943884 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-r2jkt" podStartSLOduration=1.9438665400000001 podStartE2EDuration="1.94386654s" podCreationTimestamp="2025-10-08 14:45:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:45:10.94305046 +0000 UTC m=+1336.093985557" watchObservedRunningTime="2025-10-08 14:45:10.94386654 +0000 UTC m=+1336.094801617"
Oct 08 14:45:11 crc kubenswrapper[4624]: I1008 14:45:11.339910 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 08 14:45:11 crc kubenswrapper[4624]: W1008 14:45:11.366273 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33fd0970_dd4e_4ec5_bbc6_f0910db1354d.slice/crio-1fd2821fad0e64f6ab7076790f2ecbe363e348d6a007a85f964887b8db4b9d6a WatchSource:0}: Error finding container 1fd2821fad0e64f6ab7076790f2ecbe363e348d6a007a85f964887b8db4b9d6a: Status 404 returned error can't find the container with id 1fd2821fad0e64f6ab7076790f2ecbe363e348d6a007a85f964887b8db4b9d6a
Oct 08 14:45:11 crc kubenswrapper[4624]: I1008 14:45:11.477852 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8213205-980a-47d7-8940-a4ecef1b3007" path="/var/lib/kubelet/pods/d8213205-980a-47d7-8940-a4ecef1b3007/volumes"
Oct 08 14:45:11 crc kubenswrapper[4624]: I1008 14:45:11.952403 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"33fd0970-dd4e-4ec5-bbc6-f0910db1354d","Type":"ContainerStarted","Data":"2a3a0e49a147803ca5ceedfaecb9a4fa07db0c812656bd014014c9104435a05c"}
Oct 08 14:45:11 crc kubenswrapper[4624]: I1008 14:45:11.952721 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"33fd0970-dd4e-4ec5-bbc6-f0910db1354d","Type":"ContainerStarted","Data":"64b0ce0e0507cacd113ec96fe6c7f84c2f175cf7c669dc5f3d0acff623d1d590"}
Oct 08 14:45:11 crc kubenswrapper[4624]: I1008 14:45:11.952733 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"33fd0970-dd4e-4ec5-bbc6-f0910db1354d","Type":"ContainerStarted","Data":"1fd2821fad0e64f6ab7076790f2ecbe363e348d6a007a85f964887b8db4b9d6a"}
event={"ID":"33fd0970-dd4e-4ec5-bbc6-f0910db1354d","Type":"ContainerStarted","Data":"1fd2821fad0e64f6ab7076790f2ecbe363e348d6a007a85f964887b8db4b9d6a"} Oct 08 14:45:11 crc kubenswrapper[4624]: I1008 14:45:11.965207 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f5ff15c9-ddbc-409e-b605-5e453914798f" containerName="ceilometer-central-agent" containerID="cri-o://950afcb496a4929e32d8027f33e152661f94ab1988e00805c73d58bd9cf9cb93" gracePeriod=30 Oct 08 14:45:11 crc kubenswrapper[4624]: I1008 14:45:11.965521 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5ff15c9-ddbc-409e-b605-5e453914798f","Type":"ContainerStarted","Data":"c58264e3859f51bc0ec389406c1bbbfcabe52e61cbf18b29d12e3d27aedd2a4c"} Oct 08 14:45:11 crc kubenswrapper[4624]: I1008 14:45:11.965718 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 08 14:45:11 crc kubenswrapper[4624]: I1008 14:45:11.965728 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f5ff15c9-ddbc-409e-b605-5e453914798f" containerName="proxy-httpd" containerID="cri-o://c58264e3859f51bc0ec389406c1bbbfcabe52e61cbf18b29d12e3d27aedd2a4c" gracePeriod=30 Oct 08 14:45:11 crc kubenswrapper[4624]: I1008 14:45:11.965746 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f5ff15c9-ddbc-409e-b605-5e453914798f" containerName="ceilometer-notification-agent" containerID="cri-o://d74195a84662b63193da4b8d0b1d6488582263fd09565c6b09b6b8f909adb530" gracePeriod=30 Oct 08 14:45:11 crc kubenswrapper[4624]: I1008 14:45:11.965746 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f5ff15c9-ddbc-409e-b605-5e453914798f" containerName="sg-core" containerID="cri-o://27e97760b5d94789861ab68a09394da323fbfe3fc45df52aee39f9b7e8453975" gracePeriod=30 Oct 08 14:45:11 crc kubenswrapper[4624]: I1008 14:45:11.984599 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.984558152 podStartE2EDuration="1.984558152s" podCreationTimestamp="2025-10-08 14:45:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:45:11.968864317 +0000 UTC m=+1337.119799394" watchObservedRunningTime="2025-10-08 14:45:11.984558152 +0000 UTC m=+1337.135493229" Oct 08 14:45:12 crc kubenswrapper[4624]: I1008 14:45:12.034950 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.902571548 podStartE2EDuration="6.034906331s" podCreationTimestamp="2025-10-08 14:45:06 +0000 UTC" firstStartedPulling="2025-10-08 14:45:08.403035317 +0000 UTC m=+1333.553970384" lastFinishedPulling="2025-10-08 14:45:11.53537009 +0000 UTC m=+1336.686305167" observedRunningTime="2025-10-08 14:45:12.006010173 +0000 UTC m=+1337.156945270" watchObservedRunningTime="2025-10-08 14:45:12.034906331 +0000 UTC m=+1337.185841408" Oct 08 14:45:12 crc kubenswrapper[4624]: I1008 14:45:12.982027 4624 generic.go:334] "Generic (PLEG): container finished" podID="f5ff15c9-ddbc-409e-b605-5e453914798f" containerID="27e97760b5d94789861ab68a09394da323fbfe3fc45df52aee39f9b7e8453975" exitCode=2 Oct 08 14:45:12 crc kubenswrapper[4624]: I1008 14:45:12.982380 4624 generic.go:334] "Generic (PLEG): container finished" 
podID="f5ff15c9-ddbc-409e-b605-5e453914798f" containerID="d74195a84662b63193da4b8d0b1d6488582263fd09565c6b09b6b8f909adb530" exitCode=0 Oct 08 14:45:12 crc kubenswrapper[4624]: I1008 14:45:12.982079 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5ff15c9-ddbc-409e-b605-5e453914798f","Type":"ContainerDied","Data":"27e97760b5d94789861ab68a09394da323fbfe3fc45df52aee39f9b7e8453975"} Oct 08 14:45:12 crc kubenswrapper[4624]: I1008 14:45:12.982489 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5ff15c9-ddbc-409e-b605-5e453914798f","Type":"ContainerDied","Data":"d74195a84662b63193da4b8d0b1d6488582263fd09565c6b09b6b8f909adb530"} Oct 08 14:45:13 crc kubenswrapper[4624]: I1008 14:45:13.424123 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:45:13 crc kubenswrapper[4624]: I1008 14:45:13.515197 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b7bb65955-qb2vf"] Oct 08 14:45:13 crc kubenswrapper[4624]: I1008 14:45:13.515787 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b7bb65955-qb2vf" podUID="300a4df0-470a-4677-ba91-c9676e430849" containerName="dnsmasq-dns" containerID="cri-o://e68036154b1488ba6487d9deb2209f8c5693b4a6080fcccfb37c64b5ac8a408c" gracePeriod=10 Oct 08 14:45:14 crc kubenswrapper[4624]: I1008 14:45:14.001498 4624 generic.go:334] "Generic (PLEG): container finished" podID="300a4df0-470a-4677-ba91-c9676e430849" containerID="e68036154b1488ba6487d9deb2209f8c5693b4a6080fcccfb37c64b5ac8a408c" exitCode=0 Oct 08 14:45:14 crc kubenswrapper[4624]: I1008 14:45:14.001598 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b7bb65955-qb2vf" event={"ID":"300a4df0-470a-4677-ba91-c9676e430849","Type":"ContainerDied","Data":"e68036154b1488ba6487d9deb2209f8c5693b4a6080fcccfb37c64b5ac8a408c"} Oct 08 14:45:14 crc kubenswrapper[4624]: I1008 14:45:14.184406 4624 util.go:48] "No ready sandbox for pod can be found. 
Oct 08 14:45:14 crc kubenswrapper[4624]: I1008 14:45:14.287330 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-config\") pod \"300a4df0-470a-4677-ba91-c9676e430849\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") "
Oct 08 14:45:14 crc kubenswrapper[4624]: I1008 14:45:14.287414 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-dns-swift-storage-0\") pod \"300a4df0-470a-4677-ba91-c9676e430849\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") "
Oct 08 14:45:14 crc kubenswrapper[4624]: I1008 14:45:14.287487 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-dns-svc\") pod \"300a4df0-470a-4677-ba91-c9676e430849\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") "
Oct 08 14:45:14 crc kubenswrapper[4624]: I1008 14:45:14.287524 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-ovsdbserver-nb\") pod \"300a4df0-470a-4677-ba91-c9676e430849\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") "
Oct 08 14:45:14 crc kubenswrapper[4624]: I1008 14:45:14.287550 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-ovsdbserver-sb\") pod \"300a4df0-470a-4677-ba91-c9676e430849\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") "
Oct 08 14:45:14 crc kubenswrapper[4624]: I1008 14:45:14.287690 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8444\" (UniqueName: \"kubernetes.io/projected/300a4df0-470a-4677-ba91-c9676e430849-kube-api-access-m8444\") pod \"300a4df0-470a-4677-ba91-c9676e430849\" (UID: \"300a4df0-470a-4677-ba91-c9676e430849\") "
Oct 08 14:45:14 crc kubenswrapper[4624]: I1008 14:45:14.348447 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/300a4df0-470a-4677-ba91-c9676e430849-kube-api-access-m8444" (OuterVolumeSpecName: "kube-api-access-m8444") pod "300a4df0-470a-4677-ba91-c9676e430849" (UID: "300a4df0-470a-4677-ba91-c9676e430849"). InnerVolumeSpecName "kube-api-access-m8444". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:45:14 crc kubenswrapper[4624]: I1008 14:45:14.391337 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8444\" (UniqueName: \"kubernetes.io/projected/300a4df0-470a-4677-ba91-c9676e430849-kube-api-access-m8444\") on node \"crc\" DevicePath \"\""
Oct 08 14:45:14 crc kubenswrapper[4624]: I1008 14:45:14.403606 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-config" (OuterVolumeSpecName: "config") pod "300a4df0-470a-4677-ba91-c9676e430849" (UID: "300a4df0-470a-4677-ba91-c9676e430849"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 14:45:14 crc kubenswrapper[4624]: I1008 14:45:14.454125 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "300a4df0-470a-4677-ba91-c9676e430849" (UID: "300a4df0-470a-4677-ba91-c9676e430849"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 14:45:14 crc kubenswrapper[4624]: I1008 14:45:14.481855 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "300a4df0-470a-4677-ba91-c9676e430849" (UID: "300a4df0-470a-4677-ba91-c9676e430849"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 14:45:14 crc kubenswrapper[4624]: I1008 14:45:14.484972 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "300a4df0-470a-4677-ba91-c9676e430849" (UID: "300a4df0-470a-4677-ba91-c9676e430849"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 14:45:14 crc kubenswrapper[4624]: I1008 14:45:14.490893 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "300a4df0-470a-4677-ba91-c9676e430849" (UID: "300a4df0-470a-4677-ba91-c9676e430849"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 14:45:14 crc kubenswrapper[4624]: I1008 14:45:14.494046 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 08 14:45:14 crc kubenswrapper[4624]: I1008 14:45:14.494084 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 08 14:45:14 crc kubenswrapper[4624]: I1008 14:45:14.494094 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 08 14:45:14 crc kubenswrapper[4624]: I1008 14:45:14.494103 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-config\") on node \"crc\" DevicePath \"\""
Oct 08 14:45:14 crc kubenswrapper[4624]: I1008 14:45:14.494112 4624 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/300a4df0-470a-4677-ba91-c9676e430849-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Oct 08 14:45:15 crc kubenswrapper[4624]: I1008 14:45:15.011886 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b7bb65955-qb2vf" event={"ID":"300a4df0-470a-4677-ba91-c9676e430849","Type":"ContainerDied","Data":"ba67f3371f534420fa3aaaf9d914c15cd91b54ec86096f45987a06566f046934"}
Oct 08 14:45:15 crc kubenswrapper[4624]: I1008 14:45:15.011952 4624 scope.go:117] "RemoveContainer" containerID="e68036154b1488ba6487d9deb2209f8c5693b4a6080fcccfb37c64b5ac8a408c"
containerID="e68036154b1488ba6487d9deb2209f8c5693b4a6080fcccfb37c64b5ac8a408c" Oct 08 14:45:15 crc kubenswrapper[4624]: I1008 14:45:15.011963 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b7bb65955-qb2vf" Oct 08 14:45:15 crc kubenswrapper[4624]: I1008 14:45:15.059919 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b7bb65955-qb2vf"] Oct 08 14:45:15 crc kubenswrapper[4624]: I1008 14:45:15.061463 4624 scope.go:117] "RemoveContainer" containerID="27a2a9616f19d555481fe0cb4dc59c5f029a11373cf0dda125f275ce14e6effc" Oct 08 14:45:15 crc kubenswrapper[4624]: I1008 14:45:15.068326 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b7bb65955-qb2vf"] Oct 08 14:45:15 crc kubenswrapper[4624]: I1008 14:45:15.478971 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="300a4df0-470a-4677-ba91-c9676e430849" path="/var/lib/kubelet/pods/300a4df0-470a-4677-ba91-c9676e430849/volumes" Oct 08 14:45:16 crc kubenswrapper[4624]: I1008 14:45:16.025311 4624 generic.go:334] "Generic (PLEG): container finished" podID="f5ff15c9-ddbc-409e-b605-5e453914798f" containerID="950afcb496a4929e32d8027f33e152661f94ab1988e00805c73d58bd9cf9cb93" exitCode=0 Oct 08 14:45:16 crc kubenswrapper[4624]: I1008 14:45:16.025366 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5ff15c9-ddbc-409e-b605-5e453914798f","Type":"ContainerDied","Data":"950afcb496a4929e32d8027f33e152661f94ab1988e00805c73d58bd9cf9cb93"} Oct 08 14:45:18 crc kubenswrapper[4624]: I1008 14:45:18.052292 4624 generic.go:334] "Generic (PLEG): container finished" podID="e373c0bb-06d9-45b9-ac73-eb55282374f4" containerID="d32422146bc184c711eeaf0da36e9227381abf110459af28f1479dc7033e276a" exitCode=0 Oct 08 14:45:18 crc kubenswrapper[4624]: I1008 14:45:18.052348 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-r2jkt" event={"ID":"e373c0bb-06d9-45b9-ac73-eb55282374f4","Type":"ContainerDied","Data":"d32422146bc184c711eeaf0da36e9227381abf110459af28f1479dc7033e276a"} Oct 08 14:45:18 crc kubenswrapper[4624]: I1008 14:45:18.312209 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 08 14:45:18 crc kubenswrapper[4624]: I1008 14:45:18.312587 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 08 14:45:18 crc kubenswrapper[4624]: I1008 14:45:18.334201 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 08 14:45:18 crc kubenswrapper[4624]: I1008 14:45:18.343922 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 08 14:45:19 crc kubenswrapper[4624]: I1008 14:45:19.423923 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-r2jkt" Oct 08 14:45:19 crc kubenswrapper[4624]: I1008 14:45:19.590732 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqf7h\" (UniqueName: \"kubernetes.io/projected/e373c0bb-06d9-45b9-ac73-eb55282374f4-kube-api-access-tqf7h\") pod \"e373c0bb-06d9-45b9-ac73-eb55282374f4\" (UID: \"e373c0bb-06d9-45b9-ac73-eb55282374f4\") " Oct 08 14:45:19 crc kubenswrapper[4624]: I1008 14:45:19.590895 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e373c0bb-06d9-45b9-ac73-eb55282374f4-config-data\") pod \"e373c0bb-06d9-45b9-ac73-eb55282374f4\" (UID: \"e373c0bb-06d9-45b9-ac73-eb55282374f4\") " Oct 08 14:45:19 crc kubenswrapper[4624]: I1008 14:45:19.591002 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e373c0bb-06d9-45b9-ac73-eb55282374f4-scripts\") pod \"e373c0bb-06d9-45b9-ac73-eb55282374f4\" (UID: \"e373c0bb-06d9-45b9-ac73-eb55282374f4\") " Oct 08 14:45:19 crc kubenswrapper[4624]: I1008 14:45:19.591090 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e373c0bb-06d9-45b9-ac73-eb55282374f4-combined-ca-bundle\") pod \"e373c0bb-06d9-45b9-ac73-eb55282374f4\" (UID: \"e373c0bb-06d9-45b9-ac73-eb55282374f4\") " Oct 08 14:45:19 crc kubenswrapper[4624]: I1008 14:45:19.596764 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e373c0bb-06d9-45b9-ac73-eb55282374f4-kube-api-access-tqf7h" (OuterVolumeSpecName: "kube-api-access-tqf7h") pod "e373c0bb-06d9-45b9-ac73-eb55282374f4" (UID: "e373c0bb-06d9-45b9-ac73-eb55282374f4"). InnerVolumeSpecName "kube-api-access-tqf7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:45:19 crc kubenswrapper[4624]: I1008 14:45:19.597323 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e373c0bb-06d9-45b9-ac73-eb55282374f4-scripts" (OuterVolumeSpecName: "scripts") pod "e373c0bb-06d9-45b9-ac73-eb55282374f4" (UID: "e373c0bb-06d9-45b9-ac73-eb55282374f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:19 crc kubenswrapper[4624]: I1008 14:45:19.622797 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e373c0bb-06d9-45b9-ac73-eb55282374f4-config-data" (OuterVolumeSpecName: "config-data") pod "e373c0bb-06d9-45b9-ac73-eb55282374f4" (UID: "e373c0bb-06d9-45b9-ac73-eb55282374f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:19 crc kubenswrapper[4624]: I1008 14:45:19.626530 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e373c0bb-06d9-45b9-ac73-eb55282374f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e373c0bb-06d9-45b9-ac73-eb55282374f4" (UID: "e373c0bb-06d9-45b9-ac73-eb55282374f4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:19 crc kubenswrapper[4624]: I1008 14:45:19.693839 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e373c0bb-06d9-45b9-ac73-eb55282374f4-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:19 crc kubenswrapper[4624]: I1008 14:45:19.694050 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e373c0bb-06d9-45b9-ac73-eb55282374f4-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:19 crc kubenswrapper[4624]: I1008 14:45:19.694105 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e373c0bb-06d9-45b9-ac73-eb55282374f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:19 crc kubenswrapper[4624]: I1008 14:45:19.694234 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqf7h\" (UniqueName: \"kubernetes.io/projected/e373c0bb-06d9-45b9-ac73-eb55282374f4-kube-api-access-tqf7h\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:20 crc kubenswrapper[4624]: I1008 14:45:20.074030 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-r2jkt" Oct 08 14:45:20 crc kubenswrapper[4624]: I1008 14:45:20.074836 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-r2jkt" event={"ID":"e373c0bb-06d9-45b9-ac73-eb55282374f4","Type":"ContainerDied","Data":"87597e8e285263d493f1405b4ad5a6141fb67d731d89c18de413fb92074b98a3"} Oct 08 14:45:20 crc kubenswrapper[4624]: I1008 14:45:20.074873 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87597e8e285263d493f1405b4ad5a6141fb67d731d89c18de413fb92074b98a3" Oct 08 14:45:20 crc kubenswrapper[4624]: I1008 14:45:20.255031 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 14:45:20 crc kubenswrapper[4624]: I1008 14:45:20.255812 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="33fd0970-dd4e-4ec5-bbc6-f0910db1354d" containerName="nova-api-api" containerID="cri-o://2a3a0e49a147803ca5ceedfaecb9a4fa07db0c812656bd014014c9104435a05c" gracePeriod=30 Oct 08 14:45:20 crc kubenswrapper[4624]: I1008 14:45:20.257069 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="33fd0970-dd4e-4ec5-bbc6-f0910db1354d" containerName="nova-api-log" containerID="cri-o://64b0ce0e0507cacd113ec96fe6c7f84c2f175cf7c669dc5f3d0acff623d1d590" gracePeriod=30 Oct 08 14:45:20 crc kubenswrapper[4624]: I1008 14:45:20.291838 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 14:45:20 crc kubenswrapper[4624]: I1008 14:45:20.292050 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="31486d82-7d6f-47c1-9cbd-7e7f9fc23638" containerName="nova-scheduler-scheduler" containerID="cri-o://8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c" gracePeriod=30 Oct 08 14:45:20 crc kubenswrapper[4624]: I1008 14:45:20.314948 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 14:45:20 crc kubenswrapper[4624]: E1008 14:45:20.822967 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is 
stopping, stdout: , stderr: , exit code -1" containerID="8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 14:45:20 crc kubenswrapper[4624]: E1008 14:45:20.825425 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 14:45:20 crc kubenswrapper[4624]: E1008 14:45:20.828746 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 14:45:20 crc kubenswrapper[4624]: E1008 14:45:20.828818 4624 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="31486d82-7d6f-47c1-9cbd-7e7f9fc23638" containerName="nova-scheduler-scheduler" Oct 08 14:45:20 crc kubenswrapper[4624]: I1008 14:45:20.884609 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.019516 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgsh5\" (UniqueName: \"kubernetes.io/projected/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-kube-api-access-sgsh5\") pod \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.019654 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-internal-tls-certs\") pod \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.019767 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-logs\") pod \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.019791 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-public-tls-certs\") pod \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.019821 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-combined-ca-bundle\") pod \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.019889 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-config-data\") pod 
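The failing readiness probe on nova-scheduler-0 is an exec probe; the command is quoted verbatim in the errors above. Once CRI-O begins stopping the container it refuses to register new exec PIDs, so the probe errors until the container is gone. A sketch of what the probe amounts to, run locally (the cmd is taken from the log; exit 0 means "ready"):

    // exec_probe.go - sketch: run the probe command from the log and map
    // its exit status to a probe outcome, as an exec probe does.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // pgrep -r DRST: match only processes in run states D, R, S or T.
        out, err := exec.Command("/usr/bin/pgrep", "-r", "DRST", "nova-scheduler").CombinedOutput()
        if err != nil {
            // Non-zero exit (no matching process) or exec failure: not ready.
            fmt.Printf("probe failed: %v, output: %q\n", err, out)
            return
        }
        fmt.Printf("probe succeeded, pids:\n%s", out)
    }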
\"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\" (UID: \"33fd0970-dd4e-4ec5-bbc6-f0910db1354d\") " Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.020283 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-logs" (OuterVolumeSpecName: "logs") pod "33fd0970-dd4e-4ec5-bbc6-f0910db1354d" (UID: "33fd0970-dd4e-4ec5-bbc6-f0910db1354d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.025798 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-kube-api-access-sgsh5" (OuterVolumeSpecName: "kube-api-access-sgsh5") pod "33fd0970-dd4e-4ec5-bbc6-f0910db1354d" (UID: "33fd0970-dd4e-4ec5-bbc6-f0910db1354d"). InnerVolumeSpecName "kube-api-access-sgsh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.057971 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33fd0970-dd4e-4ec5-bbc6-f0910db1354d" (UID: "33fd0970-dd4e-4ec5-bbc6-f0910db1354d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.089214 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "33fd0970-dd4e-4ec5-bbc6-f0910db1354d" (UID: "33fd0970-dd4e-4ec5-bbc6-f0910db1354d"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.093481 4624 generic.go:334] "Generic (PLEG): container finished" podID="33fd0970-dd4e-4ec5-bbc6-f0910db1354d" containerID="2a3a0e49a147803ca5ceedfaecb9a4fa07db0c812656bd014014c9104435a05c" exitCode=0 Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.093516 4624 generic.go:334] "Generic (PLEG): container finished" podID="33fd0970-dd4e-4ec5-bbc6-f0910db1354d" containerID="64b0ce0e0507cacd113ec96fe6c7f84c2f175cf7c669dc5f3d0acff623d1d590" exitCode=143 Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.093612 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"33fd0970-dd4e-4ec5-bbc6-f0910db1354d","Type":"ContainerDied","Data":"2a3a0e49a147803ca5ceedfaecb9a4fa07db0c812656bd014014c9104435a05c"} Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.093721 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="025409f2-bc81-4fb0-880c-8bb405abfdc4" containerName="nova-metadata-log" containerID="cri-o://0879020797ad83aa105e30ef9cdede5310517920443e0bdc439df9ffc04b871d" gracePeriod=30 Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.093736 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"33fd0970-dd4e-4ec5-bbc6-f0910db1354d","Type":"ContainerDied","Data":"64b0ce0e0507cacd113ec96fe6c7f84c2f175cf7c669dc5f3d0acff623d1d590"} Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.094317 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"33fd0970-dd4e-4ec5-bbc6-f0910db1354d","Type":"ContainerDied","Data":"1fd2821fad0e64f6ab7076790f2ecbe363e348d6a007a85f964887b8db4b9d6a"} Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.093799 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="025409f2-bc81-4fb0-880c-8bb405abfdc4" containerName="nova-metadata-metadata" containerID="cri-o://f6ed0d5eae4385e77f35fe4fee0903a219e0772aa21b362512b52f9488b7f6da" gracePeriod=30 Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.093666 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.093843 4624 scope.go:117] "RemoveContainer" containerID="2a3a0e49a147803ca5ceedfaecb9a4fa07db0c812656bd014014c9104435a05c" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.115656 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-config-data" (OuterVolumeSpecName: "config-data") pod "33fd0970-dd4e-4ec5-bbc6-f0910db1354d" (UID: "33fd0970-dd4e-4ec5-bbc6-f0910db1354d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.131027 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.131531 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgsh5\" (UniqueName: \"kubernetes.io/projected/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-kube-api-access-sgsh5\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.131619 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-logs\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.131732 4624 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.131801 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.133359 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "33fd0970-dd4e-4ec5-bbc6-f0910db1354d" (UID: "33fd0970-dd4e-4ec5-bbc6-f0910db1354d"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.147923 4624 scope.go:117] "RemoveContainer" containerID="64b0ce0e0507cacd113ec96fe6c7f84c2f175cf7c669dc5f3d0acff623d1d590" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.165501 4624 scope.go:117] "RemoveContainer" containerID="2a3a0e49a147803ca5ceedfaecb9a4fa07db0c812656bd014014c9104435a05c" Oct 08 14:45:21 crc kubenswrapper[4624]: E1008 14:45:21.165959 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a3a0e49a147803ca5ceedfaecb9a4fa07db0c812656bd014014c9104435a05c\": container with ID starting with 2a3a0e49a147803ca5ceedfaecb9a4fa07db0c812656bd014014c9104435a05c not found: ID does not exist" containerID="2a3a0e49a147803ca5ceedfaecb9a4fa07db0c812656bd014014c9104435a05c" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.166001 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a3a0e49a147803ca5ceedfaecb9a4fa07db0c812656bd014014c9104435a05c"} err="failed to get container status \"2a3a0e49a147803ca5ceedfaecb9a4fa07db0c812656bd014014c9104435a05c\": rpc error: code = NotFound desc = could not find container \"2a3a0e49a147803ca5ceedfaecb9a4fa07db0c812656bd014014c9104435a05c\": container with ID starting with 2a3a0e49a147803ca5ceedfaecb9a4fa07db0c812656bd014014c9104435a05c not found: ID does not exist" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.166033 4624 scope.go:117] "RemoveContainer" containerID="64b0ce0e0507cacd113ec96fe6c7f84c2f175cf7c669dc5f3d0acff623d1d590" Oct 08 14:45:21 crc kubenswrapper[4624]: E1008 14:45:21.166322 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64b0ce0e0507cacd113ec96fe6c7f84c2f175cf7c669dc5f3d0acff623d1d590\": container with ID starting with 64b0ce0e0507cacd113ec96fe6c7f84c2f175cf7c669dc5f3d0acff623d1d590 not found: ID does not exist" containerID="64b0ce0e0507cacd113ec96fe6c7f84c2f175cf7c669dc5f3d0acff623d1d590" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.166348 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64b0ce0e0507cacd113ec96fe6c7f84c2f175cf7c669dc5f3d0acff623d1d590"} err="failed to get container status \"64b0ce0e0507cacd113ec96fe6c7f84c2f175cf7c669dc5f3d0acff623d1d590\": rpc error: code = NotFound desc = could not find container \"64b0ce0e0507cacd113ec96fe6c7f84c2f175cf7c669dc5f3d0acff623d1d590\": container with ID starting with 64b0ce0e0507cacd113ec96fe6c7f84c2f175cf7c669dc5f3d0acff623d1d590 not found: ID does not exist" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.166368 4624 scope.go:117] "RemoveContainer" containerID="2a3a0e49a147803ca5ceedfaecb9a4fa07db0c812656bd014014c9104435a05c" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.166603 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a3a0e49a147803ca5ceedfaecb9a4fa07db0c812656bd014014c9104435a05c"} err="failed to get container status \"2a3a0e49a147803ca5ceedfaecb9a4fa07db0c812656bd014014c9104435a05c\": rpc error: code = NotFound desc = could not find container \"2a3a0e49a147803ca5ceedfaecb9a4fa07db0c812656bd014014c9104435a05c\": container with ID starting with 2a3a0e49a147803ca5ceedfaecb9a4fa07db0c812656bd014014c9104435a05c not found: ID does not exist" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.166700 4624 
scope.go:117] "RemoveContainer" containerID="64b0ce0e0507cacd113ec96fe6c7f84c2f175cf7c669dc5f3d0acff623d1d590" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.166908 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64b0ce0e0507cacd113ec96fe6c7f84c2f175cf7c669dc5f3d0acff623d1d590"} err="failed to get container status \"64b0ce0e0507cacd113ec96fe6c7f84c2f175cf7c669dc5f3d0acff623d1d590\": rpc error: code = NotFound desc = could not find container \"64b0ce0e0507cacd113ec96fe6c7f84c2f175cf7c669dc5f3d0acff623d1d590\": container with ID starting with 64b0ce0e0507cacd113ec96fe6c7f84c2f175cf7c669dc5f3d0acff623d1d590 not found: ID does not exist" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.239165 4624 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33fd0970-dd4e-4ec5-bbc6-f0910db1354d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.508405 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.526136 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.536336 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 08 14:45:21 crc kubenswrapper[4624]: E1008 14:45:21.536829 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e373c0bb-06d9-45b9-ac73-eb55282374f4" containerName="nova-manage" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.536853 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e373c0bb-06d9-45b9-ac73-eb55282374f4" containerName="nova-manage" Oct 08 14:45:21 crc kubenswrapper[4624]: E1008 14:45:21.536872 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33fd0970-dd4e-4ec5-bbc6-f0910db1354d" containerName="nova-api-api" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.536878 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="33fd0970-dd4e-4ec5-bbc6-f0910db1354d" containerName="nova-api-api" Oct 08 14:45:21 crc kubenswrapper[4624]: E1008 14:45:21.536889 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33fd0970-dd4e-4ec5-bbc6-f0910db1354d" containerName="nova-api-log" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.536895 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="33fd0970-dd4e-4ec5-bbc6-f0910db1354d" containerName="nova-api-log" Oct 08 14:45:21 crc kubenswrapper[4624]: E1008 14:45:21.536907 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="300a4df0-470a-4677-ba91-c9676e430849" containerName="dnsmasq-dns" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.537047 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="300a4df0-470a-4677-ba91-c9676e430849" containerName="dnsmasq-dns" Oct 08 14:45:21 crc kubenswrapper[4624]: E1008 14:45:21.537060 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="300a4df0-470a-4677-ba91-c9676e430849" containerName="init" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.537066 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="300a4df0-470a-4677-ba91-c9676e430849" containerName="init" Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.537274 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e373c0bb-06d9-45b9-ac73-eb55282374f4" containerName="nova-manage" Oct 08 
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.537292 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="33fd0970-dd4e-4ec5-bbc6-f0910db1354d" containerName="nova-api-api"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.537307 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="33fd0970-dd4e-4ec5-bbc6-f0910db1354d" containerName="nova-api-log"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.537326 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="300a4df0-470a-4677-ba91-c9676e430849" containerName="dnsmasq-dns"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.538348 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.546035 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cee7d40-61a2-4b4b-87d9-531196e95a8d-logs\") pod \"nova-api-0\" (UID: \"7cee7d40-61a2-4b4b-87d9-531196e95a8d\") " pod="openstack/nova-api-0"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.546153 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cee7d40-61a2-4b4b-87d9-531196e95a8d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7cee7d40-61a2-4b4b-87d9-531196e95a8d\") " pod="openstack/nova-api-0"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.546185 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cee7d40-61a2-4b4b-87d9-531196e95a8d-public-tls-certs\") pod \"nova-api-0\" (UID: \"7cee7d40-61a2-4b4b-87d9-531196e95a8d\") " pod="openstack/nova-api-0"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.546238 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cee7d40-61a2-4b4b-87d9-531196e95a8d-config-data\") pod \"nova-api-0\" (UID: \"7cee7d40-61a2-4b4b-87d9-531196e95a8d\") " pod="openstack/nova-api-0"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.546272 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cee7d40-61a2-4b4b-87d9-531196e95a8d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7cee7d40-61a2-4b4b-87d9-531196e95a8d\") " pod="openstack/nova-api-0"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.546301 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rstkh\" (UniqueName: \"kubernetes.io/projected/7cee7d40-61a2-4b4b-87d9-531196e95a8d-kube-api-access-rstkh\") pod \"nova-api-0\" (UID: \"7cee7d40-61a2-4b4b-87d9-531196e95a8d\") " pod="openstack/nova-api-0"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.548361 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.548704 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.549060 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.571437 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.647486 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cee7d40-61a2-4b4b-87d9-531196e95a8d-logs\") pod \"nova-api-0\" (UID: \"7cee7d40-61a2-4b4b-87d9-531196e95a8d\") " pod="openstack/nova-api-0"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.647768 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cee7d40-61a2-4b4b-87d9-531196e95a8d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7cee7d40-61a2-4b4b-87d9-531196e95a8d\") " pod="openstack/nova-api-0"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.647924 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cee7d40-61a2-4b4b-87d9-531196e95a8d-public-tls-certs\") pod \"nova-api-0\" (UID: \"7cee7d40-61a2-4b4b-87d9-531196e95a8d\") " pod="openstack/nova-api-0"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.647966 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cee7d40-61a2-4b4b-87d9-531196e95a8d-logs\") pod \"nova-api-0\" (UID: \"7cee7d40-61a2-4b4b-87d9-531196e95a8d\") " pod="openstack/nova-api-0"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.648194 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cee7d40-61a2-4b4b-87d9-531196e95a8d-config-data\") pod \"nova-api-0\" (UID: \"7cee7d40-61a2-4b4b-87d9-531196e95a8d\") " pod="openstack/nova-api-0"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.648304 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cee7d40-61a2-4b4b-87d9-531196e95a8d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7cee7d40-61a2-4b4b-87d9-531196e95a8d\") " pod="openstack/nova-api-0"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.648735 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rstkh\" (UniqueName: \"kubernetes.io/projected/7cee7d40-61a2-4b4b-87d9-531196e95a8d-kube-api-access-rstkh\") pod \"nova-api-0\" (UID: \"7cee7d40-61a2-4b4b-87d9-531196e95a8d\") " pod="openstack/nova-api-0"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.652195 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cee7d40-61a2-4b4b-87d9-531196e95a8d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7cee7d40-61a2-4b4b-87d9-531196e95a8d\") " pod="openstack/nova-api-0"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.653855 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cee7d40-61a2-4b4b-87d9-531196e95a8d-config-data\") pod \"nova-api-0\" (UID: \"7cee7d40-61a2-4b4b-87d9-531196e95a8d\") " pod="openstack/nova-api-0"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.654245 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cee7d40-61a2-4b4b-87d9-531196e95a8d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7cee7d40-61a2-4b4b-87d9-531196e95a8d\") " pod="openstack/nova-api-0"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.673935 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cee7d40-61a2-4b4b-87d9-531196e95a8d-public-tls-certs\") pod \"nova-api-0\" (UID: \"7cee7d40-61a2-4b4b-87d9-531196e95a8d\") " pod="openstack/nova-api-0"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.675922 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rstkh\" (UniqueName: \"kubernetes.io/projected/7cee7d40-61a2-4b4b-87d9-531196e95a8d-kube-api-access-rstkh\") pod \"nova-api-0\" (UID: \"7cee7d40-61a2-4b4b-87d9-531196e95a8d\") " pod="openstack/nova-api-0"
Oct 08 14:45:21 crc kubenswrapper[4624]: I1008 14:45:21.874098 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 08 14:45:22 crc kubenswrapper[4624]: I1008 14:45:22.112101 4624 generic.go:334] "Generic (PLEG): container finished" podID="025409f2-bc81-4fb0-880c-8bb405abfdc4" containerID="0879020797ad83aa105e30ef9cdede5310517920443e0bdc439df9ffc04b871d" exitCode=143
Oct 08 14:45:22 crc kubenswrapper[4624]: I1008 14:45:22.112179 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"025409f2-bc81-4fb0-880c-8bb405abfdc4","Type":"ContainerDied","Data":"0879020797ad83aa105e30ef9cdede5310517920443e0bdc439df9ffc04b871d"}
Oct 08 14:45:22 crc kubenswrapper[4624]: I1008 14:45:22.445677 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 08 14:45:23 crc kubenswrapper[4624]: I1008 14:45:23.121732 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7cee7d40-61a2-4b4b-87d9-531196e95a8d","Type":"ContainerStarted","Data":"ae5be93452ce35138f5493ac81990728b6c022e3544cc2af8dd71bfcdcd8b339"}
Oct 08 14:45:23 crc kubenswrapper[4624]: I1008 14:45:23.122262 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7cee7d40-61a2-4b4b-87d9-531196e95a8d","Type":"ContainerStarted","Data":"3f23bcd18385f2d58d49406d63823faaabd5c264b1d5b88f04971512e012070f"}
Oct 08 14:45:23 crc kubenswrapper[4624]: I1008 14:45:23.122276 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7cee7d40-61a2-4b4b-87d9-531196e95a8d","Type":"ContainerStarted","Data":"506c4052696f825af6fb8950fff88cae31129c3191775936bf47e4188a228a0e"}
Oct 08 14:45:23 crc kubenswrapper[4624]: I1008 14:45:23.139475 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.13945705 podStartE2EDuration="2.13945705s" podCreationTimestamp="2025-10-08 14:45:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:45:23.136783222 +0000 UTC m=+1348.287718299" watchObservedRunningTime="2025-10-08 14:45:23.13945705 +0000 UTC m=+1348.290392127"
Oct 08 14:45:23 crc kubenswrapper[4624]: I1008 14:45:23.477262 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33fd0970-dd4e-4ec5-bbc6-f0910db1354d" path="/var/lib/kubelet/pods/33fd0970-dd4e-4ec5-bbc6-f0910db1354d/volumes"
Oct 08 14:45:24 crc kubenswrapper[4624]: I1008 14:45:24.268522 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="025409f2-bc81-4fb0-880c-8bb405abfdc4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": read tcp 10.217.0.2:56858->10.217.0.212:8775: read: connection reset by peer"
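The pod_startup_latency_tracker line above reports roughly 2.14s from the new nova-api-0's creation (14:45:21) to it being observed running; the zero-value firstStartedPulling/lastFinishedPulling timestamps mean no image pull was needed. A sketch of the arithmetic, using the timestamps quoted in the log:

    // startup_latency.go - sketch: observed-running time minus creation
    // timestamp, reproducing the podStartSLOduration=2.13945705 above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        layout := "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-10-08 14:45:21 +0000 UTC")
        observed, _ := time.Parse(layout, "2025-10-08 14:45:23.13945705 +0000 UTC")
        fmt.Println("startup duration:", observed.Sub(created)) // 2.13945705s
    }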
\"https://10.217.0.212:8775/\": read tcp 10.217.0.2:56858->10.217.0.212:8775: read: connection reset by peer" Oct 08 14:45:24 crc kubenswrapper[4624]: I1008 14:45:24.268896 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="025409f2-bc81-4fb0-880c-8bb405abfdc4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": read tcp 10.217.0.2:56862->10.217.0.212:8775: read: connection reset by peer" Oct 08 14:45:24 crc kubenswrapper[4624]: I1008 14:45:24.841139 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.028269 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/025409f2-bc81-4fb0-880c-8bb405abfdc4-logs\") pod \"025409f2-bc81-4fb0-880c-8bb405abfdc4\" (UID: \"025409f2-bc81-4fb0-880c-8bb405abfdc4\") " Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.028335 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025409f2-bc81-4fb0-880c-8bb405abfdc4-combined-ca-bundle\") pod \"025409f2-bc81-4fb0-880c-8bb405abfdc4\" (UID: \"025409f2-bc81-4fb0-880c-8bb405abfdc4\") " Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.028399 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/025409f2-bc81-4fb0-880c-8bb405abfdc4-nova-metadata-tls-certs\") pod \"025409f2-bc81-4fb0-880c-8bb405abfdc4\" (UID: \"025409f2-bc81-4fb0-880c-8bb405abfdc4\") " Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.028433 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/025409f2-bc81-4fb0-880c-8bb405abfdc4-config-data\") pod \"025409f2-bc81-4fb0-880c-8bb405abfdc4\" (UID: \"025409f2-bc81-4fb0-880c-8bb405abfdc4\") " Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.028477 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klsqx\" (UniqueName: \"kubernetes.io/projected/025409f2-bc81-4fb0-880c-8bb405abfdc4-kube-api-access-klsqx\") pod \"025409f2-bc81-4fb0-880c-8bb405abfdc4\" (UID: \"025409f2-bc81-4fb0-880c-8bb405abfdc4\") " Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.031991 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/025409f2-bc81-4fb0-880c-8bb405abfdc4-logs" (OuterVolumeSpecName: "logs") pod "025409f2-bc81-4fb0-880c-8bb405abfdc4" (UID: "025409f2-bc81-4fb0-880c-8bb405abfdc4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.039152 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/025409f2-bc81-4fb0-880c-8bb405abfdc4-kube-api-access-klsqx" (OuterVolumeSpecName: "kube-api-access-klsqx") pod "025409f2-bc81-4fb0-880c-8bb405abfdc4" (UID: "025409f2-bc81-4fb0-880c-8bb405abfdc4"). InnerVolumeSpecName "kube-api-access-klsqx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.078296 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/025409f2-bc81-4fb0-880c-8bb405abfdc4-config-data" (OuterVolumeSpecName: "config-data") pod "025409f2-bc81-4fb0-880c-8bb405abfdc4" (UID: "025409f2-bc81-4fb0-880c-8bb405abfdc4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.101098 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/025409f2-bc81-4fb0-880c-8bb405abfdc4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "025409f2-bc81-4fb0-880c-8bb405abfdc4" (UID: "025409f2-bc81-4fb0-880c-8bb405abfdc4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.130471 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/025409f2-bc81-4fb0-880c-8bb405abfdc4-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.130513 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klsqx\" (UniqueName: \"kubernetes.io/projected/025409f2-bc81-4fb0-880c-8bb405abfdc4-kube-api-access-klsqx\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.130527 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/025409f2-bc81-4fb0-880c-8bb405abfdc4-logs\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.130538 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025409f2-bc81-4fb0-880c-8bb405abfdc4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.142886 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/025409f2-bc81-4fb0-880c-8bb405abfdc4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "025409f2-bc81-4fb0-880c-8bb405abfdc4" (UID: "025409f2-bc81-4fb0-880c-8bb405abfdc4"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.156254 4624 generic.go:334] "Generic (PLEG): container finished" podID="025409f2-bc81-4fb0-880c-8bb405abfdc4" containerID="f6ed0d5eae4385e77f35fe4fee0903a219e0772aa21b362512b52f9488b7f6da" exitCode=0 Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.156297 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"025409f2-bc81-4fb0-880c-8bb405abfdc4","Type":"ContainerDied","Data":"f6ed0d5eae4385e77f35fe4fee0903a219e0772aa21b362512b52f9488b7f6da"} Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.156322 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"025409f2-bc81-4fb0-880c-8bb405abfdc4","Type":"ContainerDied","Data":"afbaf2bb5bf168c0af485c9f4daf4f63c608469499da9a2725133a1645f1b425"} Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.156338 4624 scope.go:117] "RemoveContainer" containerID="f6ed0d5eae4385e77f35fe4fee0903a219e0772aa21b362512b52f9488b7f6da" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.156500 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.198546 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.198962 4624 scope.go:117] "RemoveContainer" containerID="0879020797ad83aa105e30ef9cdede5310517920443e0bdc439df9ffc04b871d" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.210082 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.223098 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 08 14:45:25 crc kubenswrapper[4624]: E1008 14:45:25.223501 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025409f2-bc81-4fb0-880c-8bb405abfdc4" containerName="nova-metadata-log" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.223515 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="025409f2-bc81-4fb0-880c-8bb405abfdc4" containerName="nova-metadata-log" Oct 08 14:45:25 crc kubenswrapper[4624]: E1008 14:45:25.223536 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025409f2-bc81-4fb0-880c-8bb405abfdc4" containerName="nova-metadata-metadata" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.223542 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="025409f2-bc81-4fb0-880c-8bb405abfdc4" containerName="nova-metadata-metadata" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.223729 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="025409f2-bc81-4fb0-880c-8bb405abfdc4" containerName="nova-metadata-metadata" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.223745 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="025409f2-bc81-4fb0-880c-8bb405abfdc4" containerName="nova-metadata-log" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.225333 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.227755 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.231062 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.234092 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf362794-4a7e-483b-814b-d73b53e9f28f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cf362794-4a7e-483b-814b-d73b53e9f28f\") " pod="openstack/nova-metadata-0" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.234206 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf362794-4a7e-483b-814b-d73b53e9f28f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cf362794-4a7e-483b-814b-d73b53e9f28f\") " pod="openstack/nova-metadata-0" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.234293 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf362794-4a7e-483b-814b-d73b53e9f28f-logs\") pod \"nova-metadata-0\" (UID: \"cf362794-4a7e-483b-814b-d73b53e9f28f\") " pod="openstack/nova-metadata-0" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.234361 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4kwf\" (UniqueName: \"kubernetes.io/projected/cf362794-4a7e-483b-814b-d73b53e9f28f-kube-api-access-h4kwf\") pod \"nova-metadata-0\" (UID: \"cf362794-4a7e-483b-814b-d73b53e9f28f\") " pod="openstack/nova-metadata-0" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.234393 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf362794-4a7e-483b-814b-d73b53e9f28f-config-data\") pod \"nova-metadata-0\" (UID: \"cf362794-4a7e-483b-814b-d73b53e9f28f\") " pod="openstack/nova-metadata-0" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.234531 4624 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/025409f2-bc81-4fb0-880c-8bb405abfdc4-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.261289 4624 scope.go:117] "RemoveContainer" containerID="f6ed0d5eae4385e77f35fe4fee0903a219e0772aa21b362512b52f9488b7f6da" Oct 08 14:45:25 crc kubenswrapper[4624]: E1008 14:45:25.268860 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6ed0d5eae4385e77f35fe4fee0903a219e0772aa21b362512b52f9488b7f6da\": container with ID starting with f6ed0d5eae4385e77f35fe4fee0903a219e0772aa21b362512b52f9488b7f6da not found: ID does not exist" containerID="f6ed0d5eae4385e77f35fe4fee0903a219e0772aa21b362512b52f9488b7f6da" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.268912 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6ed0d5eae4385e77f35fe4fee0903a219e0772aa21b362512b52f9488b7f6da"} err="failed to get container status 
\"f6ed0d5eae4385e77f35fe4fee0903a219e0772aa21b362512b52f9488b7f6da\": rpc error: code = NotFound desc = could not find container \"f6ed0d5eae4385e77f35fe4fee0903a219e0772aa21b362512b52f9488b7f6da\": container with ID starting with f6ed0d5eae4385e77f35fe4fee0903a219e0772aa21b362512b52f9488b7f6da not found: ID does not exist" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.268936 4624 scope.go:117] "RemoveContainer" containerID="0879020797ad83aa105e30ef9cdede5310517920443e0bdc439df9ffc04b871d" Oct 08 14:45:25 crc kubenswrapper[4624]: E1008 14:45:25.271018 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0879020797ad83aa105e30ef9cdede5310517920443e0bdc439df9ffc04b871d\": container with ID starting with 0879020797ad83aa105e30ef9cdede5310517920443e0bdc439df9ffc04b871d not found: ID does not exist" containerID="0879020797ad83aa105e30ef9cdede5310517920443e0bdc439df9ffc04b871d" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.271058 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0879020797ad83aa105e30ef9cdede5310517920443e0bdc439df9ffc04b871d"} err="failed to get container status \"0879020797ad83aa105e30ef9cdede5310517920443e0bdc439df9ffc04b871d\": rpc error: code = NotFound desc = could not find container \"0879020797ad83aa105e30ef9cdede5310517920443e0bdc439df9ffc04b871d\": container with ID starting with 0879020797ad83aa105e30ef9cdede5310517920443e0bdc439df9ffc04b871d not found: ID does not exist" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.275877 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.335605 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf362794-4a7e-483b-814b-d73b53e9f28f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cf362794-4a7e-483b-814b-d73b53e9f28f\") " pod="openstack/nova-metadata-0" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.335729 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf362794-4a7e-483b-814b-d73b53e9f28f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cf362794-4a7e-483b-814b-d73b53e9f28f\") " pod="openstack/nova-metadata-0" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.335768 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf362794-4a7e-483b-814b-d73b53e9f28f-logs\") pod \"nova-metadata-0\" (UID: \"cf362794-4a7e-483b-814b-d73b53e9f28f\") " pod="openstack/nova-metadata-0" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.335802 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4kwf\" (UniqueName: \"kubernetes.io/projected/cf362794-4a7e-483b-814b-d73b53e9f28f-kube-api-access-h4kwf\") pod \"nova-metadata-0\" (UID: \"cf362794-4a7e-483b-814b-d73b53e9f28f\") " pod="openstack/nova-metadata-0" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.335831 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf362794-4a7e-483b-814b-d73b53e9f28f-config-data\") pod \"nova-metadata-0\" (UID: \"cf362794-4a7e-483b-814b-d73b53e9f28f\") " pod="openstack/nova-metadata-0" Oct 08 14:45:25 crc 
kubenswrapper[4624]: I1008 14:45:25.336625 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf362794-4a7e-483b-814b-d73b53e9f28f-logs\") pod \"nova-metadata-0\" (UID: \"cf362794-4a7e-483b-814b-d73b53e9f28f\") " pod="openstack/nova-metadata-0" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.340176 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf362794-4a7e-483b-814b-d73b53e9f28f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cf362794-4a7e-483b-814b-d73b53e9f28f\") " pod="openstack/nova-metadata-0" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.340349 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf362794-4a7e-483b-814b-d73b53e9f28f-config-data\") pod \"nova-metadata-0\" (UID: \"cf362794-4a7e-483b-814b-d73b53e9f28f\") " pod="openstack/nova-metadata-0" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.341034 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf362794-4a7e-483b-814b-d73b53e9f28f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cf362794-4a7e-483b-814b-d73b53e9f28f\") " pod="openstack/nova-metadata-0" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.359205 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4kwf\" (UniqueName: \"kubernetes.io/projected/cf362794-4a7e-483b-814b-d73b53e9f28f-kube-api-access-h4kwf\") pod \"nova-metadata-0\" (UID: \"cf362794-4a7e-483b-814b-d73b53e9f28f\") " pod="openstack/nova-metadata-0" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.480139 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="025409f2-bc81-4fb0-880c-8bb405abfdc4" path="/var/lib/kubelet/pods/025409f2-bc81-4fb0-880c-8bb405abfdc4/volumes" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.552278 4624 util.go:30] "No sandbox for pod can be found. 
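At this point the replacement nova-metadata-0 (UID cf362794-4a7e-483b-814b-d73b53e9f28f) has all its volumes mounted and the old pod's volume dir has been cleaned up. To follow a teardown-and-recreate sequence like this from outside the node, one option is to query the API server for the pod's events; a hedged client-go sketch (assumes a kubeconfig at the default path):

    // pod_events.go - sketch: list API events for openstack/nova-metadata-0,
    // an outside view of the replacement sequence logged above.
    package main

    import (
        "context"
        "fmt"
        "os"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        evs, err := client.CoreV1().Events("openstack").List(context.TODO(), metav1.ListOptions{
            FieldSelector: "involvedObject.name=nova-metadata-0",
        })
        if err != nil {
            panic(err)
        }
        for _, e := range evs.Items {
            fmt.Printf("%s %s: %s\n", e.LastTimestamp.Format("15:04:05"), e.Reason, e.Message)
        }
    }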
Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 14:45:25 crc kubenswrapper[4624]: E1008 14:45:25.821311 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c is running failed: container process not found" containerID="8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 14:45:25 crc kubenswrapper[4624]: E1008 14:45:25.829004 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c is running failed: container process not found" containerID="8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 14:45:25 crc kubenswrapper[4624]: E1008 14:45:25.830268 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c is running failed: container process not found" containerID="8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 14:45:25 crc kubenswrapper[4624]: E1008 14:45:25.830306 4624 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="31486d82-7d6f-47c1-9cbd-7e7f9fc23638" containerName="nova-scheduler-scheduler" Oct 08 14:45:25 crc kubenswrapper[4624]: I1008 14:45:25.899332 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.053621 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31486d82-7d6f-47c1-9cbd-7e7f9fc23638-config-data\") pod \"31486d82-7d6f-47c1-9cbd-7e7f9fc23638\" (UID: \"31486d82-7d6f-47c1-9cbd-7e7f9fc23638\") " Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.053789 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g8cv\" (UniqueName: \"kubernetes.io/projected/31486d82-7d6f-47c1-9cbd-7e7f9fc23638-kube-api-access-5g8cv\") pod \"31486d82-7d6f-47c1-9cbd-7e7f9fc23638\" (UID: \"31486d82-7d6f-47c1-9cbd-7e7f9fc23638\") " Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.054721 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31486d82-7d6f-47c1-9cbd-7e7f9fc23638-combined-ca-bundle\") pod \"31486d82-7d6f-47c1-9cbd-7e7f9fc23638\" (UID: \"31486d82-7d6f-47c1-9cbd-7e7f9fc23638\") " Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.060551 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31486d82-7d6f-47c1-9cbd-7e7f9fc23638-kube-api-access-5g8cv" (OuterVolumeSpecName: "kube-api-access-5g8cv") pod "31486d82-7d6f-47c1-9cbd-7e7f9fc23638" (UID: "31486d82-7d6f-47c1-9cbd-7e7f9fc23638"). InnerVolumeSpecName "kube-api-access-5g8cv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.091626 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31486d82-7d6f-47c1-9cbd-7e7f9fc23638-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31486d82-7d6f-47c1-9cbd-7e7f9fc23638" (UID: "31486d82-7d6f-47c1-9cbd-7e7f9fc23638"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.098076 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31486d82-7d6f-47c1-9cbd-7e7f9fc23638-config-data" (OuterVolumeSpecName: "config-data") pod "31486d82-7d6f-47c1-9cbd-7e7f9fc23638" (UID: "31486d82-7d6f-47c1-9cbd-7e7f9fc23638"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.156957 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31486d82-7d6f-47c1-9cbd-7e7f9fc23638-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.156991 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g8cv\" (UniqueName: \"kubernetes.io/projected/31486d82-7d6f-47c1-9cbd-7e7f9fc23638-kube-api-access-5g8cv\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.157000 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31486d82-7d6f-47c1-9cbd-7e7f9fc23638-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.165921 4624 generic.go:334] "Generic (PLEG): container finished" podID="31486d82-7d6f-47c1-9cbd-7e7f9fc23638" containerID="8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c" exitCode=0 Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.165960 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"31486d82-7d6f-47c1-9cbd-7e7f9fc23638","Type":"ContainerDied","Data":"8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c"} Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.166017 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"31486d82-7d6f-47c1-9cbd-7e7f9fc23638","Type":"ContainerDied","Data":"a714080266758233507b8f7d3751e7b505aa383730fda3a1d21fe0a4f99ee0ac"} Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.166037 4624 scope.go:117] "RemoveContainer" containerID="8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.166411 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.205173 4624 scope.go:117] "RemoveContainer" containerID="8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c" Oct 08 14:45:26 crc kubenswrapper[4624]: E1008 14:45:26.208120 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c\": container with ID starting with 8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c not found: ID does not exist" containerID="8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.208153 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c"} err="failed to get container status \"8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c\": rpc error: code = NotFound desc = could not find container \"8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c\": container with ID starting with 8eb7ddd749a47343b35925eb7dd37345c40e03195aff7051f4cb9ca1a8661d4c not found: ID does not exist" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.215019 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.233458 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.249033 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 14:45:26 crc kubenswrapper[4624]: E1008 14:45:26.249459 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31486d82-7d6f-47c1-9cbd-7e7f9fc23638" containerName="nova-scheduler-scheduler" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.249475 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="31486d82-7d6f-47c1-9cbd-7e7f9fc23638" containerName="nova-scheduler-scheduler" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.249687 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="31486d82-7d6f-47c1-9cbd-7e7f9fc23638" containerName="nova-scheduler-scheduler" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.250305 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.252029 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.260785 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.262306 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn5m4\" (UniqueName: \"kubernetes.io/projected/e7163bb9-301b-4539-ae0d-099caa9bd36b-kube-api-access-gn5m4\") pod \"nova-scheduler-0\" (UID: \"e7163bb9-301b-4539-ae0d-099caa9bd36b\") " pod="openstack/nova-scheduler-0" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.262491 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7163bb9-301b-4539-ae0d-099caa9bd36b-config-data\") pod \"nova-scheduler-0\" (UID: \"e7163bb9-301b-4539-ae0d-099caa9bd36b\") " pod="openstack/nova-scheduler-0" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.262534 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7163bb9-301b-4539-ae0d-099caa9bd36b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7163bb9-301b-4539-ae0d-099caa9bd36b\") " pod="openstack/nova-scheduler-0" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.269539 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.364863 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn5m4\" (UniqueName: \"kubernetes.io/projected/e7163bb9-301b-4539-ae0d-099caa9bd36b-kube-api-access-gn5m4\") pod \"nova-scheduler-0\" (UID: \"e7163bb9-301b-4539-ae0d-099caa9bd36b\") " pod="openstack/nova-scheduler-0" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.364940 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7163bb9-301b-4539-ae0d-099caa9bd36b-config-data\") pod \"nova-scheduler-0\" (UID: \"e7163bb9-301b-4539-ae0d-099caa9bd36b\") " pod="openstack/nova-scheduler-0" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.364980 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7163bb9-301b-4539-ae0d-099caa9bd36b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7163bb9-301b-4539-ae0d-099caa9bd36b\") " pod="openstack/nova-scheduler-0" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.371153 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7163bb9-301b-4539-ae0d-099caa9bd36b-config-data\") pod \"nova-scheduler-0\" (UID: \"e7163bb9-301b-4539-ae0d-099caa9bd36b\") " pod="openstack/nova-scheduler-0" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.371354 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7163bb9-301b-4539-ae0d-099caa9bd36b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7163bb9-301b-4539-ae0d-099caa9bd36b\") " pod="openstack/nova-scheduler-0" Oct 08 14:45:26 crc kubenswrapper[4624]: 
I1008 14:45:26.382857 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn5m4\" (UniqueName: \"kubernetes.io/projected/e7163bb9-301b-4539-ae0d-099caa9bd36b-kube-api-access-gn5m4\") pod \"nova-scheduler-0\" (UID: \"e7163bb9-301b-4539-ae0d-099caa9bd36b\") " pod="openstack/nova-scheduler-0" Oct 08 14:45:26 crc kubenswrapper[4624]: I1008 14:45:26.586391 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 14:45:27 crc kubenswrapper[4624]: I1008 14:45:27.182609 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf362794-4a7e-483b-814b-d73b53e9f28f","Type":"ContainerStarted","Data":"5471518fe6a836d755bacb5e9f32f24d13472d2c364e7567feb8a7fd07634355"} Oct 08 14:45:27 crc kubenswrapper[4624]: I1008 14:45:27.184049 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf362794-4a7e-483b-814b-d73b53e9f28f","Type":"ContainerStarted","Data":"538487ea8023cca8a0b22a2ef381bdc14fbbf6218fc12348deeafd4c9e2fc4f4"} Oct 08 14:45:27 crc kubenswrapper[4624]: I1008 14:45:27.310696 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 14:45:27 crc kubenswrapper[4624]: I1008 14:45:27.478742 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31486d82-7d6f-47c1-9cbd-7e7f9fc23638" path="/var/lib/kubelet/pods/31486d82-7d6f-47c1-9cbd-7e7f9fc23638/volumes" Oct 08 14:45:28 crc kubenswrapper[4624]: I1008 14:45:28.203497 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf362794-4a7e-483b-814b-d73b53e9f28f","Type":"ContainerStarted","Data":"dd0a3b72a26ef3d1f721ae1be82613c137c927ede3cfdfc68ecb8fa54cd1c825"} Oct 08 14:45:28 crc kubenswrapper[4624]: I1008 14:45:28.207236 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7163bb9-301b-4539-ae0d-099caa9bd36b","Type":"ContainerStarted","Data":"070a7ff703c2d3b7455e76bbc01aa2095b7ef836a38008e69b105413c340ffee"} Oct 08 14:45:28 crc kubenswrapper[4624]: I1008 14:45:28.207279 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7163bb9-301b-4539-ae0d-099caa9bd36b","Type":"ContainerStarted","Data":"dfc4db77ab2cefe55dc1c6fa4354f51cad44ee3e9be25d8715e3ee05287815c1"} Oct 08 14:45:28 crc kubenswrapper[4624]: I1008 14:45:28.229301 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.229278943 podStartE2EDuration="3.229278943s" podCreationTimestamp="2025-10-08 14:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:45:28.222865321 +0000 UTC m=+1353.373800398" watchObservedRunningTime="2025-10-08 14:45:28.229278943 +0000 UTC m=+1353.380214020" Oct 08 14:45:28 crc kubenswrapper[4624]: I1008 14:45:28.243042 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.243026209 podStartE2EDuration="2.243026209s" podCreationTimestamp="2025-10-08 14:45:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:45:28.24067582 +0000 UTC m=+1353.391610917" watchObservedRunningTime="2025-10-08 14:45:28.243026209 +0000 UTC m=+1353.393961286" Oct 08 14:45:30 crc 
kubenswrapper[4624]: I1008 14:45:30.077061 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:45:30 crc kubenswrapper[4624]: I1008 14:45:30.077120 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:45:30 crc kubenswrapper[4624]: I1008 14:45:30.552382 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 08 14:45:30 crc kubenswrapper[4624]: I1008 14:45:30.552473 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 08 14:45:31 crc kubenswrapper[4624]: I1008 14:45:31.587421 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 08 14:45:31 crc kubenswrapper[4624]: I1008 14:45:31.874968 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 14:45:31 crc kubenswrapper[4624]: I1008 14:45:31.875022 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 14:45:32 crc kubenswrapper[4624]: I1008 14:45:32.886945 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7cee7d40-61a2-4b4b-87d9-531196e95a8d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.218:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 14:45:32 crc kubenswrapper[4624]: I1008 14:45:32.887567 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7cee7d40-61a2-4b4b-87d9-531196e95a8d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.218:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 14:45:35 crc kubenswrapper[4624]: I1008 14:45:35.552485 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 08 14:45:35 crc kubenswrapper[4624]: I1008 14:45:35.553722 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 08 14:45:36 crc kubenswrapper[4624]: I1008 14:45:36.563839 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cf362794-4a7e-483b-814b-d73b53e9f28f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 14:45:36 crc kubenswrapper[4624]: I1008 14:45:36.563855 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cf362794-4a7e-483b-814b-d73b53e9f28f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 08 14:45:36 crc kubenswrapper[4624]: I1008 14:45:36.587203 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 08 
14:45:36 crc kubenswrapper[4624]: I1008 14:45:36.620886 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 08 14:45:37 crc kubenswrapper[4624]: I1008 14:45:37.242046 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f5ff15c9-ddbc-409e-b605-5e453914798f" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Oct 08 14:45:37 crc kubenswrapper[4624]: I1008 14:45:37.312217 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 08 14:45:41 crc kubenswrapper[4624]: I1008 14:45:41.880522 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 08 14:45:41 crc kubenswrapper[4624]: I1008 14:45:41.881453 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 08 14:45:41 crc kubenswrapper[4624]: I1008 14:45:41.891070 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 08 14:45:41 crc kubenswrapper[4624]: I1008 14:45:41.891749 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.335434 4624 generic.go:334] "Generic (PLEG): container finished" podID="f5ff15c9-ddbc-409e-b605-5e453914798f" containerID="c58264e3859f51bc0ec389406c1bbbfcabe52e61cbf18b29d12e3d27aedd2a4c" exitCode=137 Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.337500 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5ff15c9-ddbc-409e-b605-5e453914798f","Type":"ContainerDied","Data":"c58264e3859f51bc0ec389406c1bbbfcabe52e61cbf18b29d12e3d27aedd2a4c"} Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.337569 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.347270 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.594411 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.714810 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5ff15c9-ddbc-409e-b605-5e453914798f-log-httpd\") pod \"f5ff15c9-ddbc-409e-b605-5e453914798f\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.714912 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-scripts\") pod \"f5ff15c9-ddbc-409e-b605-5e453914798f\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.714976 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-sg-core-conf-yaml\") pod \"f5ff15c9-ddbc-409e-b605-5e453914798f\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.715016 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-combined-ca-bundle\") pod \"f5ff15c9-ddbc-409e-b605-5e453914798f\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.715157 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-config-data\") pod \"f5ff15c9-ddbc-409e-b605-5e453914798f\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.715202 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-ceilometer-tls-certs\") pod \"f5ff15c9-ddbc-409e-b605-5e453914798f\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.715247 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5ff15c9-ddbc-409e-b605-5e453914798f-run-httpd\") pod \"f5ff15c9-ddbc-409e-b605-5e453914798f\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.715302 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm24p\" (UniqueName: \"kubernetes.io/projected/f5ff15c9-ddbc-409e-b605-5e453914798f-kube-api-access-nm24p\") pod \"f5ff15c9-ddbc-409e-b605-5e453914798f\" (UID: \"f5ff15c9-ddbc-409e-b605-5e453914798f\") " Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.715432 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5ff15c9-ddbc-409e-b605-5e453914798f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f5ff15c9-ddbc-409e-b605-5e453914798f" (UID: "f5ff15c9-ddbc-409e-b605-5e453914798f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.715946 4624 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5ff15c9-ddbc-409e-b605-5e453914798f-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.716003 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5ff15c9-ddbc-409e-b605-5e453914798f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f5ff15c9-ddbc-409e-b605-5e453914798f" (UID: "f5ff15c9-ddbc-409e-b605-5e453914798f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.721204 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-scripts" (OuterVolumeSpecName: "scripts") pod "f5ff15c9-ddbc-409e-b605-5e453914798f" (UID: "f5ff15c9-ddbc-409e-b605-5e453914798f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.731554 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5ff15c9-ddbc-409e-b605-5e453914798f-kube-api-access-nm24p" (OuterVolumeSpecName: "kube-api-access-nm24p") pod "f5ff15c9-ddbc-409e-b605-5e453914798f" (UID: "f5ff15c9-ddbc-409e-b605-5e453914798f"). InnerVolumeSpecName "kube-api-access-nm24p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.804130 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f5ff15c9-ddbc-409e-b605-5e453914798f" (UID: "f5ff15c9-ddbc-409e-b605-5e453914798f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.811260 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f5ff15c9-ddbc-409e-b605-5e453914798f" (UID: "f5ff15c9-ddbc-409e-b605-5e453914798f"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.820845 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.820876 4624 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.820888 4624 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.820899 4624 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5ff15c9-ddbc-409e-b605-5e453914798f-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.820908 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm24p\" (UniqueName: \"kubernetes.io/projected/f5ff15c9-ddbc-409e-b605-5e453914798f-kube-api-access-nm24p\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.858263 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5ff15c9-ddbc-409e-b605-5e453914798f" (UID: "f5ff15c9-ddbc-409e-b605-5e453914798f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.866941 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-config-data" (OuterVolumeSpecName: "config-data") pod "f5ff15c9-ddbc-409e-b605-5e453914798f" (UID: "f5ff15c9-ddbc-409e-b605-5e453914798f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.922869 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:42 crc kubenswrapper[4624]: I1008 14:45:42.924106 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5ff15c9-ddbc-409e-b605-5e453914798f-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.347686 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5ff15c9-ddbc-409e-b605-5e453914798f","Type":"ContainerDied","Data":"253bb22101ddc314ec7911970daa129ca94a58fee276a9627c13ff2faa69a8e7"} Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.347733 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.347760 4624 scope.go:117] "RemoveContainer" containerID="c58264e3859f51bc0ec389406c1bbbfcabe52e61cbf18b29d12e3d27aedd2a4c" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.379722 4624 scope.go:117] "RemoveContainer" containerID="27e97760b5d94789861ab68a09394da323fbfe3fc45df52aee39f9b7e8453975" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.381128 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.389758 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.408722 4624 scope.go:117] "RemoveContainer" containerID="d74195a84662b63193da4b8d0b1d6488582263fd09565c6b09b6b8f909adb530" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.418743 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:45:43 crc kubenswrapper[4624]: E1008 14:45:43.419165 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ff15c9-ddbc-409e-b605-5e453914798f" containerName="proxy-httpd" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.419181 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ff15c9-ddbc-409e-b605-5e453914798f" containerName="proxy-httpd" Oct 08 14:45:43 crc kubenswrapper[4624]: E1008 14:45:43.419222 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ff15c9-ddbc-409e-b605-5e453914798f" containerName="ceilometer-central-agent" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.419229 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ff15c9-ddbc-409e-b605-5e453914798f" containerName="ceilometer-central-agent" Oct 08 14:45:43 crc kubenswrapper[4624]: E1008 14:45:43.419241 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ff15c9-ddbc-409e-b605-5e453914798f" containerName="ceilometer-notification-agent" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.419248 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ff15c9-ddbc-409e-b605-5e453914798f" containerName="ceilometer-notification-agent" Oct 08 14:45:43 crc kubenswrapper[4624]: E1008 14:45:43.419263 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ff15c9-ddbc-409e-b605-5e453914798f" containerName="sg-core" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.419269 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ff15c9-ddbc-409e-b605-5e453914798f" containerName="sg-core" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.419451 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5ff15c9-ddbc-409e-b605-5e453914798f" containerName="ceilometer-notification-agent" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.419473 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5ff15c9-ddbc-409e-b605-5e453914798f" containerName="sg-core" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.419487 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5ff15c9-ddbc-409e-b605-5e453914798f" containerName="ceilometer-central-agent" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.419497 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5ff15c9-ddbc-409e-b605-5e453914798f" containerName="proxy-httpd" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.426004 4624 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.428574 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.428713 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.429893 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.446968 4624 scope.go:117] "RemoveContainer" containerID="950afcb496a4929e32d8027f33e152661f94ab1988e00805c73d58bd9cf9cb93" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.454873 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.484290 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5ff15c9-ddbc-409e-b605-5e453914798f" path="/var/lib/kubelet/pods/f5ff15c9-ddbc-409e-b605-5e453914798f/volumes" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.535667 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.535930 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-run-httpd\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.536108 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsrqr\" (UniqueName: \"kubernetes.io/projected/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-kube-api-access-fsrqr\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.536258 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-config-data\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.537029 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.537127 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.537268 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-log-httpd\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.537366 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-scripts\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.640048 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-config-data\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.640600 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.642012 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.642888 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-log-httpd\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.643030 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-scripts\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.643325 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-log-httpd\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.643343 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.643470 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-run-httpd\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.643744 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fsrqr\" (UniqueName: \"kubernetes.io/projected/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-kube-api-access-fsrqr\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.644869 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-run-httpd\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.645752 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-config-data\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.646497 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.648693 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.649969 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.655400 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-scripts\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.669913 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsrqr\" (UniqueName: \"kubernetes.io/projected/5bfc80be-dbc0-42c4-b493-3b5747d4ccb8-kube-api-access-fsrqr\") pod \"ceilometer-0\" (UID: \"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8\") " pod="openstack/ceilometer-0" Oct 08 14:45:43 crc kubenswrapper[4624]: I1008 14:45:43.741104 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 14:45:44 crc kubenswrapper[4624]: I1008 14:45:44.208603 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 14:45:44 crc kubenswrapper[4624]: W1008 14:45:44.213059 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bfc80be_dbc0_42c4_b493_3b5747d4ccb8.slice/crio-b3331293deebb23c41eb8462b7b829fa8636980ea908e90faeb6c261f622e275 WatchSource:0}: Error finding container b3331293deebb23c41eb8462b7b829fa8636980ea908e90faeb6c261f622e275: Status 404 returned error can't find the container with id b3331293deebb23c41eb8462b7b829fa8636980ea908e90faeb6c261f622e275 Oct 08 14:45:44 crc kubenswrapper[4624]: I1008 14:45:44.382061 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8","Type":"ContainerStarted","Data":"b3331293deebb23c41eb8462b7b829fa8636980ea908e90faeb6c261f622e275"} Oct 08 14:45:45 crc kubenswrapper[4624]: I1008 14:45:45.406711 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8","Type":"ContainerStarted","Data":"a2dd54c2dd4b4d4938e8a01021252c317ba1dbf1696a7aeaa642d776a50b1822"} Oct 08 14:45:45 crc kubenswrapper[4624]: I1008 14:45:45.558561 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 08 14:45:45 crc kubenswrapper[4624]: I1008 14:45:45.558673 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 08 14:45:45 crc kubenswrapper[4624]: I1008 14:45:45.564997 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 08 14:45:45 crc kubenswrapper[4624]: I1008 14:45:45.565438 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 08 14:45:47 crc kubenswrapper[4624]: I1008 14:45:47.425532 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8","Type":"ContainerStarted","Data":"9d6b8da187a172a90a0121e11bb356683d61dba24c02c077b9904dd68b8462cf"} Oct 08 14:45:49 crc kubenswrapper[4624]: I1008 14:45:49.443288 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8","Type":"ContainerStarted","Data":"359d9b3388c230e859c941e31e393accec094e4931235f2b4715ab3f0e4fb38a"} Oct 08 14:45:53 crc kubenswrapper[4624]: I1008 14:45:53.491981 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5bfc80be-dbc0-42c4-b493-3b5747d4ccb8","Type":"ContainerStarted","Data":"c909b997b4343c38f313ba304b49823ea28dba04083ce444cd967bc8ade7fa18"} Oct 08 14:45:53 crc kubenswrapper[4624]: I1008 14:45:53.493564 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 08 14:45:53 crc kubenswrapper[4624]: I1008 14:45:53.518593 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.362072919 podStartE2EDuration="10.518568118s" podCreationTimestamp="2025-10-08 14:45:43 +0000 UTC" firstStartedPulling="2025-10-08 14:45:44.215746603 +0000 UTC m=+1369.366681680" lastFinishedPulling="2025-10-08 14:45:52.372241802 +0000 UTC m=+1377.523176879" observedRunningTime="2025-10-08 
14:45:53.511803617 +0000 UTC m=+1378.662738694" watchObservedRunningTime="2025-10-08 14:45:53.518568118 +0000 UTC m=+1378.669503195" Oct 08 14:46:00 crc kubenswrapper[4624]: I1008 14:46:00.077138 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:46:00 crc kubenswrapper[4624]: I1008 14:46:00.077813 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:46:00 crc kubenswrapper[4624]: I1008 14:46:00.077889 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:46:00 crc kubenswrapper[4624]: I1008 14:46:00.078428 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b715151b06e81079e843f1e4731eae829cdb61d676150f9179acbd18ff769765"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 14:46:00 crc kubenswrapper[4624]: I1008 14:46:00.078480 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://b715151b06e81079e843f1e4731eae829cdb61d676150f9179acbd18ff769765" gracePeriod=600 Oct 08 14:46:00 crc kubenswrapper[4624]: I1008 14:46:00.558234 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="b715151b06e81079e843f1e4731eae829cdb61d676150f9179acbd18ff769765" exitCode=0 Oct 08 14:46:00 crc kubenswrapper[4624]: I1008 14:46:00.558317 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"b715151b06e81079e843f1e4731eae829cdb61d676150f9179acbd18ff769765"} Oct 08 14:46:00 crc kubenswrapper[4624]: I1008 14:46:00.558583 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c"} Oct 08 14:46:00 crc kubenswrapper[4624]: I1008 14:46:00.558609 4624 scope.go:117] "RemoveContainer" containerID="480254b7c9a360984d529299bb00cc5c3bed986a85e8add5205713881a71a8d6" Oct 08 14:46:13 crc kubenswrapper[4624]: I1008 14:46:13.758044 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 08 14:46:25 crc kubenswrapper[4624]: I1008 14:46:25.242580 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 14:46:26 crc kubenswrapper[4624]: I1008 14:46:26.144174 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 14:46:26 crc kubenswrapper[4624]: I1008 
14:46:26.703278 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7d9jt"] Oct 08 14:46:26 crc kubenswrapper[4624]: I1008 14:46:26.705746 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7d9jt" Oct 08 14:46:26 crc kubenswrapper[4624]: I1008 14:46:26.729754 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7d9jt"] Oct 08 14:46:26 crc kubenswrapper[4624]: I1008 14:46:26.780210 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9aaea6-b976-4954-9a1a-775893138c39-utilities\") pod \"redhat-operators-7d9jt\" (UID: \"1a9aaea6-b976-4954-9a1a-775893138c39\") " pod="openshift-marketplace/redhat-operators-7d9jt" Oct 08 14:46:26 crc kubenswrapper[4624]: I1008 14:46:26.780452 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9aaea6-b976-4954-9a1a-775893138c39-catalog-content\") pod \"redhat-operators-7d9jt\" (UID: \"1a9aaea6-b976-4954-9a1a-775893138c39\") " pod="openshift-marketplace/redhat-operators-7d9jt" Oct 08 14:46:26 crc kubenswrapper[4624]: I1008 14:46:26.780494 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx7gf\" (UniqueName: \"kubernetes.io/projected/1a9aaea6-b976-4954-9a1a-775893138c39-kube-api-access-hx7gf\") pod \"redhat-operators-7d9jt\" (UID: \"1a9aaea6-b976-4954-9a1a-775893138c39\") " pod="openshift-marketplace/redhat-operators-7d9jt" Oct 08 14:46:26 crc kubenswrapper[4624]: I1008 14:46:26.882437 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9aaea6-b976-4954-9a1a-775893138c39-utilities\") pod \"redhat-operators-7d9jt\" (UID: \"1a9aaea6-b976-4954-9a1a-775893138c39\") " pod="openshift-marketplace/redhat-operators-7d9jt" Oct 08 14:46:26 crc kubenswrapper[4624]: I1008 14:46:26.882554 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9aaea6-b976-4954-9a1a-775893138c39-catalog-content\") pod \"redhat-operators-7d9jt\" (UID: \"1a9aaea6-b976-4954-9a1a-775893138c39\") " pod="openshift-marketplace/redhat-operators-7d9jt" Oct 08 14:46:26 crc kubenswrapper[4624]: I1008 14:46:26.882584 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx7gf\" (UniqueName: \"kubernetes.io/projected/1a9aaea6-b976-4954-9a1a-775893138c39-kube-api-access-hx7gf\") pod \"redhat-operators-7d9jt\" (UID: \"1a9aaea6-b976-4954-9a1a-775893138c39\") " pod="openshift-marketplace/redhat-operators-7d9jt" Oct 08 14:46:26 crc kubenswrapper[4624]: I1008 14:46:26.883086 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9aaea6-b976-4954-9a1a-775893138c39-utilities\") pod \"redhat-operators-7d9jt\" (UID: \"1a9aaea6-b976-4954-9a1a-775893138c39\") " pod="openshift-marketplace/redhat-operators-7d9jt" Oct 08 14:46:26 crc kubenswrapper[4624]: I1008 14:46:26.883168 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9aaea6-b976-4954-9a1a-775893138c39-catalog-content\") pod \"redhat-operators-7d9jt\" (UID: 
\"1a9aaea6-b976-4954-9a1a-775893138c39\") " pod="openshift-marketplace/redhat-operators-7d9jt" Oct 08 14:46:26 crc kubenswrapper[4624]: I1008 14:46:26.912520 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx7gf\" (UniqueName: \"kubernetes.io/projected/1a9aaea6-b976-4954-9a1a-775893138c39-kube-api-access-hx7gf\") pod \"redhat-operators-7d9jt\" (UID: \"1a9aaea6-b976-4954-9a1a-775893138c39\") " pod="openshift-marketplace/redhat-operators-7d9jt" Oct 08 14:46:27 crc kubenswrapper[4624]: I1008 14:46:27.025010 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7d9jt" Oct 08 14:46:27 crc kubenswrapper[4624]: I1008 14:46:27.579395 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7d9jt"] Oct 08 14:46:27 crc kubenswrapper[4624]: I1008 14:46:27.851153 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7d9jt" event={"ID":"1a9aaea6-b976-4954-9a1a-775893138c39","Type":"ContainerStarted","Data":"8e3ff4e98aaf5add5735ae9178ae607df9d83bcedc8ce5cb78a31fc04a1960bf"} Oct 08 14:46:27 crc kubenswrapper[4624]: I1008 14:46:27.851197 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7d9jt" event={"ID":"1a9aaea6-b976-4954-9a1a-775893138c39","Type":"ContainerStarted","Data":"de838b4b78c243e7e2ae0ba017f99ec1106ccb69f03abe754ef4b2a84465c8d3"} Oct 08 14:46:28 crc kubenswrapper[4624]: I1008 14:46:28.887851 4624 generic.go:334] "Generic (PLEG): container finished" podID="1a9aaea6-b976-4954-9a1a-775893138c39" containerID="8e3ff4e98aaf5add5735ae9178ae607df9d83bcedc8ce5cb78a31fc04a1960bf" exitCode=0 Oct 08 14:46:28 crc kubenswrapper[4624]: I1008 14:46:28.888475 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7d9jt" event={"ID":"1a9aaea6-b976-4954-9a1a-775893138c39","Type":"ContainerDied","Data":"8e3ff4e98aaf5add5735ae9178ae607df9d83bcedc8ce5cb78a31fc04a1960bf"} Oct 08 14:46:29 crc kubenswrapper[4624]: I1008 14:46:29.905241 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7d9jt" event={"ID":"1a9aaea6-b976-4954-9a1a-775893138c39","Type":"ContainerStarted","Data":"de1aca08a7a93403790a6277503d57e63c1a2cb956c8da3a987c8ad662b843b8"} Oct 08 14:46:30 crc kubenswrapper[4624]: I1008 14:46:30.539788 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="0a30b9e8-eac9-4cc7-9197-190a5fea5638" containerName="rabbitmq" containerID="cri-o://60bf635cbd5fe7b60863dd55b8e2af9566a17fb9ecb6f5709c68f522870b3e6c" gracePeriod=604795 Oct 08 14:46:30 crc kubenswrapper[4624]: I1008 14:46:30.964986 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="0a30b9e8-eac9-4cc7-9197-190a5fea5638" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Oct 08 14:46:31 crc kubenswrapper[4624]: I1008 14:46:31.517493 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="a5b83d38-1c23-4c71-8629-1ce512ce32f3" containerName="rabbitmq" containerID="cri-o://9cb2f1e16867aebef2367bd37fdc9ad8faada672dc1208e483441dc47f6a72fc" gracePeriod=604795 Oct 08 14:46:34 crc kubenswrapper[4624]: I1008 14:46:34.949866 4624 generic.go:334] "Generic (PLEG): container finished" 
podID="1a9aaea6-b976-4954-9a1a-775893138c39" containerID="de1aca08a7a93403790a6277503d57e63c1a2cb956c8da3a987c8ad662b843b8" exitCode=0 Oct 08 14:46:34 crc kubenswrapper[4624]: I1008 14:46:34.949978 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7d9jt" event={"ID":"1a9aaea6-b976-4954-9a1a-775893138c39","Type":"ContainerDied","Data":"de1aca08a7a93403790a6277503d57e63c1a2cb956c8da3a987c8ad662b843b8"} Oct 08 14:46:35 crc kubenswrapper[4624]: I1008 14:46:35.197225 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kv7qq"] Oct 08 14:46:35 crc kubenswrapper[4624]: I1008 14:46:35.199805 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kv7qq" Oct 08 14:46:35 crc kubenswrapper[4624]: I1008 14:46:35.221053 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kv7qq"] Oct 08 14:46:35 crc kubenswrapper[4624]: I1008 14:46:35.351810 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a3fcdba-db7b-4440-bc15-e3fc0bde2938-utilities\") pod \"redhat-marketplace-kv7qq\" (UID: \"1a3fcdba-db7b-4440-bc15-e3fc0bde2938\") " pod="openshift-marketplace/redhat-marketplace-kv7qq" Oct 08 14:46:35 crc kubenswrapper[4624]: I1008 14:46:35.352938 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a3fcdba-db7b-4440-bc15-e3fc0bde2938-catalog-content\") pod \"redhat-marketplace-kv7qq\" (UID: \"1a3fcdba-db7b-4440-bc15-e3fc0bde2938\") " pod="openshift-marketplace/redhat-marketplace-kv7qq" Oct 08 14:46:35 crc kubenswrapper[4624]: I1008 14:46:35.354086 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rjf8\" (UniqueName: \"kubernetes.io/projected/1a3fcdba-db7b-4440-bc15-e3fc0bde2938-kube-api-access-6rjf8\") pod \"redhat-marketplace-kv7qq\" (UID: \"1a3fcdba-db7b-4440-bc15-e3fc0bde2938\") " pod="openshift-marketplace/redhat-marketplace-kv7qq" Oct 08 14:46:35 crc kubenswrapper[4624]: I1008 14:46:35.455401 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rjf8\" (UniqueName: \"kubernetes.io/projected/1a3fcdba-db7b-4440-bc15-e3fc0bde2938-kube-api-access-6rjf8\") pod \"redhat-marketplace-kv7qq\" (UID: \"1a3fcdba-db7b-4440-bc15-e3fc0bde2938\") " pod="openshift-marketplace/redhat-marketplace-kv7qq" Oct 08 14:46:35 crc kubenswrapper[4624]: I1008 14:46:35.455519 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a3fcdba-db7b-4440-bc15-e3fc0bde2938-utilities\") pod \"redhat-marketplace-kv7qq\" (UID: \"1a3fcdba-db7b-4440-bc15-e3fc0bde2938\") " pod="openshift-marketplace/redhat-marketplace-kv7qq" Oct 08 14:46:35 crc kubenswrapper[4624]: I1008 14:46:35.455600 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a3fcdba-db7b-4440-bc15-e3fc0bde2938-catalog-content\") pod \"redhat-marketplace-kv7qq\" (UID: \"1a3fcdba-db7b-4440-bc15-e3fc0bde2938\") " pod="openshift-marketplace/redhat-marketplace-kv7qq" Oct 08 14:46:35 crc kubenswrapper[4624]: I1008 14:46:35.456022 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a3fcdba-db7b-4440-bc15-e3fc0bde2938-utilities\") pod \"redhat-marketplace-kv7qq\" (UID: \"1a3fcdba-db7b-4440-bc15-e3fc0bde2938\") " pod="openshift-marketplace/redhat-marketplace-kv7qq" Oct 08 14:46:35 crc kubenswrapper[4624]: I1008 14:46:35.456092 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a3fcdba-db7b-4440-bc15-e3fc0bde2938-catalog-content\") pod \"redhat-marketplace-kv7qq\" (UID: \"1a3fcdba-db7b-4440-bc15-e3fc0bde2938\") " pod="openshift-marketplace/redhat-marketplace-kv7qq" Oct 08 14:46:35 crc kubenswrapper[4624]: I1008 14:46:35.550917 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rjf8\" (UniqueName: \"kubernetes.io/projected/1a3fcdba-db7b-4440-bc15-e3fc0bde2938-kube-api-access-6rjf8\") pod \"redhat-marketplace-kv7qq\" (UID: \"1a3fcdba-db7b-4440-bc15-e3fc0bde2938\") " pod="openshift-marketplace/redhat-marketplace-kv7qq" Oct 08 14:46:35 crc kubenswrapper[4624]: I1008 14:46:35.820657 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kv7qq" Oct 08 14:46:35 crc kubenswrapper[4624]: I1008 14:46:35.969943 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7d9jt" event={"ID":"1a9aaea6-b976-4954-9a1a-775893138c39","Type":"ContainerStarted","Data":"d0a07da0d6b1966ef58541e9fa281db09d6d8ba480ba4aa8097b5c4226f499c0"} Oct 08 14:46:35 crc kubenswrapper[4624]: I1008 14:46:35.997320 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7d9jt" podStartSLOduration=3.318184213 podStartE2EDuration="9.997299282s" podCreationTimestamp="2025-10-08 14:46:26 +0000 UTC" firstStartedPulling="2025-10-08 14:46:28.892424093 +0000 UTC m=+1414.043359170" lastFinishedPulling="2025-10-08 14:46:35.571539162 +0000 UTC m=+1420.722474239" observedRunningTime="2025-10-08 14:46:35.991374672 +0000 UTC m=+1421.142309769" watchObservedRunningTime="2025-10-08 14:46:35.997299282 +0000 UTC m=+1421.148234359" Oct 08 14:46:36 crc kubenswrapper[4624]: I1008 14:46:36.301344 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kv7qq"] Oct 08 14:46:36 crc kubenswrapper[4624]: I1008 14:46:36.993047 4624 generic.go:334] "Generic (PLEG): container finished" podID="0a30b9e8-eac9-4cc7-9197-190a5fea5638" containerID="60bf635cbd5fe7b60863dd55b8e2af9566a17fb9ecb6f5709c68f522870b3e6c" exitCode=0 Oct 08 14:46:36 crc kubenswrapper[4624]: I1008 14:46:36.993361 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0a30b9e8-eac9-4cc7-9197-190a5fea5638","Type":"ContainerDied","Data":"60bf635cbd5fe7b60863dd55b8e2af9566a17fb9ecb6f5709c68f522870b3e6c"} Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.003689 4624 generic.go:334] "Generic (PLEG): container finished" podID="1a3fcdba-db7b-4440-bc15-e3fc0bde2938" containerID="ab3e99c249a29fddd7b7320f3e5c844dd5c07487517fc63a863960b0336bfd84" exitCode=0 Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.003738 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kv7qq" event={"ID":"1a3fcdba-db7b-4440-bc15-e3fc0bde2938","Type":"ContainerDied","Data":"ab3e99c249a29fddd7b7320f3e5c844dd5c07487517fc63a863960b0336bfd84"} Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.003769 4624 
Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.003769 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kv7qq" event={"ID":"1a3fcdba-db7b-4440-bc15-e3fc0bde2938","Type":"ContainerStarted","Data":"69d4cb202f840e52c162d6e8973b9c6c8afd2f6b67f66553ed0f2b5aeeafe354"}
Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.025991 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7d9jt"
Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.031555 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7d9jt"
Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.161476 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.197736 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsdbr\" (UniqueName: \"kubernetes.io/projected/0a30b9e8-eac9-4cc7-9197-190a5fea5638-kube-api-access-gsdbr\") pod \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") "
Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.197828 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0a30b9e8-eac9-4cc7-9197-190a5fea5638-server-conf\") pod \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") "
Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.197855 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") "
Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.197956 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-confd\") pod \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") "
Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.197991 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0a30b9e8-eac9-4cc7-9197-190a5fea5638-plugins-conf\") pod \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") "
Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.198025 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0a30b9e8-eac9-4cc7-9197-190a5fea5638-pod-info\") pod \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") "
Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.198043 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a30b9e8-eac9-4cc7-9197-190a5fea5638-config-data\") pod \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") "
Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.198093 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-erlang-cookie\") pod \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") "
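[editor's note] The probe transitions above (startup flipping to "unhealthy", readiness reset to "") come from kubelet probe workers. The registry-server startup probe is a gRPC health check on :50051 that fails a few entries below with 'timeout: failed to connect service ":50051" within 1s', and the earlier rabbitmq readiness failure was a plain TCP refusal on 5671. A minimal TCP-dial stand-in for such a check, with the 1s budget from the log (the real registry probe uses a gRPC health client rather than a raw dial, so this only captures the shape):

package main

import (
	"fmt"
	"net"
	"time"
)

// tcpProbe imitates the failures above: the readiness probe reported
// "dial tcp 10.217.0.100:5671: connect: connection refused" and the
// startup probe timed out against :50051 after 1s. Addresses are the
// ones from the log; both will fail unless something is listening.
func tcpProbe(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	for _, addr := range []string{"10.217.0.100:5671", "127.0.0.1:50051"} {
		if err := tcpProbe(addr, time.Second); err != nil {
			fmt.Printf("probeResult=failure output=%q\n", err)
		} else {
			fmt.Printf("probe %s ok\n", addr)
		}
	}
}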
\"0a30b9e8-eac9-4cc7-9197-190a5fea5638\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.198121 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0a30b9e8-eac9-4cc7-9197-190a5fea5638-erlang-cookie-secret\") pod \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.198156 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-plugins\") pod \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.198189 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-tls\") pod \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\" (UID: \"0a30b9e8-eac9-4cc7-9197-190a5fea5638\") " Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.206310 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0a30b9e8-eac9-4cc7-9197-190a5fea5638" (UID: "0a30b9e8-eac9-4cc7-9197-190a5fea5638"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.207096 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0a30b9e8-eac9-4cc7-9197-190a5fea5638" (UID: "0a30b9e8-eac9-4cc7-9197-190a5fea5638"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.207316 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "0a30b9e8-eac9-4cc7-9197-190a5fea5638" (UID: "0a30b9e8-eac9-4cc7-9197-190a5fea5638"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.211222 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0a30b9e8-eac9-4cc7-9197-190a5fea5638" (UID: "0a30b9e8-eac9-4cc7-9197-190a5fea5638"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.211877 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a30b9e8-eac9-4cc7-9197-190a5fea5638-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0a30b9e8-eac9-4cc7-9197-190a5fea5638" (UID: "0a30b9e8-eac9-4cc7-9197-190a5fea5638"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.216934 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a30b9e8-eac9-4cc7-9197-190a5fea5638-kube-api-access-gsdbr" (OuterVolumeSpecName: "kube-api-access-gsdbr") pod "0a30b9e8-eac9-4cc7-9197-190a5fea5638" (UID: "0a30b9e8-eac9-4cc7-9197-190a5fea5638"). InnerVolumeSpecName "kube-api-access-gsdbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.218051 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0a30b9e8-eac9-4cc7-9197-190a5fea5638-pod-info" (OuterVolumeSpecName: "pod-info") pod "0a30b9e8-eac9-4cc7-9197-190a5fea5638" (UID: "0a30b9e8-eac9-4cc7-9197-190a5fea5638"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.219430 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a30b9e8-eac9-4cc7-9197-190a5fea5638-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0a30b9e8-eac9-4cc7-9197-190a5fea5638" (UID: "0a30b9e8-eac9-4cc7-9197-190a5fea5638"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.279268 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a30b9e8-eac9-4cc7-9197-190a5fea5638-server-conf" (OuterVolumeSpecName: "server-conf") pod "0a30b9e8-eac9-4cc7-9197-190a5fea5638" (UID: "0a30b9e8-eac9-4cc7-9197-190a5fea5638"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.296082 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a30b9e8-eac9-4cc7-9197-190a5fea5638-config-data" (OuterVolumeSpecName: "config-data") pod "0a30b9e8-eac9-4cc7-9197-190a5fea5638" (UID: "0a30b9e8-eac9-4cc7-9197-190a5fea5638"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.300157 4624 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.300194 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsdbr\" (UniqueName: \"kubernetes.io/projected/0a30b9e8-eac9-4cc7-9197-190a5fea5638-kube-api-access-gsdbr\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.300209 4624 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0a30b9e8-eac9-4cc7-9197-190a5fea5638-server-conf\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.300234 4624 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.300250 4624 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0a30b9e8-eac9-4cc7-9197-190a5fea5638-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.300260 4624 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0a30b9e8-eac9-4cc7-9197-190a5fea5638-pod-info\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.300270 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a30b9e8-eac9-4cc7-9197-190a5fea5638-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.300330 4624 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.300345 4624 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0a30b9e8-eac9-4cc7-9197-190a5fea5638-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.300356 4624 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.357804 4624 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.374930 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0a30b9e8-eac9-4cc7-9197-190a5fea5638" (UID: "0a30b9e8-eac9-4cc7-9197-190a5fea5638"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.401971 4624 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0a30b9e8-eac9-4cc7-9197-190a5fea5638-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:37 crc kubenswrapper[4624]: I1008 14:46:37.402006 4624 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.059543 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0a30b9e8-eac9-4cc7-9197-190a5fea5638","Type":"ContainerDied","Data":"4f778da0b42c3f06308ee181849c35da55e1e528d349c274d394e81e308083bf"} Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.059895 4624 scope.go:117] "RemoveContainer" containerID="60bf635cbd5fe7b60863dd55b8e2af9566a17fb9ecb6f5709c68f522870b3e6c" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.059587 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.064578 4624 generic.go:334] "Generic (PLEG): container finished" podID="a5b83d38-1c23-4c71-8629-1ce512ce32f3" containerID="9cb2f1e16867aebef2367bd37fdc9ad8faada672dc1208e483441dc47f6a72fc" exitCode=0 Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.064671 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a5b83d38-1c23-4c71-8629-1ce512ce32f3","Type":"ContainerDied","Data":"9cb2f1e16867aebef2367bd37fdc9ad8faada672dc1208e483441dc47f6a72fc"} Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.113480 4624 scope.go:117] "RemoveContainer" containerID="79675596171e4a7b726ded143df7062a2e6e50f40c24cb3569e705f14551b601" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.113606 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7d9jt" podUID="1a9aaea6-b976-4954-9a1a-775893138c39" containerName="registry-server" probeResult="failure" output=< Oct 08 14:46:38 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 14:46:38 crc kubenswrapper[4624]: > Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.117421 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.144932 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.170523 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 14:46:38 crc kubenswrapper[4624]: E1008 14:46:38.189509 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a30b9e8-eac9-4cc7-9197-190a5fea5638" containerName="setup-container" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.189546 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a30b9e8-eac9-4cc7-9197-190a5fea5638" containerName="setup-container" Oct 08 14:46:38 crc kubenswrapper[4624]: E1008 14:46:38.189602 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a30b9e8-eac9-4cc7-9197-190a5fea5638" containerName="rabbitmq" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.189612 4624 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0a30b9e8-eac9-4cc7-9197-190a5fea5638" containerName="rabbitmq" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.189928 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a30b9e8-eac9-4cc7-9197-190a5fea5638" containerName="rabbitmq" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.191367 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.194550 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.194782 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.195753 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7sc5j" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.195836 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.195887 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.196080 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.197938 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.223503 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.307632 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.318588 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9da75813-8748-41a3-8bea-bc7987ccc7a5-config-data\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.318724 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.318761 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9da75813-8748-41a3-8bea-bc7987ccc7a5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.318813 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9da75813-8748-41a3-8bea-bc7987ccc7a5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.318865 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9da75813-8748-41a3-8bea-bc7987ccc7a5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.318910 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9da75813-8748-41a3-8bea-bc7987ccc7a5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.318932 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9da75813-8748-41a3-8bea-bc7987ccc7a5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.319090 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9da75813-8748-41a3-8bea-bc7987ccc7a5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.319141 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9da75813-8748-41a3-8bea-bc7987ccc7a5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 
14:46:38.319167 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cwvk\" (UniqueName: \"kubernetes.io/projected/9da75813-8748-41a3-8bea-bc7987ccc7a5-kube-api-access-6cwvk\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.319200 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9da75813-8748-41a3-8bea-bc7987ccc7a5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.420534 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5b83d38-1c23-4c71-8629-1ce512ce32f3-server-conf\") pod \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.420649 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-plugins\") pod \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.420723 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-tls\") pod \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.420842 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5b83d38-1c23-4c71-8629-1ce512ce32f3-config-data\") pod \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.420882 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-confd\") pod \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.420909 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5b83d38-1c23-4c71-8629-1ce512ce32f3-pod-info\") pod \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.421011 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5b83d38-1c23-4c71-8629-1ce512ce32f3-plugins-conf\") pod \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.421041 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " Oct 08 14:46:38 crc 
kubenswrapper[4624]: I1008 14:46:38.421079 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5b83d38-1c23-4c71-8629-1ce512ce32f3-erlang-cookie-secret\") pod \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.421121 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-erlang-cookie\") pod \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.421187 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pz2f\" (UniqueName: \"kubernetes.io/projected/a5b83d38-1c23-4c71-8629-1ce512ce32f3-kube-api-access-9pz2f\") pod \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\" (UID: \"a5b83d38-1c23-4c71-8629-1ce512ce32f3\") " Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.421794 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9da75813-8748-41a3-8bea-bc7987ccc7a5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.421958 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9da75813-8748-41a3-8bea-bc7987ccc7a5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.422019 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9da75813-8748-41a3-8bea-bc7987ccc7a5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.422090 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9da75813-8748-41a3-8bea-bc7987ccc7a5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.422104 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9da75813-8748-41a3-8bea-bc7987ccc7a5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.422190 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9da75813-8748-41a3-8bea-bc7987ccc7a5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.422251 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9da75813-8748-41a3-8bea-bc7987ccc7a5-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.422292 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cwvk\" (UniqueName: \"kubernetes.io/projected/9da75813-8748-41a3-8bea-bc7987ccc7a5-kube-api-access-6cwvk\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.422325 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9da75813-8748-41a3-8bea-bc7987ccc7a5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.422391 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9da75813-8748-41a3-8bea-bc7987ccc7a5-config-data\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.422425 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.422685 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.437333 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9da75813-8748-41a3-8bea-bc7987ccc7a5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.437727 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9da75813-8748-41a3-8bea-bc7987ccc7a5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.440094 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9da75813-8748-41a3-8bea-bc7987ccc7a5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.440113 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9da75813-8748-41a3-8bea-bc7987ccc7a5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.441668 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/9da75813-8748-41a3-8bea-bc7987ccc7a5-config-data\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.449405 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5b83d38-1c23-4c71-8629-1ce512ce32f3-kube-api-access-9pz2f" (OuterVolumeSpecName: "kube-api-access-9pz2f") pod "a5b83d38-1c23-4c71-8629-1ce512ce32f3" (UID: "a5b83d38-1c23-4c71-8629-1ce512ce32f3"). InnerVolumeSpecName "kube-api-access-9pz2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.450938 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5b83d38-1c23-4c71-8629-1ce512ce32f3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a5b83d38-1c23-4c71-8629-1ce512ce32f3" (UID: "a5b83d38-1c23-4c71-8629-1ce512ce32f3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.452499 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a5b83d38-1c23-4c71-8629-1ce512ce32f3" (UID: "a5b83d38-1c23-4c71-8629-1ce512ce32f3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.452602 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a5b83d38-1c23-4c71-8629-1ce512ce32f3" (UID: "a5b83d38-1c23-4c71-8629-1ce512ce32f3"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.463779 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9da75813-8748-41a3-8bea-bc7987ccc7a5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.463996 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "a5b83d38-1c23-4c71-8629-1ce512ce32f3" (UID: "a5b83d38-1c23-4c71-8629-1ce512ce32f3"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.464542 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9da75813-8748-41a3-8bea-bc7987ccc7a5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.465461 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a5b83d38-1c23-4c71-8629-1ce512ce32f3-pod-info" (OuterVolumeSpecName: "pod-info") pod "a5b83d38-1c23-4c71-8629-1ce512ce32f3" (UID: "a5b83d38-1c23-4c71-8629-1ce512ce32f3"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.466478 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a5b83d38-1c23-4c71-8629-1ce512ce32f3" (UID: "a5b83d38-1c23-4c71-8629-1ce512ce32f3"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.470803 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5b83d38-1c23-4c71-8629-1ce512ce32f3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a5b83d38-1c23-4c71-8629-1ce512ce32f3" (UID: "a5b83d38-1c23-4c71-8629-1ce512ce32f3"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.473311 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9da75813-8748-41a3-8bea-bc7987ccc7a5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.474194 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9da75813-8748-41a3-8bea-bc7987ccc7a5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.488157 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cwvk\" (UniqueName: \"kubernetes.io/projected/9da75813-8748-41a3-8bea-bc7987ccc7a5-kube-api-access-6cwvk\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.509377 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5b83d38-1c23-4c71-8629-1ce512ce32f3-config-data" (OuterVolumeSpecName: "config-data") pod "a5b83d38-1c23-4c71-8629-1ce512ce32f3" (UID: "a5b83d38-1c23-4c71-8629-1ce512ce32f3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.525444 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5b83d38-1c23-4c71-8629-1ce512ce32f3-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.525769 4624 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5b83d38-1c23-4c71-8629-1ce512ce32f3-pod-info\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.525919 4624 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.526066 4624 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5b83d38-1c23-4c71-8629-1ce512ce32f3-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.526147 4624 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5b83d38-1c23-4c71-8629-1ce512ce32f3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.526218 4624 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.526294 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pz2f\" (UniqueName: \"kubernetes.io/projected/a5b83d38-1c23-4c71-8629-1ce512ce32f3-kube-api-access-9pz2f\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.526377 4624 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.526451 4624 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.576752 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"9da75813-8748-41a3-8bea-bc7987ccc7a5\") " pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.586818 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5b83d38-1c23-4c71-8629-1ce512ce32f3-server-conf" (OuterVolumeSpecName: "server-conf") pod "a5b83d38-1c23-4c71-8629-1ce512ce32f3" (UID: "a5b83d38-1c23-4c71-8629-1ce512ce32f3"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.614690 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.618155 4624 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.632388 4624 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.632425 4624 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5b83d38-1c23-4c71-8629-1ce512ce32f3-server-conf\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.696666 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a5b83d38-1c23-4c71-8629-1ce512ce32f3" (UID: "a5b83d38-1c23-4c71-8629-1ce512ce32f3"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.738228 4624 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5b83d38-1c23-4c71-8629-1ce512ce32f3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.758111 4624 scope.go:117] "RemoveContainer" containerID="9cb2f1e16867aebef2367bd37fdc9ad8faada672dc1208e483441dc47f6a72fc" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.820464 4624 scope.go:117] "RemoveContainer" containerID="386ea90a8c50cfec2dd3992a5e652a3eca289efa6de448db18b24b337cc9f503" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.909407 4624 scope.go:117] "RemoveContainer" containerID="e4310411121c32c4933e1fa96560ab8608953e013ea1775c32d84e18c8aa56f3" Oct 08 14:46:38 crc kubenswrapper[4624]: I1008 14:46:38.971150 4624 scope.go:117] "RemoveContainer" containerID="0eb3fe154cf0d81a34eeae0dfabc8556d75b0ab1fc4c823d61450e6eb03828f1" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.085659 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kv7qq" event={"ID":"1a3fcdba-db7b-4440-bc15-e3fc0bde2938","Type":"ContainerStarted","Data":"f41fd92ab4d88e1e3b7f801bffafbc349b4984c260265956d887dac2bb2fe32c"} Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.086743 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a5b83d38-1c23-4c71-8629-1ce512ce32f3","Type":"ContainerDied","Data":"ff0f60d0d061628a028c39ad7ddf493e6b191a9050f3b5bb9999ab53dcfd3a3b"} Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.086756 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.161347 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.182536 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 14:46:39 crc kubenswrapper[4624]: W1008 14:46:39.198650 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9da75813_8748_41a3_8bea_bc7987ccc7a5.slice/crio-f80f2c5b59214133dcd929385ebce2ed81103ac13f41550f55a564adf52281db WatchSource:0}: Error finding container f80f2c5b59214133dcd929385ebce2ed81103ac13f41550f55a564adf52281db: Status 404 returned error can't find the container with id f80f2c5b59214133dcd929385ebce2ed81103ac13f41550f55a564adf52281db Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.201290 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 14:46:39 crc kubenswrapper[4624]: E1008 14:46:39.201729 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5b83d38-1c23-4c71-8629-1ce512ce32f3" containerName="rabbitmq" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.201742 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5b83d38-1c23-4c71-8629-1ce512ce32f3" containerName="rabbitmq" Oct 08 14:46:39 crc kubenswrapper[4624]: E1008 14:46:39.201757 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5b83d38-1c23-4c71-8629-1ce512ce32f3" containerName="setup-container" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.201763 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5b83d38-1c23-4c71-8629-1ce512ce32f3" containerName="setup-container" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.202007 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5b83d38-1c23-4c71-8629-1ce512ce32f3" containerName="rabbitmq" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.202938 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.207475 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.207634 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.207890 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.207976 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.208122 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.208241 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.208357 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fqz6q" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.230598 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.296284 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.366785 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0795aa07-68f2-4a23-b388-1237f212f537-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.366866 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0795aa07-68f2-4a23-b388-1237f212f537-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.366928 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgdfj\" (UniqueName: \"kubernetes.io/projected/0795aa07-68f2-4a23-b388-1237f212f537-kube-api-access-zgdfj\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.366987 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0795aa07-68f2-4a23-b388-1237f212f537-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.367021 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0795aa07-68f2-4a23-b388-1237f212f537-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.367090 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0795aa07-68f2-4a23-b388-1237f212f537-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.367114 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0795aa07-68f2-4a23-b388-1237f212f537-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.367189 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.367229 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0795aa07-68f2-4a23-b388-1237f212f537-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.367263 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0795aa07-68f2-4a23-b388-1237f212f537-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.367292 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0795aa07-68f2-4a23-b388-1237f212f537-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.469030 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.469094 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0795aa07-68f2-4a23-b388-1237f212f537-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.469129 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0795aa07-68f2-4a23-b388-1237f212f537-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 
crc kubenswrapper[4624]: I1008 14:46:39.469154 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0795aa07-68f2-4a23-b388-1237f212f537-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.469236 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0795aa07-68f2-4a23-b388-1237f212f537-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.469274 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0795aa07-68f2-4a23-b388-1237f212f537-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.469321 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgdfj\" (UniqueName: \"kubernetes.io/projected/0795aa07-68f2-4a23-b388-1237f212f537-kube-api-access-zgdfj\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.469371 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0795aa07-68f2-4a23-b388-1237f212f537-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.469402 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0795aa07-68f2-4a23-b388-1237f212f537-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.469440 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0795aa07-68f2-4a23-b388-1237f212f537-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.469461 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0795aa07-68f2-4a23-b388-1237f212f537-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.470811 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0795aa07-68f2-4a23-b388-1237f212f537-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.471346 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0795aa07-68f2-4a23-b388-1237f212f537-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.471467 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.471597 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0795aa07-68f2-4a23-b388-1237f212f537-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.472038 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0795aa07-68f2-4a23-b388-1237f212f537-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.472779 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0795aa07-68f2-4a23-b388-1237f212f537-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.486617 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0795aa07-68f2-4a23-b388-1237f212f537-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.487388 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0795aa07-68f2-4a23-b388-1237f212f537-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.491358 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0795aa07-68f2-4a23-b388-1237f212f537-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.499082 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgdfj\" (UniqueName: \"kubernetes.io/projected/0795aa07-68f2-4a23-b388-1237f212f537-kube-api-access-zgdfj\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.499521 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0795aa07-68f2-4a23-b388-1237f212f537-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.501415 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a30b9e8-eac9-4cc7-9197-190a5fea5638" path="/var/lib/kubelet/pods/0a30b9e8-eac9-4cc7-9197-190a5fea5638/volumes" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.504689 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5b83d38-1c23-4c71-8629-1ce512ce32f3" path="/var/lib/kubelet/pods/a5b83d38-1c23-4c71-8629-1ce512ce32f3/volumes" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.525304 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0795aa07-68f2-4a23-b388-1237f212f537\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:39 crc kubenswrapper[4624]: I1008 14:46:39.542961 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:46:40 crc kubenswrapper[4624]: I1008 14:46:40.036290 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 14:46:40 crc kubenswrapper[4624]: I1008 14:46:40.106215 4624 generic.go:334] "Generic (PLEG): container finished" podID="1a3fcdba-db7b-4440-bc15-e3fc0bde2938" containerID="f41fd92ab4d88e1e3b7f801bffafbc349b4984c260265956d887dac2bb2fe32c" exitCode=0 Oct 08 14:46:40 crc kubenswrapper[4624]: I1008 14:46:40.106304 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kv7qq" event={"ID":"1a3fcdba-db7b-4440-bc15-e3fc0bde2938","Type":"ContainerDied","Data":"f41fd92ab4d88e1e3b7f801bffafbc349b4984c260265956d887dac2bb2fe32c"} Oct 08 14:46:40 crc kubenswrapper[4624]: I1008 14:46:40.110457 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9da75813-8748-41a3-8bea-bc7987ccc7a5","Type":"ContainerStarted","Data":"f80f2c5b59214133dcd929385ebce2ed81103ac13f41550f55a564adf52281db"} Oct 08 14:46:40 crc kubenswrapper[4624]: I1008 14:46:40.114965 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0795aa07-68f2-4a23-b388-1237f212f537","Type":"ContainerStarted","Data":"57fb66f0ac6e3bb187f1b445767676d000b2dbf80ae5e62b3f5a0e128c596803"} Oct 08 14:46:41 crc kubenswrapper[4624]: I1008 14:46:41.126932 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kv7qq" event={"ID":"1a3fcdba-db7b-4440-bc15-e3fc0bde2938","Type":"ContainerStarted","Data":"6594e5c821b57b7607b61284dced86eaba7e3a0b4846e686f2cb206bb1a7fc16"} Oct 08 14:46:41 crc kubenswrapper[4624]: I1008 14:46:41.128646 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9da75813-8748-41a3-8bea-bc7987ccc7a5","Type":"ContainerStarted","Data":"20275c4bd5c4320a318a0f4fe2c53f3664c7ec69869c0589c902f2ff0df5f116"} Oct 08 14:46:41 crc kubenswrapper[4624]: I1008 14:46:41.180949 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kv7qq" podStartSLOduration=2.521833165 podStartE2EDuration="6.180927192s" podCreationTimestamp="2025-10-08 14:46:35 +0000 UTC" firstStartedPulling="2025-10-08 14:46:37.008837159 +0000 UTC m=+1422.159772236" lastFinishedPulling="2025-10-08 14:46:40.667931186 +0000 UTC m=+1425.818866263" 
observedRunningTime="2025-10-08 14:46:41.152023431 +0000 UTC m=+1426.302958508" watchObservedRunningTime="2025-10-08 14:46:41.180927192 +0000 UTC m=+1426.331862269" Oct 08 14:46:42 crc kubenswrapper[4624]: I1008 14:46:42.137333 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0795aa07-68f2-4a23-b388-1237f212f537","Type":"ContainerStarted","Data":"22793abde43b4c4ae603f5c393dceaa9d03a075405bab40bb20587dc9dd1d330"} Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.326236 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86988d97b7-ldt5m"] Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.328506 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.336375 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86988d97b7-ldt5m"] Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.338216 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.475508 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-ovsdbserver-sb\") pod \"dnsmasq-dns-86988d97b7-ldt5m\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.475572 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-dns-swift-storage-0\") pod \"dnsmasq-dns-86988d97b7-ldt5m\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.475827 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdphg\" (UniqueName: \"kubernetes.io/projected/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-kube-api-access-rdphg\") pod \"dnsmasq-dns-86988d97b7-ldt5m\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.475909 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-config\") pod \"dnsmasq-dns-86988d97b7-ldt5m\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.475949 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-dns-svc\") pod \"dnsmasq-dns-86988d97b7-ldt5m\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.476116 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-openstack-edpm-ipam\") pod \"dnsmasq-dns-86988d97b7-ldt5m\" (UID: 
\"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.476158 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-ovsdbserver-nb\") pod \"dnsmasq-dns-86988d97b7-ldt5m\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.578288 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-ovsdbserver-sb\") pod \"dnsmasq-dns-86988d97b7-ldt5m\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.578382 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-dns-swift-storage-0\") pod \"dnsmasq-dns-86988d97b7-ldt5m\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.578505 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdphg\" (UniqueName: \"kubernetes.io/projected/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-kube-api-access-rdphg\") pod \"dnsmasq-dns-86988d97b7-ldt5m\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.578548 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-config\") pod \"dnsmasq-dns-86988d97b7-ldt5m\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.578577 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-dns-svc\") pod \"dnsmasq-dns-86988d97b7-ldt5m\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.578669 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-openstack-edpm-ipam\") pod \"dnsmasq-dns-86988d97b7-ldt5m\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.578692 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-ovsdbserver-nb\") pod \"dnsmasq-dns-86988d97b7-ldt5m\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.579764 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-dns-swift-storage-0\") pod \"dnsmasq-dns-86988d97b7-ldt5m\" (UID: 
\"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.580137 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-dns-svc\") pod \"dnsmasq-dns-86988d97b7-ldt5m\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.580350 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-config\") pod \"dnsmasq-dns-86988d97b7-ldt5m\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.580380 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-openstack-edpm-ipam\") pod \"dnsmasq-dns-86988d97b7-ldt5m\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.580537 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-ovsdbserver-nb\") pod \"dnsmasq-dns-86988d97b7-ldt5m\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.580654 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-ovsdbserver-sb\") pod \"dnsmasq-dns-86988d97b7-ldt5m\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.607006 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdphg\" (UniqueName: \"kubernetes.io/projected/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-kube-api-access-rdphg\") pod \"dnsmasq-dns-86988d97b7-ldt5m\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:44 crc kubenswrapper[4624]: I1008 14:46:44.650014 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:45 crc kubenswrapper[4624]: I1008 14:46:45.220997 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86988d97b7-ldt5m"] Oct 08 14:46:45 crc kubenswrapper[4624]: W1008 14:46:45.232748 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda65b13f8_9d61_4485_a6c9_022dd1edbeb3.slice/crio-0c026b382b4a2f1944a7c396e84f61659d06802b63760e6f55803b334a71a5a0 WatchSource:0}: Error finding container 0c026b382b4a2f1944a7c396e84f61659d06802b63760e6f55803b334a71a5a0: Status 404 returned error can't find the container with id 0c026b382b4a2f1944a7c396e84f61659d06802b63760e6f55803b334a71a5a0 Oct 08 14:46:45 crc kubenswrapper[4624]: I1008 14:46:45.281687 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-plktb"] Oct 08 14:46:45 crc kubenswrapper[4624]: I1008 14:46:45.283870 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-plktb" Oct 08 14:46:45 crc kubenswrapper[4624]: I1008 14:46:45.332186 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-plktb"] Oct 08 14:46:45 crc kubenswrapper[4624]: I1008 14:46:45.419386 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttxph\" (UniqueName: \"kubernetes.io/projected/572cc372-dc71-4d72-a0c6-539521c7d3aa-kube-api-access-ttxph\") pod \"community-operators-plktb\" (UID: \"572cc372-dc71-4d72-a0c6-539521c7d3aa\") " pod="openshift-marketplace/community-operators-plktb" Oct 08 14:46:45 crc kubenswrapper[4624]: I1008 14:46:45.419438 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/572cc372-dc71-4d72-a0c6-539521c7d3aa-catalog-content\") pod \"community-operators-plktb\" (UID: \"572cc372-dc71-4d72-a0c6-539521c7d3aa\") " pod="openshift-marketplace/community-operators-plktb" Oct 08 14:46:45 crc kubenswrapper[4624]: I1008 14:46:45.419546 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/572cc372-dc71-4d72-a0c6-539521c7d3aa-utilities\") pod \"community-operators-plktb\" (UID: \"572cc372-dc71-4d72-a0c6-539521c7d3aa\") " pod="openshift-marketplace/community-operators-plktb" Oct 08 14:46:45 crc kubenswrapper[4624]: I1008 14:46:45.522067 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/572cc372-dc71-4d72-a0c6-539521c7d3aa-utilities\") pod \"community-operators-plktb\" (UID: \"572cc372-dc71-4d72-a0c6-539521c7d3aa\") " pod="openshift-marketplace/community-operators-plktb" Oct 08 14:46:45 crc kubenswrapper[4624]: I1008 14:46:45.522774 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttxph\" (UniqueName: \"kubernetes.io/projected/572cc372-dc71-4d72-a0c6-539521c7d3aa-kube-api-access-ttxph\") pod \"community-operators-plktb\" (UID: \"572cc372-dc71-4d72-a0c6-539521c7d3aa\") " pod="openshift-marketplace/community-operators-plktb" Oct 08 14:46:45 crc kubenswrapper[4624]: I1008 14:46:45.522808 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/572cc372-dc71-4d72-a0c6-539521c7d3aa-catalog-content\") pod \"community-operators-plktb\" (UID: \"572cc372-dc71-4d72-a0c6-539521c7d3aa\") " pod="openshift-marketplace/community-operators-plktb" Oct 08 14:46:45 crc kubenswrapper[4624]: I1008 14:46:45.522895 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/572cc372-dc71-4d72-a0c6-539521c7d3aa-utilities\") pod \"community-operators-plktb\" (UID: \"572cc372-dc71-4d72-a0c6-539521c7d3aa\") " pod="openshift-marketplace/community-operators-plktb" Oct 08 14:46:45 crc kubenswrapper[4624]: I1008 14:46:45.524502 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/572cc372-dc71-4d72-a0c6-539521c7d3aa-catalog-content\") pod \"community-operators-plktb\" (UID: \"572cc372-dc71-4d72-a0c6-539521c7d3aa\") " pod="openshift-marketplace/community-operators-plktb" Oct 08 14:46:45 crc kubenswrapper[4624]: I1008 14:46:45.543899 4624 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ttxph\" (UniqueName: \"kubernetes.io/projected/572cc372-dc71-4d72-a0c6-539521c7d3aa-kube-api-access-ttxph\") pod \"community-operators-plktb\" (UID: \"572cc372-dc71-4d72-a0c6-539521c7d3aa\") " pod="openshift-marketplace/community-operators-plktb" Oct 08 14:46:45 crc kubenswrapper[4624]: I1008 14:46:45.656217 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-plktb" Oct 08 14:46:45 crc kubenswrapper[4624]: I1008 14:46:45.825950 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kv7qq" Oct 08 14:46:45 crc kubenswrapper[4624]: I1008 14:46:45.826965 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kv7qq" Oct 08 14:46:45 crc kubenswrapper[4624]: I1008 14:46:45.998507 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kv7qq" Oct 08 14:46:46 crc kubenswrapper[4624]: I1008 14:46:46.182591 4624 generic.go:334] "Generic (PLEG): container finished" podID="a65b13f8-9d61-4485-a6c9-022dd1edbeb3" containerID="1add3885ebacccc2ce0c403a83f760a54275ea284cb764c2e2166bf68b5f67e1" exitCode=0 Oct 08 14:46:46 crc kubenswrapper[4624]: I1008 14:46:46.182673 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" event={"ID":"a65b13f8-9d61-4485-a6c9-022dd1edbeb3","Type":"ContainerDied","Data":"1add3885ebacccc2ce0c403a83f760a54275ea284cb764c2e2166bf68b5f67e1"} Oct 08 14:46:46 crc kubenswrapper[4624]: I1008 14:46:46.182723 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" event={"ID":"a65b13f8-9d61-4485-a6c9-022dd1edbeb3","Type":"ContainerStarted","Data":"0c026b382b4a2f1944a7c396e84f61659d06802b63760e6f55803b334a71a5a0"} Oct 08 14:46:46 crc kubenswrapper[4624]: I1008 14:46:46.283824 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kv7qq" Oct 08 14:46:46 crc kubenswrapper[4624]: I1008 14:46:46.373323 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-plktb"] Oct 08 14:46:46 crc kubenswrapper[4624]: W1008 14:46:46.377818 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod572cc372_dc71_4d72_a0c6_539521c7d3aa.slice/crio-a485680cb97da83f181ec69328c8a66e34613e1f8d1bd3c552a4b57bdb421674 WatchSource:0}: Error finding container a485680cb97da83f181ec69328c8a66e34613e1f8d1bd3c552a4b57bdb421674: Status 404 returned error can't find the container with id a485680cb97da83f181ec69328c8a66e34613e1f8d1bd3c552a4b57bdb421674 Oct 08 14:46:47 crc kubenswrapper[4624]: I1008 14:46:47.194755 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" event={"ID":"a65b13f8-9d61-4485-a6c9-022dd1edbeb3","Type":"ContainerStarted","Data":"64969733e541cbd5dcb0c40a2dcd5fde6753886a2d8564a29ce27030b43a1e6d"} Oct 08 14:46:47 crc kubenswrapper[4624]: I1008 14:46:47.195156 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:47 crc kubenswrapper[4624]: I1008 14:46:47.196808 4624 generic.go:334] "Generic (PLEG): container finished" podID="572cc372-dc71-4d72-a0c6-539521c7d3aa" 
containerID="242554f20ab4835667b31d75fc0e7e8e3bbd9a1ffd4ef6904e24342b3623b7e7" exitCode=0 Oct 08 14:46:47 crc kubenswrapper[4624]: I1008 14:46:47.196992 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plktb" event={"ID":"572cc372-dc71-4d72-a0c6-539521c7d3aa","Type":"ContainerDied","Data":"242554f20ab4835667b31d75fc0e7e8e3bbd9a1ffd4ef6904e24342b3623b7e7"} Oct 08 14:46:47 crc kubenswrapper[4624]: I1008 14:46:47.197131 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plktb" event={"ID":"572cc372-dc71-4d72-a0c6-539521c7d3aa","Type":"ContainerStarted","Data":"a485680cb97da83f181ec69328c8a66e34613e1f8d1bd3c552a4b57bdb421674"} Oct 08 14:46:47 crc kubenswrapper[4624]: I1008 14:46:47.224103 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" podStartSLOduration=3.224082953 podStartE2EDuration="3.224082953s" podCreationTimestamp="2025-10-08 14:46:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:46:47.219124848 +0000 UTC m=+1432.370059915" watchObservedRunningTime="2025-10-08 14:46:47.224082953 +0000 UTC m=+1432.375018030" Oct 08 14:46:48 crc kubenswrapper[4624]: I1008 14:46:48.077152 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7d9jt" podUID="1a9aaea6-b976-4954-9a1a-775893138c39" containerName="registry-server" probeResult="failure" output=< Oct 08 14:46:48 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 14:46:48 crc kubenswrapper[4624]: > Oct 08 14:46:48 crc kubenswrapper[4624]: I1008 14:46:48.211832 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plktb" event={"ID":"572cc372-dc71-4d72-a0c6-539521c7d3aa","Type":"ContainerStarted","Data":"9d087f56fb7f941e22e950a7268a85f121a146302180006156e8404f6777d0ec"} Oct 08 14:46:48 crc kubenswrapper[4624]: I1008 14:46:48.428025 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kv7qq"] Oct 08 14:46:49 crc kubenswrapper[4624]: I1008 14:46:49.219799 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kv7qq" podUID="1a3fcdba-db7b-4440-bc15-e3fc0bde2938" containerName="registry-server" containerID="cri-o://6594e5c821b57b7607b61284dced86eaba7e3a0b4846e686f2cb206bb1a7fc16" gracePeriod=2 Oct 08 14:46:49 crc kubenswrapper[4624]: I1008 14:46:49.681083 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kv7qq" Oct 08 14:46:49 crc kubenswrapper[4624]: I1008 14:46:49.833340 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a3fcdba-db7b-4440-bc15-e3fc0bde2938-utilities\") pod \"1a3fcdba-db7b-4440-bc15-e3fc0bde2938\" (UID: \"1a3fcdba-db7b-4440-bc15-e3fc0bde2938\") " Oct 08 14:46:49 crc kubenswrapper[4624]: I1008 14:46:49.833406 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a3fcdba-db7b-4440-bc15-e3fc0bde2938-catalog-content\") pod \"1a3fcdba-db7b-4440-bc15-e3fc0bde2938\" (UID: \"1a3fcdba-db7b-4440-bc15-e3fc0bde2938\") " Oct 08 14:46:49 crc kubenswrapper[4624]: I1008 14:46:49.833673 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rjf8\" (UniqueName: \"kubernetes.io/projected/1a3fcdba-db7b-4440-bc15-e3fc0bde2938-kube-api-access-6rjf8\") pod \"1a3fcdba-db7b-4440-bc15-e3fc0bde2938\" (UID: \"1a3fcdba-db7b-4440-bc15-e3fc0bde2938\") " Oct 08 14:46:49 crc kubenswrapper[4624]: I1008 14:46:49.834191 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a3fcdba-db7b-4440-bc15-e3fc0bde2938-utilities" (OuterVolumeSpecName: "utilities") pod "1a3fcdba-db7b-4440-bc15-e3fc0bde2938" (UID: "1a3fcdba-db7b-4440-bc15-e3fc0bde2938"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:46:49 crc kubenswrapper[4624]: I1008 14:46:49.844887 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a3fcdba-db7b-4440-bc15-e3fc0bde2938-kube-api-access-6rjf8" (OuterVolumeSpecName: "kube-api-access-6rjf8") pod "1a3fcdba-db7b-4440-bc15-e3fc0bde2938" (UID: "1a3fcdba-db7b-4440-bc15-e3fc0bde2938"). InnerVolumeSpecName "kube-api-access-6rjf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:46:49 crc kubenswrapper[4624]: I1008 14:46:49.846952 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a3fcdba-db7b-4440-bc15-e3fc0bde2938-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a3fcdba-db7b-4440-bc15-e3fc0bde2938" (UID: "1a3fcdba-db7b-4440-bc15-e3fc0bde2938"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:46:49 crc kubenswrapper[4624]: I1008 14:46:49.935480 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rjf8\" (UniqueName: \"kubernetes.io/projected/1a3fcdba-db7b-4440-bc15-e3fc0bde2938-kube-api-access-6rjf8\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:49 crc kubenswrapper[4624]: I1008 14:46:49.935514 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a3fcdba-db7b-4440-bc15-e3fc0bde2938-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:49 crc kubenswrapper[4624]: I1008 14:46:49.935524 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a3fcdba-db7b-4440-bc15-e3fc0bde2938-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:50 crc kubenswrapper[4624]: I1008 14:46:50.229715 4624 generic.go:334] "Generic (PLEG): container finished" podID="1a3fcdba-db7b-4440-bc15-e3fc0bde2938" containerID="6594e5c821b57b7607b61284dced86eaba7e3a0b4846e686f2cb206bb1a7fc16" exitCode=0 Oct 08 14:46:50 crc kubenswrapper[4624]: I1008 14:46:50.229774 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kv7qq" event={"ID":"1a3fcdba-db7b-4440-bc15-e3fc0bde2938","Type":"ContainerDied","Data":"6594e5c821b57b7607b61284dced86eaba7e3a0b4846e686f2cb206bb1a7fc16"} Oct 08 14:46:50 crc kubenswrapper[4624]: I1008 14:46:50.229801 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kv7qq" event={"ID":"1a3fcdba-db7b-4440-bc15-e3fc0bde2938","Type":"ContainerDied","Data":"69d4cb202f840e52c162d6e8973b9c6c8afd2f6b67f66553ed0f2b5aeeafe354"} Oct 08 14:46:50 crc kubenswrapper[4624]: I1008 14:46:50.229817 4624 scope.go:117] "RemoveContainer" containerID="6594e5c821b57b7607b61284dced86eaba7e3a0b4846e686f2cb206bb1a7fc16" Oct 08 14:46:50 crc kubenswrapper[4624]: I1008 14:46:50.229936 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kv7qq" Oct 08 14:46:50 crc kubenswrapper[4624]: I1008 14:46:50.233121 4624 generic.go:334] "Generic (PLEG): container finished" podID="572cc372-dc71-4d72-a0c6-539521c7d3aa" containerID="9d087f56fb7f941e22e950a7268a85f121a146302180006156e8404f6777d0ec" exitCode=0 Oct 08 14:46:50 crc kubenswrapper[4624]: I1008 14:46:50.233155 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plktb" event={"ID":"572cc372-dc71-4d72-a0c6-539521c7d3aa","Type":"ContainerDied","Data":"9d087f56fb7f941e22e950a7268a85f121a146302180006156e8404f6777d0ec"} Oct 08 14:46:50 crc kubenswrapper[4624]: I1008 14:46:50.263769 4624 scope.go:117] "RemoveContainer" containerID="f41fd92ab4d88e1e3b7f801bffafbc349b4984c260265956d887dac2bb2fe32c" Oct 08 14:46:50 crc kubenswrapper[4624]: I1008 14:46:50.294657 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kv7qq"] Oct 08 14:46:50 crc kubenswrapper[4624]: I1008 14:46:50.298105 4624 scope.go:117] "RemoveContainer" containerID="ab3e99c249a29fddd7b7320f3e5c844dd5c07487517fc63a863960b0336bfd84" Oct 08 14:46:50 crc kubenswrapper[4624]: I1008 14:46:50.308389 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kv7qq"] Oct 08 14:46:50 crc kubenswrapper[4624]: I1008 14:46:50.346143 4624 scope.go:117] "RemoveContainer" containerID="6594e5c821b57b7607b61284dced86eaba7e3a0b4846e686f2cb206bb1a7fc16" Oct 08 14:46:50 crc kubenswrapper[4624]: E1008 14:46:50.346583 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6594e5c821b57b7607b61284dced86eaba7e3a0b4846e686f2cb206bb1a7fc16\": container with ID starting with 6594e5c821b57b7607b61284dced86eaba7e3a0b4846e686f2cb206bb1a7fc16 not found: ID does not exist" containerID="6594e5c821b57b7607b61284dced86eaba7e3a0b4846e686f2cb206bb1a7fc16" Oct 08 14:46:50 crc kubenswrapper[4624]: I1008 14:46:50.346618 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6594e5c821b57b7607b61284dced86eaba7e3a0b4846e686f2cb206bb1a7fc16"} err="failed to get container status \"6594e5c821b57b7607b61284dced86eaba7e3a0b4846e686f2cb206bb1a7fc16\": rpc error: code = NotFound desc = could not find container \"6594e5c821b57b7607b61284dced86eaba7e3a0b4846e686f2cb206bb1a7fc16\": container with ID starting with 6594e5c821b57b7607b61284dced86eaba7e3a0b4846e686f2cb206bb1a7fc16 not found: ID does not exist" Oct 08 14:46:50 crc kubenswrapper[4624]: I1008 14:46:50.346653 4624 scope.go:117] "RemoveContainer" containerID="f41fd92ab4d88e1e3b7f801bffafbc349b4984c260265956d887dac2bb2fe32c" Oct 08 14:46:50 crc kubenswrapper[4624]: E1008 14:46:50.347283 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f41fd92ab4d88e1e3b7f801bffafbc349b4984c260265956d887dac2bb2fe32c\": container with ID starting with f41fd92ab4d88e1e3b7f801bffafbc349b4984c260265956d887dac2bb2fe32c not found: ID does not exist" containerID="f41fd92ab4d88e1e3b7f801bffafbc349b4984c260265956d887dac2bb2fe32c" Oct 08 14:46:50 crc kubenswrapper[4624]: I1008 14:46:50.347309 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f41fd92ab4d88e1e3b7f801bffafbc349b4984c260265956d887dac2bb2fe32c"} err="failed to get container status 
\"f41fd92ab4d88e1e3b7f801bffafbc349b4984c260265956d887dac2bb2fe32c\": rpc error: code = NotFound desc = could not find container \"f41fd92ab4d88e1e3b7f801bffafbc349b4984c260265956d887dac2bb2fe32c\": container with ID starting with f41fd92ab4d88e1e3b7f801bffafbc349b4984c260265956d887dac2bb2fe32c not found: ID does not exist" Oct 08 14:46:50 crc kubenswrapper[4624]: I1008 14:46:50.347325 4624 scope.go:117] "RemoveContainer" containerID="ab3e99c249a29fddd7b7320f3e5c844dd5c07487517fc63a863960b0336bfd84" Oct 08 14:46:50 crc kubenswrapper[4624]: E1008 14:46:50.347568 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab3e99c249a29fddd7b7320f3e5c844dd5c07487517fc63a863960b0336bfd84\": container with ID starting with ab3e99c249a29fddd7b7320f3e5c844dd5c07487517fc63a863960b0336bfd84 not found: ID does not exist" containerID="ab3e99c249a29fddd7b7320f3e5c844dd5c07487517fc63a863960b0336bfd84" Oct 08 14:46:50 crc kubenswrapper[4624]: I1008 14:46:50.347596 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab3e99c249a29fddd7b7320f3e5c844dd5c07487517fc63a863960b0336bfd84"} err="failed to get container status \"ab3e99c249a29fddd7b7320f3e5c844dd5c07487517fc63a863960b0336bfd84\": rpc error: code = NotFound desc = could not find container \"ab3e99c249a29fddd7b7320f3e5c844dd5c07487517fc63a863960b0336bfd84\": container with ID starting with ab3e99c249a29fddd7b7320f3e5c844dd5c07487517fc63a863960b0336bfd84 not found: ID does not exist" Oct 08 14:46:51 crc kubenswrapper[4624]: I1008 14:46:51.246414 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plktb" event={"ID":"572cc372-dc71-4d72-a0c6-539521c7d3aa","Type":"ContainerStarted","Data":"e23d5f29f9684b6197eec5e5a83b8bba602aabfa3134fa3543f2f761e628cf94"} Oct 08 14:46:51 crc kubenswrapper[4624]: I1008 14:46:51.268034 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-plktb" podStartSLOduration=2.73594298 podStartE2EDuration="6.268016405s" podCreationTimestamp="2025-10-08 14:46:45 +0000 UTC" firstStartedPulling="2025-10-08 14:46:47.198547607 +0000 UTC m=+1432.349482684" lastFinishedPulling="2025-10-08 14:46:50.730621032 +0000 UTC m=+1435.881556109" observedRunningTime="2025-10-08 14:46:51.267661926 +0000 UTC m=+1436.418597003" watchObservedRunningTime="2025-10-08 14:46:51.268016405 +0000 UTC m=+1436.418951482" Oct 08 14:46:51 crc kubenswrapper[4624]: I1008 14:46:51.477523 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a3fcdba-db7b-4440-bc15-e3fc0bde2938" path="/var/lib/kubelet/pods/1a3fcdba-db7b-4440-bc15-e3fc0bde2938/volumes" Oct 08 14:46:54 crc kubenswrapper[4624]: I1008 14:46:54.650803 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:46:54 crc kubenswrapper[4624]: I1008 14:46:54.727164 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59c4cbc559-vt9mm"] Oct 08 14:46:54 crc kubenswrapper[4624]: I1008 14:46:54.727990 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" podUID="c0519d2a-5bda-421c-88fc-9319f3bc7a29" containerName="dnsmasq-dns" containerID="cri-o://6c3cdb2a488aebcb231c0997c619b39aa4b2f0157d93c7b1f214abcdd407d87b" gracePeriod=10 Oct 08 14:46:54 crc kubenswrapper[4624]: I1008 14:46:54.970075 4624 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54695ff68c-h4xsn"] Oct 08 14:46:54 crc kubenswrapper[4624]: E1008 14:46:54.970757 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a3fcdba-db7b-4440-bc15-e3fc0bde2938" containerName="extract-content" Oct 08 14:46:54 crc kubenswrapper[4624]: I1008 14:46:54.970842 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a3fcdba-db7b-4440-bc15-e3fc0bde2938" containerName="extract-content" Oct 08 14:46:54 crc kubenswrapper[4624]: E1008 14:46:54.970919 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a3fcdba-db7b-4440-bc15-e3fc0bde2938" containerName="registry-server" Oct 08 14:46:54 crc kubenswrapper[4624]: I1008 14:46:54.970996 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a3fcdba-db7b-4440-bc15-e3fc0bde2938" containerName="registry-server" Oct 08 14:46:54 crc kubenswrapper[4624]: E1008 14:46:54.971070 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a3fcdba-db7b-4440-bc15-e3fc0bde2938" containerName="extract-utilities" Oct 08 14:46:54 crc kubenswrapper[4624]: I1008 14:46:54.971127 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a3fcdba-db7b-4440-bc15-e3fc0bde2938" containerName="extract-utilities" Oct 08 14:46:54 crc kubenswrapper[4624]: I1008 14:46:54.971386 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a3fcdba-db7b-4440-bc15-e3fc0bde2938" containerName="registry-server" Oct 08 14:46:54 crc kubenswrapper[4624]: I1008 14:46:54.972539 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:54 crc kubenswrapper[4624]: I1008 14:46:54.993020 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54695ff68c-h4xsn"] Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.131465 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4fad93-41d0-4e9a-8033-fa3ff1a14769-config\") pod \"dnsmasq-dns-54695ff68c-h4xsn\" (UID: \"ca4fad93-41d0-4e9a-8033-fa3ff1a14769\") " pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.131837 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftmhp\" (UniqueName: \"kubernetes.io/projected/ca4fad93-41d0-4e9a-8033-fa3ff1a14769-kube-api-access-ftmhp\") pod \"dnsmasq-dns-54695ff68c-h4xsn\" (UID: \"ca4fad93-41d0-4e9a-8033-fa3ff1a14769\") " pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.131889 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca4fad93-41d0-4e9a-8033-fa3ff1a14769-ovsdbserver-sb\") pod \"dnsmasq-dns-54695ff68c-h4xsn\" (UID: \"ca4fad93-41d0-4e9a-8033-fa3ff1a14769\") " pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.131947 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca4fad93-41d0-4e9a-8033-fa3ff1a14769-ovsdbserver-nb\") pod \"dnsmasq-dns-54695ff68c-h4xsn\" (UID: \"ca4fad93-41d0-4e9a-8033-fa3ff1a14769\") " pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.131994 4624 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca4fad93-41d0-4e9a-8033-fa3ff1a14769-dns-svc\") pod \"dnsmasq-dns-54695ff68c-h4xsn\" (UID: \"ca4fad93-41d0-4e9a-8033-fa3ff1a14769\") " pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.132014 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca4fad93-41d0-4e9a-8033-fa3ff1a14769-dns-swift-storage-0\") pod \"dnsmasq-dns-54695ff68c-h4xsn\" (UID: \"ca4fad93-41d0-4e9a-8033-fa3ff1a14769\") " pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.132082 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ca4fad93-41d0-4e9a-8033-fa3ff1a14769-openstack-edpm-ipam\") pod \"dnsmasq-dns-54695ff68c-h4xsn\" (UID: \"ca4fad93-41d0-4e9a-8033-fa3ff1a14769\") " pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.237983 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca4fad93-41d0-4e9a-8033-fa3ff1a14769-ovsdbserver-sb\") pod \"dnsmasq-dns-54695ff68c-h4xsn\" (UID: \"ca4fad93-41d0-4e9a-8033-fa3ff1a14769\") " pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.238091 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca4fad93-41d0-4e9a-8033-fa3ff1a14769-ovsdbserver-nb\") pod \"dnsmasq-dns-54695ff68c-h4xsn\" (UID: \"ca4fad93-41d0-4e9a-8033-fa3ff1a14769\") " pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.238136 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca4fad93-41d0-4e9a-8033-fa3ff1a14769-dns-svc\") pod \"dnsmasq-dns-54695ff68c-h4xsn\" (UID: \"ca4fad93-41d0-4e9a-8033-fa3ff1a14769\") " pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.238156 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca4fad93-41d0-4e9a-8033-fa3ff1a14769-dns-swift-storage-0\") pod \"dnsmasq-dns-54695ff68c-h4xsn\" (UID: \"ca4fad93-41d0-4e9a-8033-fa3ff1a14769\") " pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.238238 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ca4fad93-41d0-4e9a-8033-fa3ff1a14769-openstack-edpm-ipam\") pod \"dnsmasq-dns-54695ff68c-h4xsn\" (UID: \"ca4fad93-41d0-4e9a-8033-fa3ff1a14769\") " pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.238295 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4fad93-41d0-4e9a-8033-fa3ff1a14769-config\") pod \"dnsmasq-dns-54695ff68c-h4xsn\" (UID: \"ca4fad93-41d0-4e9a-8033-fa3ff1a14769\") " pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.238322 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftmhp\" (UniqueName: \"kubernetes.io/projected/ca4fad93-41d0-4e9a-8033-fa3ff1a14769-kube-api-access-ftmhp\") pod \"dnsmasq-dns-54695ff68c-h4xsn\" (UID: \"ca4fad93-41d0-4e9a-8033-fa3ff1a14769\") " pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.239677 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca4fad93-41d0-4e9a-8033-fa3ff1a14769-ovsdbserver-sb\") pod \"dnsmasq-dns-54695ff68c-h4xsn\" (UID: \"ca4fad93-41d0-4e9a-8033-fa3ff1a14769\") " pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.240322 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca4fad93-41d0-4e9a-8033-fa3ff1a14769-ovsdbserver-nb\") pod \"dnsmasq-dns-54695ff68c-h4xsn\" (UID: \"ca4fad93-41d0-4e9a-8033-fa3ff1a14769\") " pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.240926 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca4fad93-41d0-4e9a-8033-fa3ff1a14769-dns-svc\") pod \"dnsmasq-dns-54695ff68c-h4xsn\" (UID: \"ca4fad93-41d0-4e9a-8033-fa3ff1a14769\") " pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.241502 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca4fad93-41d0-4e9a-8033-fa3ff1a14769-dns-swift-storage-0\") pod \"dnsmasq-dns-54695ff68c-h4xsn\" (UID: \"ca4fad93-41d0-4e9a-8033-fa3ff1a14769\") " pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.242016 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ca4fad93-41d0-4e9a-8033-fa3ff1a14769-openstack-edpm-ipam\") pod \"dnsmasq-dns-54695ff68c-h4xsn\" (UID: \"ca4fad93-41d0-4e9a-8033-fa3ff1a14769\") " pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.242520 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4fad93-41d0-4e9a-8033-fa3ff1a14769-config\") pod \"dnsmasq-dns-54695ff68c-h4xsn\" (UID: \"ca4fad93-41d0-4e9a-8033-fa3ff1a14769\") " pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.283653 4624 generic.go:334] "Generic (PLEG): container finished" podID="c0519d2a-5bda-421c-88fc-9319f3bc7a29" containerID="6c3cdb2a488aebcb231c0997c619b39aa4b2f0157d93c7b1f214abcdd407d87b" exitCode=0 Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.283697 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" event={"ID":"c0519d2a-5bda-421c-88fc-9319f3bc7a29","Type":"ContainerDied","Data":"6c3cdb2a488aebcb231c0997c619b39aa4b2f0157d93c7b1f214abcdd407d87b"} Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.302087 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftmhp\" (UniqueName: \"kubernetes.io/projected/ca4fad93-41d0-4e9a-8033-fa3ff1a14769-kube-api-access-ftmhp\") pod \"dnsmasq-dns-54695ff68c-h4xsn\" (UID: \"ca4fad93-41d0-4e9a-8033-fa3ff1a14769\") " 
pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.373620 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.555750 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-ovsdbserver-sb\") pod \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.555806 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-ovsdbserver-nb\") pod \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.555882 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-dns-svc\") pod \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.555934 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqh78\" (UniqueName: \"kubernetes.io/projected/c0519d2a-5bda-421c-88fc-9319f3bc7a29-kube-api-access-tqh78\") pod \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.555974 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-config\") pod \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.556474 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-dns-swift-storage-0\") pod \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\" (UID: \"c0519d2a-5bda-421c-88fc-9319f3bc7a29\") " Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.578981 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0519d2a-5bda-421c-88fc-9319f3bc7a29-kube-api-access-tqh78" (OuterVolumeSpecName: "kube-api-access-tqh78") pod "c0519d2a-5bda-421c-88fc-9319f3bc7a29" (UID: "c0519d2a-5bda-421c-88fc-9319f3bc7a29"). InnerVolumeSpecName "kube-api-access-tqh78". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.592921 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.629011 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c0519d2a-5bda-421c-88fc-9319f3bc7a29" (UID: "c0519d2a-5bda-421c-88fc-9319f3bc7a29"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.657205 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c0519d2a-5bda-421c-88fc-9319f3bc7a29" (UID: "c0519d2a-5bda-421c-88fc-9319f3bc7a29"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.657505 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-plktb" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.657541 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-plktb" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.661066 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.661093 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.661104 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqh78\" (UniqueName: \"kubernetes.io/projected/c0519d2a-5bda-421c-88fc-9319f3bc7a29-kube-api-access-tqh78\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.667480 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c0519d2a-5bda-421c-88fc-9319f3bc7a29" (UID: "c0519d2a-5bda-421c-88fc-9319f3bc7a29"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.678010 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c0519d2a-5bda-421c-88fc-9319f3bc7a29" (UID: "c0519d2a-5bda-421c-88fc-9319f3bc7a29"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.697008 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-config" (OuterVolumeSpecName: "config") pod "c0519d2a-5bda-421c-88fc-9319f3bc7a29" (UID: "c0519d2a-5bda-421c-88fc-9319f3bc7a29"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.762759 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.763090 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:55 crc kubenswrapper[4624]: I1008 14:46:55.763104 4624 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0519d2a-5bda-421c-88fc-9319f3bc7a29-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:56 crc kubenswrapper[4624]: I1008 14:46:56.108371 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54695ff68c-h4xsn"] Oct 08 14:46:56 crc kubenswrapper[4624]: I1008 14:46:56.300456 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" event={"ID":"ca4fad93-41d0-4e9a-8033-fa3ff1a14769","Type":"ContainerStarted","Data":"21de09fe25d2eb115beea20332e56d791dda58b1465fe966a5d61be0d4a34dd7"} Oct 08 14:46:56 crc kubenswrapper[4624]: I1008 14:46:56.323294 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" event={"ID":"c0519d2a-5bda-421c-88fc-9319f3bc7a29","Type":"ContainerDied","Data":"8b1f31d966bd97ade870e03c54ff10b827cb5213dcef104f3ea88611b5d850ee"} Oct 08 14:46:56 crc kubenswrapper[4624]: I1008 14:46:56.323370 4624 scope.go:117] "RemoveContainer" containerID="6c3cdb2a488aebcb231c0997c619b39aa4b2f0157d93c7b1f214abcdd407d87b" Oct 08 14:46:56 crc kubenswrapper[4624]: I1008 14:46:56.323464 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59c4cbc559-vt9mm" Oct 08 14:46:56 crc kubenswrapper[4624]: I1008 14:46:56.359933 4624 scope.go:117] "RemoveContainer" containerID="4a981783a37bb350c851b68660731a09fd0a8c5a1b98d4214e06432b767f2c4f" Oct 08 14:46:56 crc kubenswrapper[4624]: I1008 14:46:56.396079 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59c4cbc559-vt9mm"] Oct 08 14:46:56 crc kubenswrapper[4624]: I1008 14:46:56.426033 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59c4cbc559-vt9mm"] Oct 08 14:46:56 crc kubenswrapper[4624]: I1008 14:46:56.766833 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-plktb" podUID="572cc372-dc71-4d72-a0c6-539521c7d3aa" containerName="registry-server" probeResult="failure" output=< Oct 08 14:46:56 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 14:46:56 crc kubenswrapper[4624]: > Oct 08 14:46:57 crc kubenswrapper[4624]: I1008 14:46:57.080549 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7d9jt" Oct 08 14:46:57 crc kubenswrapper[4624]: I1008 14:46:57.136734 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7d9jt" Oct 08 14:46:57 crc kubenswrapper[4624]: I1008 14:46:57.332130 4624 generic.go:334] "Generic (PLEG): container finished" podID="ca4fad93-41d0-4e9a-8033-fa3ff1a14769" containerID="35c20826c562ba75d46338b1cd4e3d7541f2deeccee7a4b6ff77b396b7f22299" exitCode=0 Oct 08 14:46:57 crc kubenswrapper[4624]: I1008 14:46:57.332187 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" event={"ID":"ca4fad93-41d0-4e9a-8033-fa3ff1a14769","Type":"ContainerDied","Data":"35c20826c562ba75d46338b1cd4e3d7541f2deeccee7a4b6ff77b396b7f22299"} Oct 08 14:46:57 crc kubenswrapper[4624]: I1008 14:46:57.477302 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0519d2a-5bda-421c-88fc-9319f3bc7a29" path="/var/lib/kubelet/pods/c0519d2a-5bda-421c-88fc-9319f3bc7a29/volumes" Oct 08 14:46:57 crc kubenswrapper[4624]: I1008 14:46:57.950815 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7d9jt"] Oct 08 14:46:58 crc kubenswrapper[4624]: I1008 14:46:58.345518 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" event={"ID":"ca4fad93-41d0-4e9a-8033-fa3ff1a14769","Type":"ContainerStarted","Data":"cebf87066ec5ff60165e8a4cf381f1a77b8699f556bc2bf99127f20dd6f382ef"} Oct 08 14:46:58 crc kubenswrapper[4624]: I1008 14:46:58.345653 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7d9jt" podUID="1a9aaea6-b976-4954-9a1a-775893138c39" containerName="registry-server" containerID="cri-o://d0a07da0d6b1966ef58541e9fa281db09d6d8ba480ba4aa8097b5c4226f499c0" gracePeriod=2 Oct 08 14:46:58 crc kubenswrapper[4624]: I1008 14:46:58.369557 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" podStartSLOduration=4.369542019 podStartE2EDuration="4.369542019s" podCreationTimestamp="2025-10-08 14:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:46:58.367237201 +0000 UTC m=+1443.518172278" 
Oct 08 14:46:58 crc kubenswrapper[4624]: I1008 14:46:58.842190 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7d9jt" Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.024968 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx7gf\" (UniqueName: \"kubernetes.io/projected/1a9aaea6-b976-4954-9a1a-775893138c39-kube-api-access-hx7gf\") pod \"1a9aaea6-b976-4954-9a1a-775893138c39\" (UID: \"1a9aaea6-b976-4954-9a1a-775893138c39\") " Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.025079 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9aaea6-b976-4954-9a1a-775893138c39-utilities\") pod \"1a9aaea6-b976-4954-9a1a-775893138c39\" (UID: \"1a9aaea6-b976-4954-9a1a-775893138c39\") " Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.025128 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9aaea6-b976-4954-9a1a-775893138c39-catalog-content\") pod \"1a9aaea6-b976-4954-9a1a-775893138c39\" (UID: \"1a9aaea6-b976-4954-9a1a-775893138c39\") " Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.025765 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a9aaea6-b976-4954-9a1a-775893138c39-utilities" (OuterVolumeSpecName: "utilities") pod "1a9aaea6-b976-4954-9a1a-775893138c39" (UID: "1a9aaea6-b976-4954-9a1a-775893138c39"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.032447 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a9aaea6-b976-4954-9a1a-775893138c39-kube-api-access-hx7gf" (OuterVolumeSpecName: "kube-api-access-hx7gf") pod "1a9aaea6-b976-4954-9a1a-775893138c39" (UID: "1a9aaea6-b976-4954-9a1a-775893138c39"). InnerVolumeSpecName "kube-api-access-hx7gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.099688 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a9aaea6-b976-4954-9a1a-775893138c39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a9aaea6-b976-4954-9a1a-775893138c39" (UID: "1a9aaea6-b976-4954-9a1a-775893138c39"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.127739 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx7gf\" (UniqueName: \"kubernetes.io/projected/1a9aaea6-b976-4954-9a1a-775893138c39-kube-api-access-hx7gf\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.127809 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9aaea6-b976-4954-9a1a-775893138c39-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.127825 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9aaea6-b976-4954-9a1a-775893138c39-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.357694 4624 generic.go:334] "Generic (PLEG): container finished" podID="1a9aaea6-b976-4954-9a1a-775893138c39" containerID="d0a07da0d6b1966ef58541e9fa281db09d6d8ba480ba4aa8097b5c4226f499c0" exitCode=0 Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.357794 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7d9jt" event={"ID":"1a9aaea6-b976-4954-9a1a-775893138c39","Type":"ContainerDied","Data":"d0a07da0d6b1966ef58541e9fa281db09d6d8ba480ba4aa8097b5c4226f499c0"} Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.357833 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7d9jt" event={"ID":"1a9aaea6-b976-4954-9a1a-775893138c39","Type":"ContainerDied","Data":"de838b4b78c243e7e2ae0ba017f99ec1106ccb69f03abe754ef4b2a84465c8d3"} Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.357853 4624 scope.go:117] "RemoveContainer" containerID="d0a07da0d6b1966ef58541e9fa281db09d6d8ba480ba4aa8097b5c4226f499c0" Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.357888 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7d9jt" Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.358071 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.378414 4624 scope.go:117] "RemoveContainer" containerID="de1aca08a7a93403790a6277503d57e63c1a2cb956c8da3a987c8ad662b843b8" Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.397466 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7d9jt"] Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.405629 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7d9jt"] Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.419144 4624 scope.go:117] "RemoveContainer" containerID="8e3ff4e98aaf5add5735ae9178ae607df9d83bcedc8ce5cb78a31fc04a1960bf" Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.459207 4624 scope.go:117] "RemoveContainer" containerID="d0a07da0d6b1966ef58541e9fa281db09d6d8ba480ba4aa8097b5c4226f499c0" Oct 08 14:46:59 crc kubenswrapper[4624]: E1008 14:46:59.459517 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0a07da0d6b1966ef58541e9fa281db09d6d8ba480ba4aa8097b5c4226f499c0\": container with ID starting with d0a07da0d6b1966ef58541e9fa281db09d6d8ba480ba4aa8097b5c4226f499c0 not found: ID does not exist" containerID="d0a07da0d6b1966ef58541e9fa281db09d6d8ba480ba4aa8097b5c4226f499c0" Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.459551 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0a07da0d6b1966ef58541e9fa281db09d6d8ba480ba4aa8097b5c4226f499c0"} err="failed to get container status \"d0a07da0d6b1966ef58541e9fa281db09d6d8ba480ba4aa8097b5c4226f499c0\": rpc error: code = NotFound desc = could not find container \"d0a07da0d6b1966ef58541e9fa281db09d6d8ba480ba4aa8097b5c4226f499c0\": container with ID starting with d0a07da0d6b1966ef58541e9fa281db09d6d8ba480ba4aa8097b5c4226f499c0 not found: ID does not exist" Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.459573 4624 scope.go:117] "RemoveContainer" containerID="de1aca08a7a93403790a6277503d57e63c1a2cb956c8da3a987c8ad662b843b8" Oct 08 14:46:59 crc kubenswrapper[4624]: E1008 14:46:59.459866 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de1aca08a7a93403790a6277503d57e63c1a2cb956c8da3a987c8ad662b843b8\": container with ID starting with de1aca08a7a93403790a6277503d57e63c1a2cb956c8da3a987c8ad662b843b8 not found: ID does not exist" containerID="de1aca08a7a93403790a6277503d57e63c1a2cb956c8da3a987c8ad662b843b8" Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.459894 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de1aca08a7a93403790a6277503d57e63c1a2cb956c8da3a987c8ad662b843b8"} err="failed to get container status \"de1aca08a7a93403790a6277503d57e63c1a2cb956c8da3a987c8ad662b843b8\": rpc error: code = NotFound desc = could not find container \"de1aca08a7a93403790a6277503d57e63c1a2cb956c8da3a987c8ad662b843b8\": container with ID starting with de1aca08a7a93403790a6277503d57e63c1a2cb956c8da3a987c8ad662b843b8 not found: ID does not exist" Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.459912 4624 scope.go:117] "RemoveContainer" 
containerID="8e3ff4e98aaf5add5735ae9178ae607df9d83bcedc8ce5cb78a31fc04a1960bf" Oct 08 14:46:59 crc kubenswrapper[4624]: E1008 14:46:59.460479 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e3ff4e98aaf5add5735ae9178ae607df9d83bcedc8ce5cb78a31fc04a1960bf\": container with ID starting with 8e3ff4e98aaf5add5735ae9178ae607df9d83bcedc8ce5cb78a31fc04a1960bf not found: ID does not exist" containerID="8e3ff4e98aaf5add5735ae9178ae607df9d83bcedc8ce5cb78a31fc04a1960bf" Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.460505 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e3ff4e98aaf5add5735ae9178ae607df9d83bcedc8ce5cb78a31fc04a1960bf"} err="failed to get container status \"8e3ff4e98aaf5add5735ae9178ae607df9d83bcedc8ce5cb78a31fc04a1960bf\": rpc error: code = NotFound desc = could not find container \"8e3ff4e98aaf5add5735ae9178ae607df9d83bcedc8ce5cb78a31fc04a1960bf\": container with ID starting with 8e3ff4e98aaf5add5735ae9178ae607df9d83bcedc8ce5cb78a31fc04a1960bf not found: ID does not exist" Oct 08 14:46:59 crc kubenswrapper[4624]: I1008 14:46:59.481251 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a9aaea6-b976-4954-9a1a-775893138c39" path="/var/lib/kubelet/pods/1a9aaea6-b976-4954-9a1a-775893138c39/volumes" Oct 08 14:47:05 crc kubenswrapper[4624]: I1008 14:47:05.595581 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54695ff68c-h4xsn" Oct 08 14:47:05 crc kubenswrapper[4624]: I1008 14:47:05.685385 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86988d97b7-ldt5m"] Oct 08 14:47:05 crc kubenswrapper[4624]: I1008 14:47:05.685759 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" podUID="a65b13f8-9d61-4485-a6c9-022dd1edbeb3" containerName="dnsmasq-dns" containerID="cri-o://64969733e541cbd5dcb0c40a2dcd5fde6753886a2d8564a29ce27030b43a1e6d" gracePeriod=10 Oct 08 14:47:05 crc kubenswrapper[4624]: I1008 14:47:05.788365 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-plktb" Oct 08 14:47:05 crc kubenswrapper[4624]: I1008 14:47:05.911062 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-plktb" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.292685 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.402420 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-plktb"] Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.432298 4624 generic.go:334] "Generic (PLEG): container finished" podID="a65b13f8-9d61-4485-a6c9-022dd1edbeb3" containerID="64969733e541cbd5dcb0c40a2dcd5fde6753886a2d8564a29ce27030b43a1e6d" exitCode=0 Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.432919 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.432918 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" event={"ID":"a65b13f8-9d61-4485-a6c9-022dd1edbeb3","Type":"ContainerDied","Data":"64969733e541cbd5dcb0c40a2dcd5fde6753886a2d8564a29ce27030b43a1e6d"} Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.433129 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86988d97b7-ldt5m" event={"ID":"a65b13f8-9d61-4485-a6c9-022dd1edbeb3","Type":"ContainerDied","Data":"0c026b382b4a2f1944a7c396e84f61659d06802b63760e6f55803b334a71a5a0"} Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.433153 4624 scope.go:117] "RemoveContainer" containerID="64969733e541cbd5dcb0c40a2dcd5fde6753886a2d8564a29ce27030b43a1e6d" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.455456 4624 scope.go:117] "RemoveContainer" containerID="1add3885ebacccc2ce0c403a83f760a54275ea284cb764c2e2166bf68b5f67e1" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.474840 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-ovsdbserver-nb\") pod \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.475092 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdphg\" (UniqueName: \"kubernetes.io/projected/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-kube-api-access-rdphg\") pod \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.475795 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-config\") pod \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.475836 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-openstack-edpm-ipam\") pod \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.475867 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-dns-swift-storage-0\") pod \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.475912 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-ovsdbserver-sb\") pod \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.476101 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-dns-svc\") pod \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\" (UID: \"a65b13f8-9d61-4485-a6c9-022dd1edbeb3\") " Oct 08 14:47:06 crc 
kubenswrapper[4624]: I1008 14:47:06.482900 4624 scope.go:117] "RemoveContainer" containerID="64969733e541cbd5dcb0c40a2dcd5fde6753886a2d8564a29ce27030b43a1e6d" Oct 08 14:47:06 crc kubenswrapper[4624]: E1008 14:47:06.484004 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64969733e541cbd5dcb0c40a2dcd5fde6753886a2d8564a29ce27030b43a1e6d\": container with ID starting with 64969733e541cbd5dcb0c40a2dcd5fde6753886a2d8564a29ce27030b43a1e6d not found: ID does not exist" containerID="64969733e541cbd5dcb0c40a2dcd5fde6753886a2d8564a29ce27030b43a1e6d" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.484050 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64969733e541cbd5dcb0c40a2dcd5fde6753886a2d8564a29ce27030b43a1e6d"} err="failed to get container status \"64969733e541cbd5dcb0c40a2dcd5fde6753886a2d8564a29ce27030b43a1e6d\": rpc error: code = NotFound desc = could not find container \"64969733e541cbd5dcb0c40a2dcd5fde6753886a2d8564a29ce27030b43a1e6d\": container with ID starting with 64969733e541cbd5dcb0c40a2dcd5fde6753886a2d8564a29ce27030b43a1e6d not found: ID does not exist" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.484079 4624 scope.go:117] "RemoveContainer" containerID="1add3885ebacccc2ce0c403a83f760a54275ea284cb764c2e2166bf68b5f67e1" Oct 08 14:47:06 crc kubenswrapper[4624]: E1008 14:47:06.484555 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1add3885ebacccc2ce0c403a83f760a54275ea284cb764c2e2166bf68b5f67e1\": container with ID starting with 1add3885ebacccc2ce0c403a83f760a54275ea284cb764c2e2166bf68b5f67e1 not found: ID does not exist" containerID="1add3885ebacccc2ce0c403a83f760a54275ea284cb764c2e2166bf68b5f67e1" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.484710 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1add3885ebacccc2ce0c403a83f760a54275ea284cb764c2e2166bf68b5f67e1"} err="failed to get container status \"1add3885ebacccc2ce0c403a83f760a54275ea284cb764c2e2166bf68b5f67e1\": rpc error: code = NotFound desc = could not find container \"1add3885ebacccc2ce0c403a83f760a54275ea284cb764c2e2166bf68b5f67e1\": container with ID starting with 1add3885ebacccc2ce0c403a83f760a54275ea284cb764c2e2166bf68b5f67e1 not found: ID does not exist" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.488898 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-kube-api-access-rdphg" (OuterVolumeSpecName: "kube-api-access-rdphg") pod "a65b13f8-9d61-4485-a6c9-022dd1edbeb3" (UID: "a65b13f8-9d61-4485-a6c9-022dd1edbeb3"). InnerVolumeSpecName "kube-api-access-rdphg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.537192 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a65b13f8-9d61-4485-a6c9-022dd1edbeb3" (UID: "a65b13f8-9d61-4485-a6c9-022dd1edbeb3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.545527 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-config" (OuterVolumeSpecName: "config") pod "a65b13f8-9d61-4485-a6c9-022dd1edbeb3" (UID: "a65b13f8-9d61-4485-a6c9-022dd1edbeb3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.550626 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a65b13f8-9d61-4485-a6c9-022dd1edbeb3" (UID: "a65b13f8-9d61-4485-a6c9-022dd1edbeb3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.564390 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a65b13f8-9d61-4485-a6c9-022dd1edbeb3" (UID: "a65b13f8-9d61-4485-a6c9-022dd1edbeb3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.569046 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a65b13f8-9d61-4485-a6c9-022dd1edbeb3" (UID: "a65b13f8-9d61-4485-a6c9-022dd1edbeb3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.571068 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "a65b13f8-9d61-4485-a6c9-022dd1edbeb3" (UID: "a65b13f8-9d61-4485-a6c9-022dd1edbeb3"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.580475 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.580523 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdphg\" (UniqueName: \"kubernetes.io/projected/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-kube-api-access-rdphg\") on node \"crc\" DevicePath \"\"" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.580541 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-config\") on node \"crc\" DevicePath \"\"" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.580551 4624 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.580560 4624 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.580570 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.580578 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a65b13f8-9d61-4485-a6c9-022dd1edbeb3-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.766531 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86988d97b7-ldt5m"] Oct 08 14:47:06 crc kubenswrapper[4624]: I1008 14:47:06.775951 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86988d97b7-ldt5m"] Oct 08 14:47:07 crc kubenswrapper[4624]: I1008 14:47:07.442351 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-plktb" podUID="572cc372-dc71-4d72-a0c6-539521c7d3aa" containerName="registry-server" containerID="cri-o://e23d5f29f9684b6197eec5e5a83b8bba602aabfa3134fa3543f2f761e628cf94" gracePeriod=2 Oct 08 14:47:07 crc kubenswrapper[4624]: I1008 14:47:07.477974 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a65b13f8-9d61-4485-a6c9-022dd1edbeb3" path="/var/lib/kubelet/pods/a65b13f8-9d61-4485-a6c9-022dd1edbeb3/volumes" Oct 08 14:47:07 crc kubenswrapper[4624]: I1008 14:47:07.915270 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-plktb" Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.111572 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/572cc372-dc71-4d72-a0c6-539521c7d3aa-catalog-content\") pod \"572cc372-dc71-4d72-a0c6-539521c7d3aa\" (UID: \"572cc372-dc71-4d72-a0c6-539521c7d3aa\") " Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.111787 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttxph\" (UniqueName: \"kubernetes.io/projected/572cc372-dc71-4d72-a0c6-539521c7d3aa-kube-api-access-ttxph\") pod \"572cc372-dc71-4d72-a0c6-539521c7d3aa\" (UID: \"572cc372-dc71-4d72-a0c6-539521c7d3aa\") " Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.111949 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/572cc372-dc71-4d72-a0c6-539521c7d3aa-utilities\") pod \"572cc372-dc71-4d72-a0c6-539521c7d3aa\" (UID: \"572cc372-dc71-4d72-a0c6-539521c7d3aa\") " Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.113461 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/572cc372-dc71-4d72-a0c6-539521c7d3aa-utilities" (OuterVolumeSpecName: "utilities") pod "572cc372-dc71-4d72-a0c6-539521c7d3aa" (UID: "572cc372-dc71-4d72-a0c6-539521c7d3aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.146838 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/572cc372-dc71-4d72-a0c6-539521c7d3aa-kube-api-access-ttxph" (OuterVolumeSpecName: "kube-api-access-ttxph") pod "572cc372-dc71-4d72-a0c6-539521c7d3aa" (UID: "572cc372-dc71-4d72-a0c6-539521c7d3aa"). InnerVolumeSpecName "kube-api-access-ttxph". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.175379 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/572cc372-dc71-4d72-a0c6-539521c7d3aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "572cc372-dc71-4d72-a0c6-539521c7d3aa" (UID: "572cc372-dc71-4d72-a0c6-539521c7d3aa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.214414 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/572cc372-dc71-4d72-a0c6-539521c7d3aa-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.214474 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/572cc372-dc71-4d72-a0c6-539521c7d3aa-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.214489 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttxph\" (UniqueName: \"kubernetes.io/projected/572cc372-dc71-4d72-a0c6-539521c7d3aa-kube-api-access-ttxph\") on node \"crc\" DevicePath \"\"" Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.451936 4624 generic.go:334] "Generic (PLEG): container finished" podID="572cc372-dc71-4d72-a0c6-539521c7d3aa" containerID="e23d5f29f9684b6197eec5e5a83b8bba602aabfa3134fa3543f2f761e628cf94" exitCode=0 Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.451999 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plktb" event={"ID":"572cc372-dc71-4d72-a0c6-539521c7d3aa","Type":"ContainerDied","Data":"e23d5f29f9684b6197eec5e5a83b8bba602aabfa3134fa3543f2f761e628cf94"} Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.452050 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-plktb" Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.452036 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plktb" event={"ID":"572cc372-dc71-4d72-a0c6-539521c7d3aa","Type":"ContainerDied","Data":"a485680cb97da83f181ec69328c8a66e34613e1f8d1bd3c552a4b57bdb421674"} Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.452088 4624 scope.go:117] "RemoveContainer" containerID="e23d5f29f9684b6197eec5e5a83b8bba602aabfa3134fa3543f2f761e628cf94" Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.483833 4624 scope.go:117] "RemoveContainer" containerID="9d087f56fb7f941e22e950a7268a85f121a146302180006156e8404f6777d0ec" Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.497307 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-plktb"] Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.506781 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-plktb"] Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.518459 4624 scope.go:117] "RemoveContainer" containerID="242554f20ab4835667b31d75fc0e7e8e3bbd9a1ffd4ef6904e24342b3623b7e7" Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.557901 4624 scope.go:117] "RemoveContainer" containerID="e23d5f29f9684b6197eec5e5a83b8bba602aabfa3134fa3543f2f761e628cf94" Oct 08 14:47:08 crc kubenswrapper[4624]: E1008 14:47:08.558395 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e23d5f29f9684b6197eec5e5a83b8bba602aabfa3134fa3543f2f761e628cf94\": container with ID starting with e23d5f29f9684b6197eec5e5a83b8bba602aabfa3134fa3543f2f761e628cf94 not found: ID does not exist" containerID="e23d5f29f9684b6197eec5e5a83b8bba602aabfa3134fa3543f2f761e628cf94" Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.558434 
4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e23d5f29f9684b6197eec5e5a83b8bba602aabfa3134fa3543f2f761e628cf94"} err="failed to get container status \"e23d5f29f9684b6197eec5e5a83b8bba602aabfa3134fa3543f2f761e628cf94\": rpc error: code = NotFound desc = could not find container \"e23d5f29f9684b6197eec5e5a83b8bba602aabfa3134fa3543f2f761e628cf94\": container with ID starting with e23d5f29f9684b6197eec5e5a83b8bba602aabfa3134fa3543f2f761e628cf94 not found: ID does not exist" Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.558459 4624 scope.go:117] "RemoveContainer" containerID="9d087f56fb7f941e22e950a7268a85f121a146302180006156e8404f6777d0ec" Oct 08 14:47:08 crc kubenswrapper[4624]: E1008 14:47:08.558983 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d087f56fb7f941e22e950a7268a85f121a146302180006156e8404f6777d0ec\": container with ID starting with 9d087f56fb7f941e22e950a7268a85f121a146302180006156e8404f6777d0ec not found: ID does not exist" containerID="9d087f56fb7f941e22e950a7268a85f121a146302180006156e8404f6777d0ec" Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.559035 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d087f56fb7f941e22e950a7268a85f121a146302180006156e8404f6777d0ec"} err="failed to get container status \"9d087f56fb7f941e22e950a7268a85f121a146302180006156e8404f6777d0ec\": rpc error: code = NotFound desc = could not find container \"9d087f56fb7f941e22e950a7268a85f121a146302180006156e8404f6777d0ec\": container with ID starting with 9d087f56fb7f941e22e950a7268a85f121a146302180006156e8404f6777d0ec not found: ID does not exist" Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.559066 4624 scope.go:117] "RemoveContainer" containerID="242554f20ab4835667b31d75fc0e7e8e3bbd9a1ffd4ef6904e24342b3623b7e7" Oct 08 14:47:08 crc kubenswrapper[4624]: E1008 14:47:08.559450 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"242554f20ab4835667b31d75fc0e7e8e3bbd9a1ffd4ef6904e24342b3623b7e7\": container with ID starting with 242554f20ab4835667b31d75fc0e7e8e3bbd9a1ffd4ef6904e24342b3623b7e7 not found: ID does not exist" containerID="242554f20ab4835667b31d75fc0e7e8e3bbd9a1ffd4ef6904e24342b3623b7e7" Oct 08 14:47:08 crc kubenswrapper[4624]: I1008 14:47:08.559477 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"242554f20ab4835667b31d75fc0e7e8e3bbd9a1ffd4ef6904e24342b3623b7e7"} err="failed to get container status \"242554f20ab4835667b31d75fc0e7e8e3bbd9a1ffd4ef6904e24342b3623b7e7\": rpc error: code = NotFound desc = could not find container \"242554f20ab4835667b31d75fc0e7e8e3bbd9a1ffd4ef6904e24342b3623b7e7\": container with ID starting with 242554f20ab4835667b31d75fc0e7e8e3bbd9a1ffd4ef6904e24342b3623b7e7 not found: ID does not exist" Oct 08 14:47:09 crc kubenswrapper[4624]: I1008 14:47:09.476104 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="572cc372-dc71-4d72-a0c6-539521c7d3aa" path="/var/lib/kubelet/pods/572cc372-dc71-4d72-a0c6-539521c7d3aa/volumes" Oct 08 14:47:13 crc kubenswrapper[4624]: I1008 14:47:13.516346 4624 generic.go:334] "Generic (PLEG): container finished" podID="9da75813-8748-41a3-8bea-bc7987ccc7a5" containerID="20275c4bd5c4320a318a0f4fe2c53f3664c7ec69869c0589c902f2ff0df5f116" exitCode=0 Oct 08 14:47:13 crc kubenswrapper[4624]: 
I1008 14:47:13.516701 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9da75813-8748-41a3-8bea-bc7987ccc7a5","Type":"ContainerDied","Data":"20275c4bd5c4320a318a0f4fe2c53f3664c7ec69869c0589c902f2ff0df5f116"} Oct 08 14:47:14 crc kubenswrapper[4624]: I1008 14:47:14.527509 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9da75813-8748-41a3-8bea-bc7987ccc7a5","Type":"ContainerStarted","Data":"487e9b8294c99d786a719911ff0a9b67b038a606e91371eef3255825f2d41c27"} Oct 08 14:47:14 crc kubenswrapper[4624]: I1008 14:47:14.528057 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 08 14:47:14 crc kubenswrapper[4624]: I1008 14:47:14.530290 4624 generic.go:334] "Generic (PLEG): container finished" podID="0795aa07-68f2-4a23-b388-1237f212f537" containerID="22793abde43b4c4ae603f5c393dceaa9d03a075405bab40bb20587dc9dd1d330" exitCode=0 Oct 08 14:47:14 crc kubenswrapper[4624]: I1008 14:47:14.530324 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0795aa07-68f2-4a23-b388-1237f212f537","Type":"ContainerDied","Data":"22793abde43b4c4ae603f5c393dceaa9d03a075405bab40bb20587dc9dd1d330"} Oct 08 14:47:14 crc kubenswrapper[4624]: I1008 14:47:14.568319 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.568295137 podStartE2EDuration="36.568295137s" podCreationTimestamp="2025-10-08 14:46:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:47:14.561255919 +0000 UTC m=+1459.712190996" watchObservedRunningTime="2025-10-08 14:47:14.568295137 +0000 UTC m=+1459.719230224" Oct 08 14:47:15 crc kubenswrapper[4624]: I1008 14:47:15.541099 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0795aa07-68f2-4a23-b388-1237f212f537","Type":"ContainerStarted","Data":"27fe5f935e5ca16c252a4ad07457f36d78216c0c81f05c85a5c485a665325e28"} Oct 08 14:47:15 crc kubenswrapper[4624]: I1008 14:47:15.542369 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:47:15 crc kubenswrapper[4624]: I1008 14:47:15.583968 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.583944988 podStartE2EDuration="36.583944988s" podCreationTimestamp="2025-10-08 14:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:47:15.573663548 +0000 UTC m=+1460.724598625" watchObservedRunningTime="2025-10-08 14:47:15.583944988 +0000 UTC m=+1460.734880065" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.279175 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8"] Oct 08 14:47:24 crc kubenswrapper[4624]: E1008 14:47:24.280160 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0519d2a-5bda-421c-88fc-9319f3bc7a29" containerName="dnsmasq-dns" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.280174 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0519d2a-5bda-421c-88fc-9319f3bc7a29" containerName="dnsmasq-dns" Oct 08 14:47:24 crc kubenswrapper[4624]: E1008 14:47:24.280188 4624 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9aaea6-b976-4954-9a1a-775893138c39" containerName="extract-content" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.280194 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9aaea6-b976-4954-9a1a-775893138c39" containerName="extract-content" Oct 08 14:47:24 crc kubenswrapper[4624]: E1008 14:47:24.280216 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="572cc372-dc71-4d72-a0c6-539521c7d3aa" containerName="registry-server" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.280222 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="572cc372-dc71-4d72-a0c6-539521c7d3aa" containerName="registry-server" Oct 08 14:47:24 crc kubenswrapper[4624]: E1008 14:47:24.280241 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="572cc372-dc71-4d72-a0c6-539521c7d3aa" containerName="extract-utilities" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.280246 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="572cc372-dc71-4d72-a0c6-539521c7d3aa" containerName="extract-utilities" Oct 08 14:47:24 crc kubenswrapper[4624]: E1008 14:47:24.280256 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9aaea6-b976-4954-9a1a-775893138c39" containerName="registry-server" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.280261 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9aaea6-b976-4954-9a1a-775893138c39" containerName="registry-server" Oct 08 14:47:24 crc kubenswrapper[4624]: E1008 14:47:24.280287 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="572cc372-dc71-4d72-a0c6-539521c7d3aa" containerName="extract-content" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.280294 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="572cc372-dc71-4d72-a0c6-539521c7d3aa" containerName="extract-content" Oct 08 14:47:24 crc kubenswrapper[4624]: E1008 14:47:24.280302 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a65b13f8-9d61-4485-a6c9-022dd1edbeb3" containerName="init" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.280308 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a65b13f8-9d61-4485-a6c9-022dd1edbeb3" containerName="init" Oct 08 14:47:24 crc kubenswrapper[4624]: E1008 14:47:24.280319 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0519d2a-5bda-421c-88fc-9319f3bc7a29" containerName="init" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.280325 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0519d2a-5bda-421c-88fc-9319f3bc7a29" containerName="init" Oct 08 14:47:24 crc kubenswrapper[4624]: E1008 14:47:24.280337 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9aaea6-b976-4954-9a1a-775893138c39" containerName="extract-utilities" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.280344 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9aaea6-b976-4954-9a1a-775893138c39" containerName="extract-utilities" Oct 08 14:47:24 crc kubenswrapper[4624]: E1008 14:47:24.280352 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a65b13f8-9d61-4485-a6c9-022dd1edbeb3" containerName="dnsmasq-dns" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.280359 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a65b13f8-9d61-4485-a6c9-022dd1edbeb3" containerName="dnsmasq-dns" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.280552 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="572cc372-dc71-4d72-a0c6-539521c7d3aa" containerName="registry-server"
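These cpu_manager.go:410 / state_mem.go:107 pairs (and the memory_manager.go:354 lines that follow) run while admitting the new repo-setup pod: before computing placements for it, the CPU and memory managers prune checkpointed per-container assignments belonging to pods no longer on the node, here the dnsmasq and marketplace pods deleted earlier. A minimal sketch of that pruning pattern, with a plain map standing in for the managers' checkpointed state; the CPU lists and the chosen UIDs are illustrative only:

```go
// Prune checkpointed per-(pod, container) assignments for pods that no
// longer exist, mirroring the RemoveStaleState entries in the log above.
package main

import "fmt"

type key struct{ podUID, container string }

func removeStaleState(assignments map[key][]int, activePods map[string]bool) {
	for k := range assignments { // deleting while ranging is safe in Go
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(assignments, k)
		}
	}
}

func main() {
	state := map[key][]int{
		{"a65b13f8-9d61-4485-a6c9-022dd1edbeb3", "dnsmasq-dns"}:     {2, 3},
		{"572cc372-dc71-4d72-a0c6-539521c7d3aa", "registry-server"}: {4, 5},
	}
	// Neither pod is active any longer, so both entries are dropped.
	removeStaleState(state, map[string]bool{})
	fmt.Println("remaining assignments:", len(state))
}
```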
"RemoveStaleState removing state" podUID="572cc372-dc71-4d72-a0c6-539521c7d3aa" containerName="registry-server" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.280573 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a9aaea6-b976-4954-9a1a-775893138c39" containerName="registry-server" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.280580 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="a65b13f8-9d61-4485-a6c9-022dd1edbeb3" containerName="dnsmasq-dns" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.280595 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0519d2a-5bda-421c-88fc-9319f3bc7a29" containerName="dnsmasq-dns" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.281194 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.283338 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.283502 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.283403 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v6sxt" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.284430 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.300900 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8"] Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.348077 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f05ed9-3980-46e8-96b7-ef08d01f09ee-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8\" (UID: \"89f05ed9-3980-46e8-96b7-ef08d01f09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.348593 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89f05ed9-3980-46e8-96b7-ef08d01f09ee-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8\" (UID: \"89f05ed9-3980-46e8-96b7-ef08d01f09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.348782 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdv26\" (UniqueName: \"kubernetes.io/projected/89f05ed9-3980-46e8-96b7-ef08d01f09ee-kube-api-access-gdv26\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8\" (UID: \"89f05ed9-3980-46e8-96b7-ef08d01f09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.348994 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89f05ed9-3980-46e8-96b7-ef08d01f09ee-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8\" (UID: 
\"89f05ed9-3980-46e8-96b7-ef08d01f09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.451349 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89f05ed9-3980-46e8-96b7-ef08d01f09ee-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8\" (UID: \"89f05ed9-3980-46e8-96b7-ef08d01f09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.451487 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f05ed9-3980-46e8-96b7-ef08d01f09ee-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8\" (UID: \"89f05ed9-3980-46e8-96b7-ef08d01f09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.451517 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89f05ed9-3980-46e8-96b7-ef08d01f09ee-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8\" (UID: \"89f05ed9-3980-46e8-96b7-ef08d01f09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.451583 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdv26\" (UniqueName: \"kubernetes.io/projected/89f05ed9-3980-46e8-96b7-ef08d01f09ee-kube-api-access-gdv26\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8\" (UID: \"89f05ed9-3980-46e8-96b7-ef08d01f09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.467208 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f05ed9-3980-46e8-96b7-ef08d01f09ee-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8\" (UID: \"89f05ed9-3980-46e8-96b7-ef08d01f09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.472337 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89f05ed9-3980-46e8-96b7-ef08d01f09ee-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8\" (UID: \"89f05ed9-3980-46e8-96b7-ef08d01f09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.473297 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89f05ed9-3980-46e8-96b7-ef08d01f09ee-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8\" (UID: \"89f05ed9-3980-46e8-96b7-ef08d01f09ee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.476405 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdv26\" (UniqueName: \"kubernetes.io/projected/89f05ed9-3980-46e8-96b7-ef08d01f09ee-kube-api-access-gdv26\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8\" (UID: \"89f05ed9-3980-46e8-96b7-ef08d01f09ee\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8" Oct 08 14:47:24 crc kubenswrapper[4624]: I1008 14:47:24.612864 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8" Oct 08 14:47:25 crc kubenswrapper[4624]: I1008 14:47:25.390741 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8"] Oct 08 14:47:25 crc kubenswrapper[4624]: I1008 14:47:25.651614 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8" event={"ID":"89f05ed9-3980-46e8-96b7-ef08d01f09ee","Type":"ContainerStarted","Data":"cae406dc4f79726fbc16dd4eaf3fc664dd499658e23f5df1531ab43c9d956c2d"} Oct 08 14:47:28 crc kubenswrapper[4624]: I1008 14:47:28.618918 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 08 14:47:29 crc kubenswrapper[4624]: I1008 14:47:29.546118 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 08 14:47:36 crc kubenswrapper[4624]: I1008 14:47:36.759972 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8" event={"ID":"89f05ed9-3980-46e8-96b7-ef08d01f09ee","Type":"ContainerStarted","Data":"bad0d1d6e976ee2491a9a98928ca4839428ca897edabfc78de82dc3cfca129ed"} Oct 08 14:47:36 crc kubenswrapper[4624]: I1008 14:47:36.782166 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8" podStartSLOduration=1.829845288 podStartE2EDuration="12.782146647s" podCreationTimestamp="2025-10-08 14:47:24 +0000 UTC" firstStartedPulling="2025-10-08 14:47:25.406767956 +0000 UTC m=+1470.557703033" lastFinishedPulling="2025-10-08 14:47:36.359069315 +0000 UTC m=+1481.510004392" observedRunningTime="2025-10-08 14:47:36.775042557 +0000 UTC m=+1481.925977634" watchObservedRunningTime="2025-10-08 14:47:36.782146647 +0000 UTC m=+1481.933081724" Oct 08 14:47:39 crc kubenswrapper[4624]: I1008 14:47:39.220194 4624 scope.go:117] "RemoveContainer" containerID="da719f698a783f03a9aeca3f8634dbc83f874d6792863ed431947ce50e36881e" Oct 08 14:47:48 crc kubenswrapper[4624]: I1008 14:47:48.877044 4624 generic.go:334] "Generic (PLEG): container finished" podID="89f05ed9-3980-46e8-96b7-ef08d01f09ee" containerID="bad0d1d6e976ee2491a9a98928ca4839428ca897edabfc78de82dc3cfca129ed" exitCode=0 Oct 08 14:47:48 crc kubenswrapper[4624]: I1008 14:47:48.877093 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8" event={"ID":"89f05ed9-3980-46e8-96b7-ef08d01f09ee","Type":"ContainerDied","Data":"bad0d1d6e976ee2491a9a98928ca4839428ca897edabfc78de82dc3cfca129ed"} Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.333203 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8" Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.483809 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdv26\" (UniqueName: \"kubernetes.io/projected/89f05ed9-3980-46e8-96b7-ef08d01f09ee-kube-api-access-gdv26\") pod \"89f05ed9-3980-46e8-96b7-ef08d01f09ee\" (UID: \"89f05ed9-3980-46e8-96b7-ef08d01f09ee\") " Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.483880 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f05ed9-3980-46e8-96b7-ef08d01f09ee-repo-setup-combined-ca-bundle\") pod \"89f05ed9-3980-46e8-96b7-ef08d01f09ee\" (UID: \"89f05ed9-3980-46e8-96b7-ef08d01f09ee\") " Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.483986 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89f05ed9-3980-46e8-96b7-ef08d01f09ee-inventory\") pod \"89f05ed9-3980-46e8-96b7-ef08d01f09ee\" (UID: \"89f05ed9-3980-46e8-96b7-ef08d01f09ee\") " Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.484191 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89f05ed9-3980-46e8-96b7-ef08d01f09ee-ssh-key\") pod \"89f05ed9-3980-46e8-96b7-ef08d01f09ee\" (UID: \"89f05ed9-3980-46e8-96b7-ef08d01f09ee\") " Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.489749 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89f05ed9-3980-46e8-96b7-ef08d01f09ee-kube-api-access-gdv26" (OuterVolumeSpecName: "kube-api-access-gdv26") pod "89f05ed9-3980-46e8-96b7-ef08d01f09ee" (UID: "89f05ed9-3980-46e8-96b7-ef08d01f09ee"). InnerVolumeSpecName "kube-api-access-gdv26". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.493478 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f05ed9-3980-46e8-96b7-ef08d01f09ee-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "89f05ed9-3980-46e8-96b7-ef08d01f09ee" (UID: "89f05ed9-3980-46e8-96b7-ef08d01f09ee"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.516796 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f05ed9-3980-46e8-96b7-ef08d01f09ee-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "89f05ed9-3980-46e8-96b7-ef08d01f09ee" (UID: "89f05ed9-3980-46e8-96b7-ef08d01f09ee"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.521245 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f05ed9-3980-46e8-96b7-ef08d01f09ee-inventory" (OuterVolumeSpecName: "inventory") pod "89f05ed9-3980-46e8-96b7-ef08d01f09ee" (UID: "89f05ed9-3980-46e8-96b7-ef08d01f09ee"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.587802 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89f05ed9-3980-46e8-96b7-ef08d01f09ee-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.587841 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdv26\" (UniqueName: \"kubernetes.io/projected/89f05ed9-3980-46e8-96b7-ef08d01f09ee-kube-api-access-gdv26\") on node \"crc\" DevicePath \"\"" Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.587857 4624 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f05ed9-3980-46e8-96b7-ef08d01f09ee-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.587973 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89f05ed9-3980-46e8-96b7-ef08d01f09ee-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.897228 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8" event={"ID":"89f05ed9-3980-46e8-96b7-ef08d01f09ee","Type":"ContainerDied","Data":"cae406dc4f79726fbc16dd4eaf3fc664dd499658e23f5df1531ab43c9d956c2d"} Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.897588 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cae406dc4f79726fbc16dd4eaf3fc664dd499658e23f5df1531ab43c9d956c2d" Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.897275 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8" Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.983435 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-9rfls"] Oct 08 14:47:50 crc kubenswrapper[4624]: E1008 14:47:50.983998 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f05ed9-3980-46e8-96b7-ef08d01f09ee" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.984021 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f05ed9-3980-46e8-96b7-ef08d01f09ee" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.984267 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="89f05ed9-3980-46e8-96b7-ef08d01f09ee" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.985085 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9rfls" Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.987527 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.988000 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.988077 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 08 14:47:50 crc kubenswrapper[4624]: I1008 14:47:50.990650 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v6sxt" Oct 08 14:47:51 crc kubenswrapper[4624]: I1008 14:47:51.002552 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-9rfls"] Oct 08 14:47:51 crc kubenswrapper[4624]: I1008 14:47:51.096750 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5czdl\" (UniqueName: \"kubernetes.io/projected/eda4ead7-208d-4aed-9f74-ef58b401d591-kube-api-access-5czdl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9rfls\" (UID: \"eda4ead7-208d-4aed-9f74-ef58b401d591\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9rfls" Oct 08 14:47:51 crc kubenswrapper[4624]: I1008 14:47:51.096813 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eda4ead7-208d-4aed-9f74-ef58b401d591-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9rfls\" (UID: \"eda4ead7-208d-4aed-9f74-ef58b401d591\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9rfls" Oct 08 14:47:51 crc kubenswrapper[4624]: I1008 14:47:51.097180 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eda4ead7-208d-4aed-9f74-ef58b401d591-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9rfls\" (UID: \"eda4ead7-208d-4aed-9f74-ef58b401d591\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9rfls" Oct 08 14:47:51 crc kubenswrapper[4624]: I1008 14:47:51.199048 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5czdl\" (UniqueName: \"kubernetes.io/projected/eda4ead7-208d-4aed-9f74-ef58b401d591-kube-api-access-5czdl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9rfls\" (UID: \"eda4ead7-208d-4aed-9f74-ef58b401d591\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9rfls" Oct 08 14:47:51 crc kubenswrapper[4624]: I1008 14:47:51.199130 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eda4ead7-208d-4aed-9f74-ef58b401d591-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9rfls\" (UID: \"eda4ead7-208d-4aed-9f74-ef58b401d591\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9rfls" Oct 08 14:47:51 crc kubenswrapper[4624]: I1008 14:47:51.199242 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eda4ead7-208d-4aed-9f74-ef58b401d591-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9rfls\" (UID: \"eda4ead7-208d-4aed-9f74-ef58b401d591\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9rfls" Oct 08 14:47:51 crc kubenswrapper[4624]: I1008 14:47:51.204089 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eda4ead7-208d-4aed-9f74-ef58b401d591-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9rfls\" (UID: \"eda4ead7-208d-4aed-9f74-ef58b401d591\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9rfls" Oct 08 14:47:51 crc kubenswrapper[4624]: I1008 14:47:51.204095 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eda4ead7-208d-4aed-9f74-ef58b401d591-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9rfls\" (UID: \"eda4ead7-208d-4aed-9f74-ef58b401d591\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9rfls" Oct 08 14:47:51 crc kubenswrapper[4624]: I1008 14:47:51.217819 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5czdl\" (UniqueName: \"kubernetes.io/projected/eda4ead7-208d-4aed-9f74-ef58b401d591-kube-api-access-5czdl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9rfls\" (UID: \"eda4ead7-208d-4aed-9f74-ef58b401d591\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9rfls" Oct 08 14:47:51 crc kubenswrapper[4624]: I1008 14:47:51.301087 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9rfls" Oct 08 14:47:51 crc kubenswrapper[4624]: I1008 14:47:51.879343 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-9rfls"] Oct 08 14:47:51 crc kubenswrapper[4624]: W1008 14:47:51.895916 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeda4ead7_208d_4aed_9f74_ef58b401d591.slice/crio-8c7e9f7077ffc81c2d14ac5f5278051b1a222195529eceef34f793ae18f337a8 WatchSource:0}: Error finding container 8c7e9f7077ffc81c2d14ac5f5278051b1a222195529eceef34f793ae18f337a8: Status 404 returned error can't find the container with id 8c7e9f7077ffc81c2d14ac5f5278051b1a222195529eceef34f793ae18f337a8 Oct 08 14:47:51 crc kubenswrapper[4624]: I1008 14:47:51.909960 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9rfls" event={"ID":"eda4ead7-208d-4aed-9f74-ef58b401d591","Type":"ContainerStarted","Data":"8c7e9f7077ffc81c2d14ac5f5278051b1a222195529eceef34f793ae18f337a8"} Oct 08 14:47:52 crc kubenswrapper[4624]: I1008 14:47:52.921319 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9rfls" event={"ID":"eda4ead7-208d-4aed-9f74-ef58b401d591","Type":"ContainerStarted","Data":"7c641b7a9fcbf5d13be447799a9053eaea96b4bde6935a6c53546332e8499dec"} Oct 08 14:47:52 crc kubenswrapper[4624]: I1008 14:47:52.940563 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9rfls" podStartSLOduration=2.361079324 podStartE2EDuration="2.94054374s" podCreationTimestamp="2025-10-08 14:47:50 +0000 UTC" firstStartedPulling="2025-10-08 14:47:51.89806361 +0000 UTC m=+1497.048998687" lastFinishedPulling="2025-10-08 14:47:52.477528026 +0000 UTC m=+1497.628463103" observedRunningTime="2025-10-08 14:47:52.935179192 +0000 UTC m=+1498.086114289" watchObservedRunningTime="2025-10-08 14:47:52.94054374 +0000 UTC m=+1498.091478817" 
Oct 08 14:47:55 crc kubenswrapper[4624]: I1008 14:47:55.948599 4624 generic.go:334] "Generic (PLEG): container finished" podID="eda4ead7-208d-4aed-9f74-ef58b401d591" containerID="7c641b7a9fcbf5d13be447799a9053eaea96b4bde6935a6c53546332e8499dec" exitCode=0 Oct 08 14:47:55 crc kubenswrapper[4624]: I1008 14:47:55.948808 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9rfls" event={"ID":"eda4ead7-208d-4aed-9f74-ef58b401d591","Type":"ContainerDied","Data":"7c641b7a9fcbf5d13be447799a9053eaea96b4bde6935a6c53546332e8499dec"} Oct 08 14:47:57 crc kubenswrapper[4624]: I1008 14:47:57.384214 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9rfls" Oct 08 14:47:57 crc kubenswrapper[4624]: I1008 14:47:57.543894 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eda4ead7-208d-4aed-9f74-ef58b401d591-inventory\") pod \"eda4ead7-208d-4aed-9f74-ef58b401d591\" (UID: \"eda4ead7-208d-4aed-9f74-ef58b401d591\") " Oct 08 14:47:57 crc kubenswrapper[4624]: I1008 14:47:57.544225 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5czdl\" (UniqueName: \"kubernetes.io/projected/eda4ead7-208d-4aed-9f74-ef58b401d591-kube-api-access-5czdl\") pod \"eda4ead7-208d-4aed-9f74-ef58b401d591\" (UID: \"eda4ead7-208d-4aed-9f74-ef58b401d591\") " Oct 08 14:47:57 crc kubenswrapper[4624]: I1008 14:47:57.544422 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eda4ead7-208d-4aed-9f74-ef58b401d591-ssh-key\") pod \"eda4ead7-208d-4aed-9f74-ef58b401d591\" (UID: \"eda4ead7-208d-4aed-9f74-ef58b401d591\") " Oct 08 14:47:57 crc kubenswrapper[4624]: I1008 14:47:57.552439 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda4ead7-208d-4aed-9f74-ef58b401d591-kube-api-access-5czdl" (OuterVolumeSpecName: "kube-api-access-5czdl") pod "eda4ead7-208d-4aed-9f74-ef58b401d591" (UID: "eda4ead7-208d-4aed-9f74-ef58b401d591"). InnerVolumeSpecName "kube-api-access-5czdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:47:57 crc kubenswrapper[4624]: I1008 14:47:57.576694 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eda4ead7-208d-4aed-9f74-ef58b401d591-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eda4ead7-208d-4aed-9f74-ef58b401d591" (UID: "eda4ead7-208d-4aed-9f74-ef58b401d591"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:47:57 crc kubenswrapper[4624]: I1008 14:47:57.579785 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eda4ead7-208d-4aed-9f74-ef58b401d591-inventory" (OuterVolumeSpecName: "inventory") pod "eda4ead7-208d-4aed-9f74-ef58b401d591" (UID: "eda4ead7-208d-4aed-9f74-ef58b401d591"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:47:57 crc kubenswrapper[4624]: I1008 14:47:57.647726 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eda4ead7-208d-4aed-9f74-ef58b401d591-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 14:47:57 crc kubenswrapper[4624]: I1008 14:47:57.647776 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eda4ead7-208d-4aed-9f74-ef58b401d591-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 14:47:57 crc kubenswrapper[4624]: I1008 14:47:57.647787 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5czdl\" (UniqueName: \"kubernetes.io/projected/eda4ead7-208d-4aed-9f74-ef58b401d591-kube-api-access-5czdl\") on node \"crc\" DevicePath \"\"" Oct 08 14:47:57 crc kubenswrapper[4624]: I1008 14:47:57.970590 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9rfls" event={"ID":"eda4ead7-208d-4aed-9f74-ef58b401d591","Type":"ContainerDied","Data":"8c7e9f7077ffc81c2d14ac5f5278051b1a222195529eceef34f793ae18f337a8"} Oct 08 14:47:57 crc kubenswrapper[4624]: I1008 14:47:57.970668 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c7e9f7077ffc81c2d14ac5f5278051b1a222195529eceef34f793ae18f337a8" Oct 08 14:47:57 crc kubenswrapper[4624]: I1008 14:47:57.970726 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9rfls" Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.061176 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf"] Oct 08 14:47:58 crc kubenswrapper[4624]: E1008 14:47:58.061702 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda4ead7-208d-4aed-9f74-ef58b401d591" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.061723 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda4ead7-208d-4aed-9f74-ef58b401d591" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.061946 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda4ead7-208d-4aed-9f74-ef58b401d591" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.062594 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf" Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.065849 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v6sxt" Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.066101 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.066137 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.066204 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.097616 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf"] Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.157148 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsgsn\" (UniqueName: \"kubernetes.io/projected/f0b620ad-209a-49a8-90cd-f4780a2565a3-kube-api-access-fsgsn\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf\" (UID: \"f0b620ad-209a-49a8-90cd-f4780a2565a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf" Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.157227 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b620ad-209a-49a8-90cd-f4780a2565a3-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf\" (UID: \"f0b620ad-209a-49a8-90cd-f4780a2565a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf" Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.157265 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0b620ad-209a-49a8-90cd-f4780a2565a3-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf\" (UID: \"f0b620ad-209a-49a8-90cd-f4780a2565a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf" Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.157298 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f0b620ad-209a-49a8-90cd-f4780a2565a3-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf\" (UID: \"f0b620ad-209a-49a8-90cd-f4780a2565a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf" Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.258922 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0b620ad-209a-49a8-90cd-f4780a2565a3-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf\" (UID: \"f0b620ad-209a-49a8-90cd-f4780a2565a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf" Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.258986 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f0b620ad-209a-49a8-90cd-f4780a2565a3-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf\" (UID: 
\"f0b620ad-209a-49a8-90cd-f4780a2565a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf" Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.259089 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsgsn\" (UniqueName: \"kubernetes.io/projected/f0b620ad-209a-49a8-90cd-f4780a2565a3-kube-api-access-fsgsn\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf\" (UID: \"f0b620ad-209a-49a8-90cd-f4780a2565a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf" Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.259142 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b620ad-209a-49a8-90cd-f4780a2565a3-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf\" (UID: \"f0b620ad-209a-49a8-90cd-f4780a2565a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf" Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.268406 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f0b620ad-209a-49a8-90cd-f4780a2565a3-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf\" (UID: \"f0b620ad-209a-49a8-90cd-f4780a2565a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf" Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.273606 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b620ad-209a-49a8-90cd-f4780a2565a3-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf\" (UID: \"f0b620ad-209a-49a8-90cd-f4780a2565a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf" Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.277858 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsgsn\" (UniqueName: \"kubernetes.io/projected/f0b620ad-209a-49a8-90cd-f4780a2565a3-kube-api-access-fsgsn\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf\" (UID: \"f0b620ad-209a-49a8-90cd-f4780a2565a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf" Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.281423 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0b620ad-209a-49a8-90cd-f4780a2565a3-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf\" (UID: \"f0b620ad-209a-49a8-90cd-f4780a2565a3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf" Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.383597 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf" Oct 08 14:47:58 crc kubenswrapper[4624]: W1008 14:47:58.927776 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0b620ad_209a_49a8_90cd_f4780a2565a3.slice/crio-4edc8d09c588cea7b88dfa3719f7a350c952184365ce49e14494f1620c0f780d WatchSource:0}: Error finding container 4edc8d09c588cea7b88dfa3719f7a350c952184365ce49e14494f1620c0f780d: Status 404 returned error can't find the container with id 4edc8d09c588cea7b88dfa3719f7a350c952184365ce49e14494f1620c0f780d Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.928619 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf"] Oct 08 14:47:58 crc kubenswrapper[4624]: I1008 14:47:58.980333 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf" event={"ID":"f0b620ad-209a-49a8-90cd-f4780a2565a3","Type":"ContainerStarted","Data":"4edc8d09c588cea7b88dfa3719f7a350c952184365ce49e14494f1620c0f780d"} Oct 08 14:47:59 crc kubenswrapper[4624]: I1008 14:47:59.989710 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf" event={"ID":"f0b620ad-209a-49a8-90cd-f4780a2565a3","Type":"ContainerStarted","Data":"85fbd5935494b09f9f9671d31cb3ffadd4d27321e30d2669f08de183f75cc3a6"} Oct 08 14:48:00 crc kubenswrapper[4624]: I1008 14:48:00.010396 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf" podStartSLOduration=1.557835694 podStartE2EDuration="2.010377007s" podCreationTimestamp="2025-10-08 14:47:58 +0000 UTC" firstStartedPulling="2025-10-08 14:47:58.93026782 +0000 UTC m=+1504.081202897" lastFinishedPulling="2025-10-08 14:47:59.382809133 +0000 UTC m=+1504.533744210" observedRunningTime="2025-10-08 14:48:00.002678229 +0000 UTC m=+1505.153613336" watchObservedRunningTime="2025-10-08 14:48:00.010377007 +0000 UTC m=+1505.161312084" Oct 08 14:48:00 crc kubenswrapper[4624]: I1008 14:48:00.077269 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:48:00 crc kubenswrapper[4624]: I1008 14:48:00.077384 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:48:30 crc kubenswrapper[4624]: I1008 14:48:30.076499 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:48:30 crc kubenswrapper[4624]: I1008 14:48:30.077127 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:49:00 crc kubenswrapper[4624]: I1008 14:49:00.077423 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:49:00 crc kubenswrapper[4624]: I1008 14:49:00.078277 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:49:00 crc kubenswrapper[4624]: I1008 14:49:00.078323 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:49:00 crc kubenswrapper[4624]: I1008 14:49:00.079133 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 14:49:00 crc kubenswrapper[4624]: I1008 14:49:00.079191 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c" gracePeriod=600 Oct 08 14:49:00 crc kubenswrapper[4624]: E1008 14:49:00.203525 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 14:49:00 crc kubenswrapper[4624]: I1008 14:49:00.562277 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c" exitCode=0 Oct 08 14:49:00 crc kubenswrapper[4624]: I1008 14:49:00.562344 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c"} Oct 08 14:49:00 crc kubenswrapper[4624]: I1008 14:49:00.562710 4624 scope.go:117] "RemoveContainer" containerID="b715151b06e81079e843f1e4731eae829cdb61d676150f9179acbd18ff769765" Oct 08 14:49:00 crc kubenswrapper[4624]: I1008 14:49:00.563495 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c" Oct 08 14:49:00 crc kubenswrapper[4624]: E1008 14:49:00.563881 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 14:49:13 crc kubenswrapper[4624]: I1008 14:49:13.465994 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c" Oct 08 14:49:13 crc kubenswrapper[4624]: E1008 14:49:13.466792 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 14:49:27 crc kubenswrapper[4624]: I1008 14:49:27.466510 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c" Oct 08 14:49:27 crc kubenswrapper[4624]: E1008 14:49:27.467966 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 14:49:39 crc kubenswrapper[4624]: I1008 14:49:39.360966 4624 scope.go:117] "RemoveContainer" containerID="725adcb0716670ee444ee9dd9d584c172a9fb89a97fa996a1d8b72c815b9b0b1" Oct 08 14:49:39 crc kubenswrapper[4624]: I1008 14:49:39.387501 4624 scope.go:117] "RemoveContainer" containerID="63bdb4cf09e0d7708e6fa813c9015b0fa2983784b0cc6cad32e382cdb3c47693" Oct 08 14:49:39 crc kubenswrapper[4624]: I1008 14:49:39.412014 4624 scope.go:117] "RemoveContainer" containerID="81d7020e8fd2ee713efdddc70593076cfc3ad528dcfd090a661e86971f2768f0" Oct 08 14:49:39 crc kubenswrapper[4624]: I1008 14:49:39.433331 4624 scope.go:117] "RemoveContainer" containerID="f15c7028b09f795119ae44762e5a19c9b419b53eeb32590924d3deea617a6249" Oct 08 14:49:41 crc kubenswrapper[4624]: I1008 14:49:41.466287 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c" Oct 08 14:49:41 crc kubenswrapper[4624]: E1008 14:49:41.466881 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 14:49:56 crc kubenswrapper[4624]: I1008 14:49:56.465620 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c" Oct 08 14:49:56 crc kubenswrapper[4624]: E1008 14:49:56.466430 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 14:50:06 crc kubenswrapper[4624]: I1008 14:50:06.058612 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-4jnw5"] Oct 08 14:50:06 crc kubenswrapper[4624]: I1008 14:50:06.072018 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-4jnw5"] Oct 08 14:50:07 crc kubenswrapper[4624]: I1008 14:50:07.028690 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-wkgnr"] Oct 08 14:50:07 crc kubenswrapper[4624]: I1008 14:50:07.038966 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-6xs6m"] Oct 08 14:50:07 crc kubenswrapper[4624]: I1008 14:50:07.049335 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-zvn72"] Oct 08 14:50:07 crc kubenswrapper[4624]: I1008 14:50:07.057868 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-wkgnr"] Oct 08 14:50:07 crc kubenswrapper[4624]: I1008 14:50:07.067461 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-6xs6m"] Oct 08 14:50:07 crc kubenswrapper[4624]: I1008 14:50:07.076926 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-zvn72"] Oct 08 14:50:07 crc kubenswrapper[4624]: I1008 14:50:07.476857 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e56418a-dbf3-46b1-9f37-c004b9abf454" path="/var/lib/kubelet/pods/0e56418a-dbf3-46b1-9f37-c004b9abf454/volumes" Oct 08 14:50:07 crc kubenswrapper[4624]: I1008 14:50:07.481156 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65bca6e6-25cd-4879-a092-b9e24b2aa1e3" path="/var/lib/kubelet/pods/65bca6e6-25cd-4879-a092-b9e24b2aa1e3/volumes" Oct 08 14:50:07 crc kubenswrapper[4624]: I1008 14:50:07.484568 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a449627c-78bf-4c07-bda1-f9f3005d4782" path="/var/lib/kubelet/pods/a449627c-78bf-4c07-bda1-f9f3005d4782/volumes" Oct 08 14:50:07 crc kubenswrapper[4624]: I1008 14:50:07.486894 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae7533e2-e271-4dcf-94c1-6bf63f0c4c54" path="/var/lib/kubelet/pods/ae7533e2-e271-4dcf-94c1-6bf63f0c4c54/volumes" Oct 08 14:50:08 crc kubenswrapper[4624]: I1008 14:50:08.038056 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-fsb77"] Oct 08 14:50:08 crc kubenswrapper[4624]: I1008 14:50:08.045920 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-mkdvw"] Oct 08 14:50:08 crc kubenswrapper[4624]: I1008 14:50:08.053791 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-mkdvw"] Oct 08 14:50:08 crc kubenswrapper[4624]: I1008 14:50:08.062583 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-fsb77"] Oct 08 14:50:09 crc kubenswrapper[4624]: I1008 14:50:09.479241 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a5747e2-432a-4b1e-9a58-f65736759415" path="/var/lib/kubelet/pods/2a5747e2-432a-4b1e-9a58-f65736759415/volumes" Oct 08 14:50:09 crc kubenswrapper[4624]: I1008 14:50:09.481615 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c6409de-1f13-4742-8956-3a1aa3bb60c7" path="/var/lib/kubelet/pods/8c6409de-1f13-4742-8956-3a1aa3bb60c7/volumes" Oct 08 14:50:10 crc 
kubenswrapper[4624]: I1008 14:50:10.466862 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c" Oct 08 14:50:10 crc kubenswrapper[4624]: E1008 14:50:10.467177 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 14:50:13 crc kubenswrapper[4624]: I1008 14:50:13.032264 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-22cr4"] Oct 08 14:50:13 crc kubenswrapper[4624]: I1008 14:50:13.040362 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-22cr4"] Oct 08 14:50:13 crc kubenswrapper[4624]: I1008 14:50:13.502870 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="437a3bae-10d6-48a7-b3f1-28988464e615" path="/var/lib/kubelet/pods/437a3bae-10d6-48a7-b3f1-28988464e615/volumes" Oct 08 14:50:15 crc kubenswrapper[4624]: I1008 14:50:15.039721 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-9f4c-account-create-rskn9"] Oct 08 14:50:15 crc kubenswrapper[4624]: I1008 14:50:15.048221 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-df8b-account-create-brp6p"] Oct 08 14:50:15 crc kubenswrapper[4624]: I1008 14:50:15.057469 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-df8b-account-create-brp6p"] Oct 08 14:50:15 crc kubenswrapper[4624]: I1008 14:50:15.065609 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-9f4c-account-create-rskn9"] Oct 08 14:50:15 crc kubenswrapper[4624]: I1008 14:50:15.481602 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fa399e1-303c-4d95-be42-ea2eb2cd38b2" path="/var/lib/kubelet/pods/0fa399e1-303c-4d95-be42-ea2eb2cd38b2/volumes" Oct 08 14:50:15 crc kubenswrapper[4624]: I1008 14:50:15.484326 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="911ca055-f986-45c5-9e4c-887293b628f5" path="/var/lib/kubelet/pods/911ca055-f986-45c5-9e4c-887293b628f5/volumes" Oct 08 14:50:16 crc kubenswrapper[4624]: I1008 14:50:16.026724 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-66fe-account-create-l4qnn"] Oct 08 14:50:16 crc kubenswrapper[4624]: I1008 14:50:16.035750 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-66fe-account-create-l4qnn"] Oct 08 14:50:17 crc kubenswrapper[4624]: I1008 14:50:17.047450 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-611c-account-create-vzdj5"] Oct 08 14:50:17 crc kubenswrapper[4624]: I1008 14:50:17.058977 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-611c-account-create-vzdj5"] Oct 08 14:50:17 crc kubenswrapper[4624]: I1008 14:50:17.479500 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="313a7281-f294-44b0-be3f-f30d66b0470c" path="/var/lib/kubelet/pods/313a7281-f294-44b0-be3f-f30d66b0470c/volumes" Oct 08 14:50:17 crc kubenswrapper[4624]: I1008 14:50:17.481169 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5536330e-7457-4138-a4c4-c511f839b45b" 
path="/var/lib/kubelet/pods/5536330e-7457-4138-a4c4-c511f839b45b/volumes" Oct 08 14:50:18 crc kubenswrapper[4624]: I1008 14:50:18.032928 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7a5c-account-create-gvjwb"] Oct 08 14:50:18 crc kubenswrapper[4624]: I1008 14:50:18.045680 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7a5c-account-create-gvjwb"] Oct 08 14:50:19 crc kubenswrapper[4624]: I1008 14:50:19.477615 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6b50601-d82a-485f-972b-0846b5ff00a3" path="/var/lib/kubelet/pods/f6b50601-d82a-485f-972b-0846b5ff00a3/volumes" Oct 08 14:50:22 crc kubenswrapper[4624]: I1008 14:50:22.466282 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c" Oct 08 14:50:22 crc kubenswrapper[4624]: E1008 14:50:22.467181 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 14:50:27 crc kubenswrapper[4624]: I1008 14:50:27.031844 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-74e3-account-create-nrtpc"] Oct 08 14:50:27 crc kubenswrapper[4624]: I1008 14:50:27.051828 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-74e3-account-create-nrtpc"] Oct 08 14:50:27 crc kubenswrapper[4624]: I1008 14:50:27.477338 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3ffd4ca-b076-4b65-b46f-e60aa6a0d40c" path="/var/lib/kubelet/pods/c3ffd4ca-b076-4b65-b46f-e60aa6a0d40c/volumes" Oct 08 14:50:29 crc kubenswrapper[4624]: I1008 14:50:29.026763 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-1b7f-account-create-2cp9t"] Oct 08 14:50:29 crc kubenswrapper[4624]: I1008 14:50:29.034991 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-1b7f-account-create-2cp9t"] Oct 08 14:50:29 crc kubenswrapper[4624]: I1008 14:50:29.481317 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a5621c9-5bfe-41bc-bcbc-278553832e31" path="/var/lib/kubelet/pods/7a5621c9-5bfe-41bc-bcbc-278553832e31/volumes" Oct 08 14:50:33 crc kubenswrapper[4624]: I1008 14:50:33.030248 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-kzh2n"] Oct 08 14:50:33 crc kubenswrapper[4624]: I1008 14:50:33.039887 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-kzh2n"] Oct 08 14:50:33 crc kubenswrapper[4624]: I1008 14:50:33.527210 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c53d0a2-ccb4-43e8-8a32-e327f0062e46" path="/var/lib/kubelet/pods/8c53d0a2-ccb4-43e8-8a32-e327f0062e46/volumes" Oct 08 14:50:36 crc kubenswrapper[4624]: I1008 14:50:36.466451 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c" Oct 08 14:50:36 crc kubenswrapper[4624]: E1008 14:50:36.467983 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 14:50:39 crc kubenswrapper[4624]: I1008 14:50:39.498456 4624 scope.go:117] "RemoveContainer" containerID="39f012af7f8806e54388f8939710bdd3da801746c0b5bfe5c02626e0b7a1276b" Oct 08 14:50:39 crc kubenswrapper[4624]: I1008 14:50:39.527369 4624 scope.go:117] "RemoveContainer" containerID="25bf8d7723626d5e4fcccd5f63c9cbcfc263b7f17579ee59eed6f1739e4b9bbd" Oct 08 14:50:39 crc kubenswrapper[4624]: I1008 14:50:39.579291 4624 scope.go:117] "RemoveContainer" containerID="c495212cee53eaee149dc7e22847022abb8125306fedbaf12617a4ee7b7713e2" Oct 08 14:50:39 crc kubenswrapper[4624]: I1008 14:50:39.625154 4624 scope.go:117] "RemoveContainer" containerID="f5dda5876a71f2a0f49ab717a44bdd894ae30d85d1693a901af8b679ef7da61b" Oct 08 14:50:39 crc kubenswrapper[4624]: I1008 14:50:39.667104 4624 scope.go:117] "RemoveContainer" containerID="50a930ad5cabc8683b9e105db383d823b914f559a1e5f9ff390d060faba970b4" Oct 08 14:50:39 crc kubenswrapper[4624]: I1008 14:50:39.716394 4624 scope.go:117] "RemoveContainer" containerID="5972ddb9633aae76b68bf6e18b4cf189ee2e8b5d22f228f9231492c1672b62a6" Oct 08 14:50:39 crc kubenswrapper[4624]: I1008 14:50:39.753893 4624 scope.go:117] "RemoveContainer" containerID="bd07234c85207270617760fc40256f4f85567d95f3083ad3cdc0c7e2cb7bbf5c" Oct 08 14:50:39 crc kubenswrapper[4624]: I1008 14:50:39.779567 4624 scope.go:117] "RemoveContainer" containerID="18af941d3f8852df34b84acbb562a097ee5d40126ad0289aad7d62bb72069e1f" Oct 08 14:50:39 crc kubenswrapper[4624]: I1008 14:50:39.799948 4624 scope.go:117] "RemoveContainer" containerID="d14385dae5a640de6514fa96bdea43d5442497114f44ba80cb5f671931df8294" Oct 08 14:50:39 crc kubenswrapper[4624]: I1008 14:50:39.975211 4624 scope.go:117] "RemoveContainer" containerID="a08c5a9c2018aa0d54b5653835390b457a45df63fc27622ed699c7b67756d9c8" Oct 08 14:50:39 crc kubenswrapper[4624]: I1008 14:50:39.999091 4624 scope.go:117] "RemoveContainer" containerID="f1fe09702bb1166a4ad7e33b7ae1d61ad4fa04fa8294451c7912505dbf61d33c" Oct 08 14:50:40 crc kubenswrapper[4624]: I1008 14:50:40.024462 4624 scope.go:117] "RemoveContainer" containerID="1f4dc9a3693927fa09aa9e5e82a1bb606ceb63ac4ad39ee637b4a2a25dec7207" Oct 08 14:50:40 crc kubenswrapper[4624]: I1008 14:50:40.050893 4624 scope.go:117] "RemoveContainer" containerID="6e5c7b31db779b33f078cd1400bcee9d48e55d7b8e1329467adc1665529e7de0" Oct 08 14:50:40 crc kubenswrapper[4624]: I1008 14:50:40.078465 4624 scope.go:117] "RemoveContainer" containerID="18295076c5711c0f9bc231bd7399aef397c237b2f274087f8895fa7c1af03fed" Oct 08 14:50:40 crc kubenswrapper[4624]: I1008 14:50:40.105575 4624 scope.go:117] "RemoveContainer" containerID="e0f79010ed21da611a62ead15c25bef52bd48612e32360a74025568a44da884d" Oct 08 14:50:40 crc kubenswrapper[4624]: I1008 14:50:40.132312 4624 scope.go:117] "RemoveContainer" containerID="cdcddc55c7b85eb86641f9732aacc969299ac06c416c2ecbfb2d521bac89c650" Oct 08 14:50:50 crc kubenswrapper[4624]: I1008 14:50:50.465426 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c" Oct 08 14:50:50 crc kubenswrapper[4624]: E1008 14:50:50.466211 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 14:51:01 crc kubenswrapper[4624]: I1008 14:51:01.466170 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c" Oct 08 14:51:01 crc kubenswrapper[4624]: E1008 14:51:01.467166 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 14:51:12 crc kubenswrapper[4624]: I1008 14:51:12.815280 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lkmzx"] Oct 08 14:51:12 crc kubenswrapper[4624]: I1008 14:51:12.817799 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lkmzx" Oct 08 14:51:12 crc kubenswrapper[4624]: I1008 14:51:12.909912 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a151b424-a275-4087-9112-d66471d519cd-utilities\") pod \"certified-operators-lkmzx\" (UID: \"a151b424-a275-4087-9112-d66471d519cd\") " pod="openshift-marketplace/certified-operators-lkmzx" Oct 08 14:51:12 crc kubenswrapper[4624]: I1008 14:51:12.910048 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnkbr\" (UniqueName: \"kubernetes.io/projected/a151b424-a275-4087-9112-d66471d519cd-kube-api-access-hnkbr\") pod \"certified-operators-lkmzx\" (UID: \"a151b424-a275-4087-9112-d66471d519cd\") " pod="openshift-marketplace/certified-operators-lkmzx" Oct 08 14:51:12 crc kubenswrapper[4624]: I1008 14:51:12.910114 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a151b424-a275-4087-9112-d66471d519cd-catalog-content\") pod \"certified-operators-lkmzx\" (UID: \"a151b424-a275-4087-9112-d66471d519cd\") " pod="openshift-marketplace/certified-operators-lkmzx" Oct 08 14:51:12 crc kubenswrapper[4624]: I1008 14:51:12.942126 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lkmzx"] Oct 08 14:51:13 crc kubenswrapper[4624]: I1008 14:51:13.012078 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnkbr\" (UniqueName: \"kubernetes.io/projected/a151b424-a275-4087-9112-d66471d519cd-kube-api-access-hnkbr\") pod \"certified-operators-lkmzx\" (UID: \"a151b424-a275-4087-9112-d66471d519cd\") " pod="openshift-marketplace/certified-operators-lkmzx" Oct 08 14:51:13 crc kubenswrapper[4624]: I1008 14:51:13.012157 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a151b424-a275-4087-9112-d66471d519cd-catalog-content\") pod \"certified-operators-lkmzx\" (UID: \"a151b424-a275-4087-9112-d66471d519cd\") " pod="openshift-marketplace/certified-operators-lkmzx" Oct 08 14:51:13 crc kubenswrapper[4624]: I1008 14:51:13.012240 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a151b424-a275-4087-9112-d66471d519cd-utilities\") pod \"certified-operators-lkmzx\" (UID: \"a151b424-a275-4087-9112-d66471d519cd\") " pod="openshift-marketplace/certified-operators-lkmzx"
Oct 08 14:51:13 crc kubenswrapper[4624]: I1008 14:51:13.012907 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a151b424-a275-4087-9112-d66471d519cd-utilities\") pod \"certified-operators-lkmzx\" (UID: \"a151b424-a275-4087-9112-d66471d519cd\") " pod="openshift-marketplace/certified-operators-lkmzx"
Oct 08 14:51:13 crc kubenswrapper[4624]: I1008 14:51:13.013192 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a151b424-a275-4087-9112-d66471d519cd-catalog-content\") pod \"certified-operators-lkmzx\" (UID: \"a151b424-a275-4087-9112-d66471d519cd\") " pod="openshift-marketplace/certified-operators-lkmzx"
Oct 08 14:51:13 crc kubenswrapper[4624]: I1008 14:51:13.041419 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnkbr\" (UniqueName: \"kubernetes.io/projected/a151b424-a275-4087-9112-d66471d519cd-kube-api-access-hnkbr\") pod \"certified-operators-lkmzx\" (UID: \"a151b424-a275-4087-9112-d66471d519cd\") " pod="openshift-marketplace/certified-operators-lkmzx"
Oct 08 14:51:13 crc kubenswrapper[4624]: I1008 14:51:13.135330 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lkmzx"
Oct 08 14:51:13 crc kubenswrapper[4624]: I1008 14:51:13.632079 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lkmzx"]
Oct 08 14:51:13 crc kubenswrapper[4624]: I1008 14:51:13.736661 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkmzx" event={"ID":"a151b424-a275-4087-9112-d66471d519cd","Type":"ContainerStarted","Data":"cc112f837dd4b0f5a3118457f5ebb7606b81e20dd6aa7d95a45f2af2a080b6f4"}
Oct 08 14:51:14 crc kubenswrapper[4624]: I1008 14:51:14.747221 4624 generic.go:334] "Generic (PLEG): container finished" podID="a151b424-a275-4087-9112-d66471d519cd" containerID="cab6c34fb7a8118f73ac22cb5b80414de06253a05c76f76d7e5992de4d386dd8" exitCode=0
Oct 08 14:51:14 crc kubenswrapper[4624]: I1008 14:51:14.747317 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkmzx" event={"ID":"a151b424-a275-4087-9112-d66471d519cd","Type":"ContainerDied","Data":"cab6c34fb7a8118f73ac22cb5b80414de06253a05c76f76d7e5992de4d386dd8"}
Oct 08 14:51:14 crc kubenswrapper[4624]: I1008 14:51:14.752705 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 08 14:51:15 crc kubenswrapper[4624]: I1008 14:51:15.759587 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkmzx" event={"ID":"a151b424-a275-4087-9112-d66471d519cd","Type":"ContainerStarted","Data":"237c87217660f28e0949fafd7ca41b19d7a49da86cb0efd2c86f2a281d19f5e8"}
Oct 08 14:51:16 crc kubenswrapper[4624]: I1008 14:51:16.466889 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c"
Oct 08 14:51:16 crc kubenswrapper[4624]: E1008 14:51:16.467833 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 14:51:17 crc kubenswrapper[4624]: I1008 14:51:17.778906 4624 generic.go:334] "Generic (PLEG): container finished" podID="a151b424-a275-4087-9112-d66471d519cd" containerID="237c87217660f28e0949fafd7ca41b19d7a49da86cb0efd2c86f2a281d19f5e8" exitCode=0
Oct 08 14:51:17 crc kubenswrapper[4624]: I1008 14:51:17.778956 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkmzx" event={"ID":"a151b424-a275-4087-9112-d66471d519cd","Type":"ContainerDied","Data":"237c87217660f28e0949fafd7ca41b19d7a49da86cb0efd2c86f2a281d19f5e8"}
Oct 08 14:51:18 crc kubenswrapper[4624]: I1008 14:51:18.790781 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkmzx" event={"ID":"a151b424-a275-4087-9112-d66471d519cd","Type":"ContainerStarted","Data":"c21ccbfb5f572f765453f7ec2c9180e62c3c813f521f7653c70d2b4e35b4c91b"}
Oct 08 14:51:18 crc kubenswrapper[4624]: I1008 14:51:18.812202 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lkmzx" podStartSLOduration=3.155032294 podStartE2EDuration="6.812179674s" podCreationTimestamp="2025-10-08 14:51:12 +0000 UTC" firstStartedPulling="2025-10-08 14:51:14.749074462 +0000 UTC m=+1699.900009539" lastFinishedPulling="2025-10-08 14:51:18.406221842 +0000 UTC m=+1703.557156919" observedRunningTime="2025-10-08 14:51:18.809384412 +0000 UTC m=+1703.960319489" watchObservedRunningTime="2025-10-08 14:51:18.812179674 +0000 UTC m=+1703.963114751"
Oct 08 14:51:23 crc kubenswrapper[4624]: I1008 14:51:23.135957 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lkmzx"
Oct 08 14:51:23 crc kubenswrapper[4624]: I1008 14:51:23.136541 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lkmzx"
Oct 08 14:51:23 crc kubenswrapper[4624]: I1008 14:51:23.192049 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lkmzx"
Oct 08 14:51:23 crc kubenswrapper[4624]: I1008 14:51:23.887055 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lkmzx"
Oct 08 14:51:24 crc kubenswrapper[4624]: I1008 14:51:24.385870 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lkmzx"]
Oct 08 14:51:25 crc kubenswrapper[4624]: I1008 14:51:25.851262 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lkmzx" podUID="a151b424-a275-4087-9112-d66471d519cd" containerName="registry-server" containerID="cri-o://c21ccbfb5f572f765453f7ec2c9180e62c3c813f521f7653c70d2b4e35b4c91b" gracePeriod=2
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.513168 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lkmzx"
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.619260 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a151b424-a275-4087-9112-d66471d519cd-utilities\") pod \"a151b424-a275-4087-9112-d66471d519cd\" (UID: \"a151b424-a275-4087-9112-d66471d519cd\") "
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.619439 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a151b424-a275-4087-9112-d66471d519cd-catalog-content\") pod \"a151b424-a275-4087-9112-d66471d519cd\" (UID: \"a151b424-a275-4087-9112-d66471d519cd\") "
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.619474 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnkbr\" (UniqueName: \"kubernetes.io/projected/a151b424-a275-4087-9112-d66471d519cd-kube-api-access-hnkbr\") pod \"a151b424-a275-4087-9112-d66471d519cd\" (UID: \"a151b424-a275-4087-9112-d66471d519cd\") "
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.620226 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a151b424-a275-4087-9112-d66471d519cd-utilities" (OuterVolumeSpecName: "utilities") pod "a151b424-a275-4087-9112-d66471d519cd" (UID: "a151b424-a275-4087-9112-d66471d519cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.640423 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a151b424-a275-4087-9112-d66471d519cd-kube-api-access-hnkbr" (OuterVolumeSpecName: "kube-api-access-hnkbr") pod "a151b424-a275-4087-9112-d66471d519cd" (UID: "a151b424-a275-4087-9112-d66471d519cd"). InnerVolumeSpecName "kube-api-access-hnkbr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.688520 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a151b424-a275-4087-9112-d66471d519cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a151b424-a275-4087-9112-d66471d519cd" (UID: "a151b424-a275-4087-9112-d66471d519cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.723053 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a151b424-a275-4087-9112-d66471d519cd-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.723094 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnkbr\" (UniqueName: \"kubernetes.io/projected/a151b424-a275-4087-9112-d66471d519cd-kube-api-access-hnkbr\") on node \"crc\" DevicePath \"\""
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.723111 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a151b424-a275-4087-9112-d66471d519cd-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.860466 4624 generic.go:334] "Generic (PLEG): container finished" podID="a151b424-a275-4087-9112-d66471d519cd" containerID="c21ccbfb5f572f765453f7ec2c9180e62c3c813f521f7653c70d2b4e35b4c91b" exitCode=0
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.860513 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkmzx" event={"ID":"a151b424-a275-4087-9112-d66471d519cd","Type":"ContainerDied","Data":"c21ccbfb5f572f765453f7ec2c9180e62c3c813f521f7653c70d2b4e35b4c91b"}
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.860543 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkmzx" event={"ID":"a151b424-a275-4087-9112-d66471d519cd","Type":"ContainerDied","Data":"cc112f837dd4b0f5a3118457f5ebb7606b81e20dd6aa7d95a45f2af2a080b6f4"}
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.860547 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lkmzx"
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.860561 4624 scope.go:117] "RemoveContainer" containerID="c21ccbfb5f572f765453f7ec2c9180e62c3c813f521f7653c70d2b4e35b4c91b"
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.885571 4624 scope.go:117] "RemoveContainer" containerID="237c87217660f28e0949fafd7ca41b19d7a49da86cb0efd2c86f2a281d19f5e8"
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.898710 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lkmzx"]
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.909199 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lkmzx"]
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.919792 4624 scope.go:117] "RemoveContainer" containerID="cab6c34fb7a8118f73ac22cb5b80414de06253a05c76f76d7e5992de4d386dd8"
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.969991 4624 scope.go:117] "RemoveContainer" containerID="c21ccbfb5f572f765453f7ec2c9180e62c3c813f521f7653c70d2b4e35b4c91b"
Oct 08 14:51:26 crc kubenswrapper[4624]: E1008 14:51:26.970587 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c21ccbfb5f572f765453f7ec2c9180e62c3c813f521f7653c70d2b4e35b4c91b\": container with ID starting with c21ccbfb5f572f765453f7ec2c9180e62c3c813f521f7653c70d2b4e35b4c91b not found: ID does not exist" containerID="c21ccbfb5f572f765453f7ec2c9180e62c3c813f521f7653c70d2b4e35b4c91b"
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.970627 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c21ccbfb5f572f765453f7ec2c9180e62c3c813f521f7653c70d2b4e35b4c91b"} err="failed to get container status \"c21ccbfb5f572f765453f7ec2c9180e62c3c813f521f7653c70d2b4e35b4c91b\": rpc error: code = NotFound desc = could not find container \"c21ccbfb5f572f765453f7ec2c9180e62c3c813f521f7653c70d2b4e35b4c91b\": container with ID starting with c21ccbfb5f572f765453f7ec2c9180e62c3c813f521f7653c70d2b4e35b4c91b not found: ID does not exist"
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.970670 4624 scope.go:117] "RemoveContainer" containerID="237c87217660f28e0949fafd7ca41b19d7a49da86cb0efd2c86f2a281d19f5e8"
Oct 08 14:51:26 crc kubenswrapper[4624]: E1008 14:51:26.970892 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"237c87217660f28e0949fafd7ca41b19d7a49da86cb0efd2c86f2a281d19f5e8\": container with ID starting with 237c87217660f28e0949fafd7ca41b19d7a49da86cb0efd2c86f2a281d19f5e8 not found: ID does not exist" containerID="237c87217660f28e0949fafd7ca41b19d7a49da86cb0efd2c86f2a281d19f5e8"
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.970918 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"237c87217660f28e0949fafd7ca41b19d7a49da86cb0efd2c86f2a281d19f5e8"} err="failed to get container status \"237c87217660f28e0949fafd7ca41b19d7a49da86cb0efd2c86f2a281d19f5e8\": rpc error: code = NotFound desc = could not find container \"237c87217660f28e0949fafd7ca41b19d7a49da86cb0efd2c86f2a281d19f5e8\": container with ID starting with 237c87217660f28e0949fafd7ca41b19d7a49da86cb0efd2c86f2a281d19f5e8 not found: ID does not exist"
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.970940 4624 scope.go:117] "RemoveContainer" containerID="cab6c34fb7a8118f73ac22cb5b80414de06253a05c76f76d7e5992de4d386dd8"
Oct 08 14:51:26 crc kubenswrapper[4624]: E1008 14:51:26.971173 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cab6c34fb7a8118f73ac22cb5b80414de06253a05c76f76d7e5992de4d386dd8\": container with ID starting with cab6c34fb7a8118f73ac22cb5b80414de06253a05c76f76d7e5992de4d386dd8 not found: ID does not exist" containerID="cab6c34fb7a8118f73ac22cb5b80414de06253a05c76f76d7e5992de4d386dd8"
Oct 08 14:51:26 crc kubenswrapper[4624]: I1008 14:51:26.971195 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cab6c34fb7a8118f73ac22cb5b80414de06253a05c76f76d7e5992de4d386dd8"} err="failed to get container status \"cab6c34fb7a8118f73ac22cb5b80414de06253a05c76f76d7e5992de4d386dd8\": rpc error: code = NotFound desc = could not find container \"cab6c34fb7a8118f73ac22cb5b80414de06253a05c76f76d7e5992de4d386dd8\": container with ID starting with cab6c34fb7a8118f73ac22cb5b80414de06253a05c76f76d7e5992de4d386dd8 not found: ID does not exist"
Oct 08 14:51:27 crc kubenswrapper[4624]: I1008 14:51:27.465729 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c"
Oct 08 14:51:27 crc kubenswrapper[4624]: E1008 14:51:27.466304 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 14:51:27 crc kubenswrapper[4624]: I1008 14:51:27.476887 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a151b424-a275-4087-9112-d66471d519cd" path="/var/lib/kubelet/pods/a151b424-a275-4087-9112-d66471d519cd/volumes"
Oct 08 14:51:36 crc kubenswrapper[4624]: I1008 14:51:36.948153 4624 generic.go:334] "Generic (PLEG): container finished" podID="f0b620ad-209a-49a8-90cd-f4780a2565a3" containerID="85fbd5935494b09f9f9671d31cb3ffadd4d27321e30d2669f08de183f75cc3a6" exitCode=0
Oct 08 14:51:36 crc kubenswrapper[4624]: I1008 14:51:36.948239 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf" event={"ID":"f0b620ad-209a-49a8-90cd-f4780a2565a3","Type":"ContainerDied","Data":"85fbd5935494b09f9f9671d31cb3ffadd4d27321e30d2669f08de183f75cc3a6"}
Oct 08 14:51:38 crc kubenswrapper[4624]: I1008 14:51:38.423724 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf"
Oct 08 14:51:38 crc kubenswrapper[4624]: I1008 14:51:38.593445 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsgsn\" (UniqueName: \"kubernetes.io/projected/f0b620ad-209a-49a8-90cd-f4780a2565a3-kube-api-access-fsgsn\") pod \"f0b620ad-209a-49a8-90cd-f4780a2565a3\" (UID: \"f0b620ad-209a-49a8-90cd-f4780a2565a3\") "
Oct 08 14:51:38 crc kubenswrapper[4624]: I1008 14:51:38.593565 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b620ad-209a-49a8-90cd-f4780a2565a3-bootstrap-combined-ca-bundle\") pod \"f0b620ad-209a-49a8-90cd-f4780a2565a3\" (UID: \"f0b620ad-209a-49a8-90cd-f4780a2565a3\") "
Oct 08 14:51:38 crc kubenswrapper[4624]: I1008 14:51:38.593680 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f0b620ad-209a-49a8-90cd-f4780a2565a3-ssh-key\") pod \"f0b620ad-209a-49a8-90cd-f4780a2565a3\" (UID: \"f0b620ad-209a-49a8-90cd-f4780a2565a3\") "
Oct 08 14:51:38 crc kubenswrapper[4624]: I1008 14:51:38.593737 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0b620ad-209a-49a8-90cd-f4780a2565a3-inventory\") pod \"f0b620ad-209a-49a8-90cd-f4780a2565a3\" (UID: \"f0b620ad-209a-49a8-90cd-f4780a2565a3\") "
Oct 08 14:51:38 crc kubenswrapper[4624]: I1008 14:51:38.600445 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b620ad-209a-49a8-90cd-f4780a2565a3-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f0b620ad-209a-49a8-90cd-f4780a2565a3" (UID: "f0b620ad-209a-49a8-90cd-f4780a2565a3"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:51:38 crc kubenswrapper[4624]: I1008 14:51:38.606961 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0b620ad-209a-49a8-90cd-f4780a2565a3-kube-api-access-fsgsn" (OuterVolumeSpecName: "kube-api-access-fsgsn") pod "f0b620ad-209a-49a8-90cd-f4780a2565a3" (UID: "f0b620ad-209a-49a8-90cd-f4780a2565a3"). InnerVolumeSpecName "kube-api-access-fsgsn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:51:38 crc kubenswrapper[4624]: I1008 14:51:38.626492 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b620ad-209a-49a8-90cd-f4780a2565a3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f0b620ad-209a-49a8-90cd-f4780a2565a3" (UID: "f0b620ad-209a-49a8-90cd-f4780a2565a3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:51:38 crc kubenswrapper[4624]: I1008 14:51:38.628489 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b620ad-209a-49a8-90cd-f4780a2565a3-inventory" (OuterVolumeSpecName: "inventory") pod "f0b620ad-209a-49a8-90cd-f4780a2565a3" (UID: "f0b620ad-209a-49a8-90cd-f4780a2565a3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:51:38 crc kubenswrapper[4624]: I1008 14:51:38.697413 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsgsn\" (UniqueName: \"kubernetes.io/projected/f0b620ad-209a-49a8-90cd-f4780a2565a3-kube-api-access-fsgsn\") on node \"crc\" DevicePath \"\""
Oct 08 14:51:38 crc kubenswrapper[4624]: I1008 14:51:38.697449 4624 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b620ad-209a-49a8-90cd-f4780a2565a3-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 14:51:38 crc kubenswrapper[4624]: I1008 14:51:38.697461 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f0b620ad-209a-49a8-90cd-f4780a2565a3-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 08 14:51:38 crc kubenswrapper[4624]: I1008 14:51:38.697472 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0b620ad-209a-49a8-90cd-f4780a2565a3-inventory\") on node \"crc\" DevicePath \"\""
Oct 08 14:51:38 crc kubenswrapper[4624]: I1008 14:51:38.969755 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf" event={"ID":"f0b620ad-209a-49a8-90cd-f4780a2565a3","Type":"ContainerDied","Data":"4edc8d09c588cea7b88dfa3719f7a350c952184365ce49e14494f1620c0f780d"}
Oct 08 14:51:38 crc kubenswrapper[4624]: I1008 14:51:38.970186 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4edc8d09c588cea7b88dfa3719f7a350c952184365ce49e14494f1620c0f780d"
Oct 08 14:51:38 crc kubenswrapper[4624]: I1008 14:51:38.969798 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.124537 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8msh"]
Oct 08 14:51:39 crc kubenswrapper[4624]: E1008 14:51:39.126197 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a151b424-a275-4087-9112-d66471d519cd" containerName="extract-content"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.126227 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a151b424-a275-4087-9112-d66471d519cd" containerName="extract-content"
Oct 08 14:51:39 crc kubenswrapper[4624]: E1008 14:51:39.126263 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a151b424-a275-4087-9112-d66471d519cd" containerName="registry-server"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.126273 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a151b424-a275-4087-9112-d66471d519cd" containerName="registry-server"
Oct 08 14:51:39 crc kubenswrapper[4624]: E1008 14:51:39.126294 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0b620ad-209a-49a8-90cd-f4780a2565a3" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.126305 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0b620ad-209a-49a8-90cd-f4780a2565a3" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Oct 08 14:51:39 crc kubenswrapper[4624]: E1008 14:51:39.126329 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a151b424-a275-4087-9112-d66471d519cd" containerName="extract-utilities"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.126338 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a151b424-a275-4087-9112-d66471d519cd" containerName="extract-utilities"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.126832 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0b620ad-209a-49a8-90cd-f4780a2565a3" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.130724 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="a151b424-a275-4087-9112-d66471d519cd" containerName="registry-server"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.132593 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8msh"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.137786 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.138099 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.138204 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.138337 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v6sxt"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.144077 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8msh"]
Oct 08 14:51:39 crc kubenswrapper[4624]: E1008 14:51:39.185822 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0b620ad_209a_49a8_90cd_f4780a2565a3.slice/crio-4edc8d09c588cea7b88dfa3719f7a350c952184365ce49e14494f1620c0f780d\": RecentStats: unable to find data in memory cache]"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.315205 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b35b91af-b986-47f5-a444-bc20763e34ed-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z8msh\" (UID: \"b35b91af-b986-47f5-a444-bc20763e34ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8msh"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.315606 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b35b91af-b986-47f5-a444-bc20763e34ed-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z8msh\" (UID: \"b35b91af-b986-47f5-a444-bc20763e34ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8msh"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.315741 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7gbc\" (UniqueName: \"kubernetes.io/projected/b35b91af-b986-47f5-a444-bc20763e34ed-kube-api-access-p7gbc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z8msh\" (UID: \"b35b91af-b986-47f5-a444-bc20763e34ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8msh"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.417838 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b35b91af-b986-47f5-a444-bc20763e34ed-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z8msh\" (UID: \"b35b91af-b986-47f5-a444-bc20763e34ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8msh"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.417950 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b35b91af-b986-47f5-a444-bc20763e34ed-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z8msh\" (UID: \"b35b91af-b986-47f5-a444-bc20763e34ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8msh"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.418104 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7gbc\" (UniqueName: \"kubernetes.io/projected/b35b91af-b986-47f5-a444-bc20763e34ed-kube-api-access-p7gbc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z8msh\" (UID: \"b35b91af-b986-47f5-a444-bc20763e34ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8msh"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.425920 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b35b91af-b986-47f5-a444-bc20763e34ed-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z8msh\" (UID: \"b35b91af-b986-47f5-a444-bc20763e34ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8msh"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.426486 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b35b91af-b986-47f5-a444-bc20763e34ed-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z8msh\" (UID: \"b35b91af-b986-47f5-a444-bc20763e34ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8msh"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.437390 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7gbc\" (UniqueName: \"kubernetes.io/projected/b35b91af-b986-47f5-a444-bc20763e34ed-kube-api-access-p7gbc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-z8msh\" (UID: \"b35b91af-b986-47f5-a444-bc20763e34ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8msh"
Oct 08 14:51:39 crc kubenswrapper[4624]: I1008 14:51:39.457275 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8msh"
Oct 08 14:51:40 crc kubenswrapper[4624]: I1008 14:51:40.066128 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8msh"]
Oct 08 14:51:40 crc kubenswrapper[4624]: I1008 14:51:40.992726 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8msh" event={"ID":"b35b91af-b986-47f5-a444-bc20763e34ed","Type":"ContainerStarted","Data":"d77a38bb10f9820a4dd177bebb6d98e1decf07671a67f65b6277ddd128ec6336"}
Oct 08 14:51:41 crc kubenswrapper[4624]: I1008 14:51:41.044799 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-z5qqs"]
Oct 08 14:51:41 crc kubenswrapper[4624]: I1008 14:51:41.053393 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-z5qqs"]
Oct 08 14:51:41 crc kubenswrapper[4624]: I1008 14:51:41.466453 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c"
Oct 08 14:51:41 crc kubenswrapper[4624]: E1008 14:51:41.467060 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 14:51:41 crc kubenswrapper[4624]: I1008 14:51:41.480761 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="791f20b1-069c-4d9d-b8ed-eb1330a191f8" path="/var/lib/kubelet/pods/791f20b1-069c-4d9d-b8ed-eb1330a191f8/volumes"
Oct 08 14:51:42 crc kubenswrapper[4624]: I1008 14:51:42.003838 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8msh" event={"ID":"b35b91af-b986-47f5-a444-bc20763e34ed","Type":"ContainerStarted","Data":"7c2a67772ce8bdc25e8619b86a23cde5356446475733a932cf8bc16633361003"}
Oct 08 14:51:42 crc kubenswrapper[4624]: I1008 14:51:42.026897 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8msh" podStartSLOduration=2.375710676 podStartE2EDuration="3.026866173s" podCreationTimestamp="2025-10-08 14:51:39 +0000 UTC" firstStartedPulling="2025-10-08 14:51:40.075275959 +0000 UTC m=+1725.226211036" lastFinishedPulling="2025-10-08 14:51:40.726431456 +0000 UTC m=+1725.877366533" observedRunningTime="2025-10-08 14:51:42.024026531 +0000 UTC m=+1727.174961618" watchObservedRunningTime="2025-10-08 14:51:42.026866173 +0000 UTC m=+1727.177801250"
Oct 08 14:51:44 crc kubenswrapper[4624]: I1008 14:51:44.040062 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-lfgwx"]
Oct 08 14:51:44 crc kubenswrapper[4624]: I1008 14:51:44.052977 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-lfgwx"]
Oct 08 14:51:45 crc kubenswrapper[4624]: I1008 14:51:45.032118 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-xkc5w"]
Oct 08 14:51:45 crc kubenswrapper[4624]: I1008 14:51:45.044036 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-xkc5w"]
Oct 08 14:51:45 crc kubenswrapper[4624]: I1008 14:51:45.476685 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d554826-ab8b-40f6-9be4-ad2949010968" path="/var/lib/kubelet/pods/1d554826-ab8b-40f6-9be4-ad2949010968/volumes"
Oct 08 14:51:45 crc kubenswrapper[4624]: I1008 14:51:45.479266 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0efe065-02ed-472d-b560-6ddfcee095c4" path="/var/lib/kubelet/pods/a0efe065-02ed-472d-b560-6ddfcee095c4/volumes"
Oct 08 14:51:49 crc kubenswrapper[4624]: I1008 14:51:49.034076 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-tg77p"]
Oct 08 14:51:49 crc kubenswrapper[4624]: I1008 14:51:49.047834 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-tg77p"]
Oct 08 14:51:49 crc kubenswrapper[4624]: I1008 14:51:49.480095 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7b93b6b-3915-45f1-9f70-b186b5e7ed31" path="/var/lib/kubelet/pods/c7b93b6b-3915-45f1-9f70-b186b5e7ed31/volumes"
Oct 08 14:51:55 crc kubenswrapper[4624]: I1008 14:51:55.465927 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c"
Oct 08 14:51:55 crc kubenswrapper[4624]: E1008 14:51:55.466684 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 14:52:10 crc kubenswrapper[4624]: I1008 14:52:10.465917 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c"
Oct 08 14:52:10 crc kubenswrapper[4624]: E1008 14:52:10.466824 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 14:52:11 crc kubenswrapper[4624]: I1008 14:52:11.030293 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-f6sz5"]
Oct 08 14:52:11 crc kubenswrapper[4624]: I1008 14:52:11.038323 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-hv6dc"]
Oct 08 14:52:11 crc kubenswrapper[4624]: I1008 14:52:11.046157 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-t4www"]
Oct 08 14:52:11 crc kubenswrapper[4624]: I1008 14:52:11.055321 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-f6sz5"]
Oct 08 14:52:11 crc kubenswrapper[4624]: I1008 14:52:11.064601 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-t4www"]
Oct 08 14:52:11 crc kubenswrapper[4624]: I1008 14:52:11.073903 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-hv6dc"]
Oct 08 14:52:11 crc kubenswrapper[4624]: I1008 14:52:11.479536 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="265c0058-98f4-4bcd-b413-8e3633ab56cd" path="/var/lib/kubelet/pods/265c0058-98f4-4bcd-b413-8e3633ab56cd/volumes"
Oct 08 14:52:11 crc kubenswrapper[4624]: I1008 14:52:11.481678 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7222761d-c17c-485d-a672-75d7921fbb20" path="/var/lib/kubelet/pods/7222761d-c17c-485d-a672-75d7921fbb20/volumes"
Oct 08 14:52:11 crc kubenswrapper[4624]: I1008 14:52:11.484520 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd53e103-7b25-4f61-a0f4-675ace133ab7" path="/var/lib/kubelet/pods/cd53e103-7b25-4f61-a0f4-675ace133ab7/volumes"
Oct 08 14:52:23 crc kubenswrapper[4624]: I1008 14:52:23.468698 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c"
Oct 08 14:52:23 crc kubenswrapper[4624]: E1008 14:52:23.469883 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 14:52:36 crc kubenswrapper[4624]: I1008 14:52:36.471222 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c"
Oct 08 14:52:36 crc kubenswrapper[4624]: E1008 14:52:36.472358 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 14:52:40 crc kubenswrapper[4624]: I1008 14:52:40.484088 4624 scope.go:117] "RemoveContainer" containerID="9a1f22539bbc52f0ddbbfd0cc3714ee557621e3905749c9563270ac682934197"
Oct 08 14:52:40 crc kubenswrapper[4624]: I1008 14:52:40.532603 4624 scope.go:117] "RemoveContainer" containerID="bfa5c132440552840d9c1f0caba4afb6694790cb6c8ae77b3891993bf5eaadd1"
Oct 08 14:52:40 crc kubenswrapper[4624]: I1008 14:52:40.570054 4624 scope.go:117] "RemoveContainer" containerID="ae360f24965e6b88c9e5bf73133b74fcc49d6e8fa04ad5e3ccea87515c728cdd"
Oct 08 14:52:40 crc kubenswrapper[4624]: I1008 14:52:40.655542 4624 scope.go:117] "RemoveContainer" containerID="cbeef4aaa873d131e498e5fb3c1570eac0ea1b83b1be4af86ca4727e8e2f03e3"
Oct 08 14:52:40 crc kubenswrapper[4624]: I1008 14:52:40.724523 4624 scope.go:117] "RemoveContainer" containerID="b74a347620894560b1e82eccb1b9bbc6d9147ead99e26945052b765fd61fe8bb"
Oct 08 14:52:40 crc kubenswrapper[4624]: I1008 14:52:40.774032 4624 scope.go:117] "RemoveContainer" containerID="c8dc6fa2e321c8d676e138cd68e8b4c3dc94c62c7e7783228df72b3f3ed310bf"
Oct 08 14:52:40 crc kubenswrapper[4624]: I1008 14:52:40.820650 4624 scope.go:117] "RemoveContainer" containerID="33e4a671d890907137a641308f4b91d42becd83533d10f46fe992a0b7073f5be"
Oct 08 14:52:48 crc kubenswrapper[4624]: I1008 14:52:48.466205 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c"
Oct 08 14:52:48 crc kubenswrapper[4624]: E1008 14:52:48.467129 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 14:53:03 crc kubenswrapper[4624]: I1008 14:53:03.467151 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c"
Oct 08 14:53:03 crc kubenswrapper[4624]: E1008 14:53:03.468081 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 14:53:17 crc kubenswrapper[4624]: I1008 14:53:17.466746 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c"
Oct 08 14:53:17 crc kubenswrapper[4624]: E1008 14:53:17.467487 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 14:53:19 crc kubenswrapper[4624]: I1008 14:53:19.041094 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-clh26"]
Oct 08 14:53:19 crc kubenswrapper[4624]: I1008 14:53:19.049875 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-clh26"]
Oct 08 14:53:19 crc kubenswrapper[4624]: I1008 14:53:19.476943 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e0de22c-50a2-4388-b13e-ff3935165e8b" path="/var/lib/kubelet/pods/8e0de22c-50a2-4388-b13e-ff3935165e8b/volumes"
Oct 08 14:53:20 crc kubenswrapper[4624]: I1008 14:53:20.031600 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-dcjsw"]
Oct 08 14:53:20 crc kubenswrapper[4624]: I1008 14:53:20.042522 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-dcjsw"]
Oct 08 14:53:20 crc kubenswrapper[4624]: I1008 14:53:20.052242 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-4cqc4"]
Oct 08 14:53:20 crc kubenswrapper[4624]: I1008 14:53:20.058914 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-4cqc4"]
Oct 08 14:53:21 crc kubenswrapper[4624]: I1008 14:53:21.476478 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fc96968-af4b-45f5-8d90-b4866b4029fe" path="/var/lib/kubelet/pods/4fc96968-af4b-45f5-8d90-b4866b4029fe/volumes"
Oct 08 14:53:21 crc kubenswrapper[4624]: I1008 14:53:21.477403 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbe86e77-ec3a-4807-9420-e402d309dc89" path="/var/lib/kubelet/pods/bbe86e77-ec3a-4807-9420-e402d309dc89/volumes"
Oct 08 14:53:28 crc kubenswrapper[4624]: I1008 14:53:28.897403 4624 generic.go:334] "Generic (PLEG): container finished" podID="b35b91af-b986-47f5-a444-bc20763e34ed" containerID="7c2a67772ce8bdc25e8619b86a23cde5356446475733a932cf8bc16633361003" exitCode=0
Oct 08 14:53:28 crc kubenswrapper[4624]: I1008 14:53:28.897434 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8msh" event={"ID":"b35b91af-b986-47f5-a444-bc20763e34ed","Type":"ContainerDied","Data":"7c2a67772ce8bdc25e8619b86a23cde5356446475733a932cf8bc16633361003"}
Oct 08 14:53:29 crc kubenswrapper[4624]: I1008 14:53:29.025522 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-4d3c-account-create-jcsvm"]
Oct 08 14:53:29 crc kubenswrapper[4624]: I1008 14:53:29.033053 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-4d3c-account-create-jcsvm"]
Oct 08 14:53:29 crc kubenswrapper[4624]: I1008 14:53:29.466437 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c"
Oct 08 14:53:29 crc kubenswrapper[4624]: E1008 14:53:29.466793 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 14:53:29 crc kubenswrapper[4624]: I1008 14:53:29.478277 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="348ff6b7-b99d-45ee-91a2-f680c29ae8f3" path="/var/lib/kubelet/pods/348ff6b7-b99d-45ee-91a2-f680c29ae8f3/volumes"
Oct 08 14:53:30 crc kubenswrapper[4624]: I1008 14:53:30.032540 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-077a-account-create-sfqj9"]
Oct 08 14:53:30 crc kubenswrapper[4624]: I1008 14:53:30.044466 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-653b-account-create-p9cxk"]
Oct 08 14:53:30 crc kubenswrapper[4624]: I1008 14:53:30.064142 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-077a-account-create-sfqj9"]
Oct 08 14:53:30 crc kubenswrapper[4624]: I1008 14:53:30.071796 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-653b-account-create-p9cxk"]
Oct 08 14:53:30 crc kubenswrapper[4624]: I1008 14:53:30.365886 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8msh"
Oct 08 14:53:30 crc kubenswrapper[4624]: I1008 14:53:30.394734 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7gbc\" (UniqueName: \"kubernetes.io/projected/b35b91af-b986-47f5-a444-bc20763e34ed-kube-api-access-p7gbc\") pod \"b35b91af-b986-47f5-a444-bc20763e34ed\" (UID: \"b35b91af-b986-47f5-a444-bc20763e34ed\") "
Oct 08 14:53:30 crc kubenswrapper[4624]: I1008 14:53:30.394790 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b35b91af-b986-47f5-a444-bc20763e34ed-inventory\") pod \"b35b91af-b986-47f5-a444-bc20763e34ed\" (UID: \"b35b91af-b986-47f5-a444-bc20763e34ed\") "
Oct 08 14:53:30 crc kubenswrapper[4624]: I1008 14:53:30.394968 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b35b91af-b986-47f5-a444-bc20763e34ed-ssh-key\") pod \"b35b91af-b986-47f5-a444-bc20763e34ed\" (UID: \"b35b91af-b986-47f5-a444-bc20763e34ed\") "
Oct 08 14:53:30 crc kubenswrapper[4624]: I1008 14:53:30.401550 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b35b91af-b986-47f5-a444-bc20763e34ed-kube-api-access-p7gbc" (OuterVolumeSpecName: "kube-api-access-p7gbc") pod "b35b91af-b986-47f5-a444-bc20763e34ed" (UID: "b35b91af-b986-47f5-a444-bc20763e34ed"). InnerVolumeSpecName "kube-api-access-p7gbc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:53:30 crc kubenswrapper[4624]: I1008 14:53:30.430609 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b35b91af-b986-47f5-a444-bc20763e34ed-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b35b91af-b986-47f5-a444-bc20763e34ed" (UID: "b35b91af-b986-47f5-a444-bc20763e34ed"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:53:30 crc kubenswrapper[4624]: I1008 14:53:30.430905 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b35b91af-b986-47f5-a444-bc20763e34ed-inventory" (OuterVolumeSpecName: "inventory") pod "b35b91af-b986-47f5-a444-bc20763e34ed" (UID: "b35b91af-b986-47f5-a444-bc20763e34ed"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:53:30 crc kubenswrapper[4624]: I1008 14:53:30.498262 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7gbc\" (UniqueName: \"kubernetes.io/projected/b35b91af-b986-47f5-a444-bc20763e34ed-kube-api-access-p7gbc\") on node \"crc\" DevicePath \"\""
Oct 08 14:53:30 crc kubenswrapper[4624]: I1008 14:53:30.498294 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b35b91af-b986-47f5-a444-bc20763e34ed-inventory\") on node \"crc\" DevicePath \"\""
Oct 08 14:53:30 crc kubenswrapper[4624]: I1008 14:53:30.498303 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b35b91af-b986-47f5-a444-bc20763e34ed-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 08 14:53:30 crc kubenswrapper[4624]: I1008 14:53:30.917508 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8msh" event={"ID":"b35b91af-b986-47f5-a444-bc20763e34ed","Type":"ContainerDied","Data":"d77a38bb10f9820a4dd177bebb6d98e1decf07671a67f65b6277ddd128ec6336"}
Oct 08 14:53:30 crc kubenswrapper[4624]: I1008 14:53:30.917862 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d77a38bb10f9820a4dd177bebb6d98e1decf07671a67f65b6277ddd128ec6336"
Oct 08 14:53:30 crc kubenswrapper[4624]: I1008 14:53:30.917829 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-z8msh"
Oct 08 14:53:31 crc kubenswrapper[4624]: I1008 14:53:31.005058 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl"]
Oct 08 14:53:31 crc kubenswrapper[4624]: E1008 14:53:31.005456 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b35b91af-b986-47f5-a444-bc20763e34ed" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Oct 08 14:53:31 crc kubenswrapper[4624]: I1008 14:53:31.005474 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="b35b91af-b986-47f5-a444-bc20763e34ed" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Oct 08 14:53:31 crc kubenswrapper[4624]: I1008 14:53:31.005689 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="b35b91af-b986-47f5-a444-bc20763e34ed" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Oct 08 14:53:31 crc kubenswrapper[4624]: I1008 14:53:31.006431 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl"
Oct 08 14:53:31 crc kubenswrapper[4624]: I1008 14:53:31.009893 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 08 14:53:31 crc kubenswrapper[4624]: I1008 14:53:31.010297 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Oct 08 14:53:31 crc kubenswrapper[4624]: I1008 14:53:31.010354 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v6sxt"
Oct 08 14:53:31 crc kubenswrapper[4624]: I1008 14:53:31.010499 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 08 14:53:31 crc kubenswrapper[4624]: I1008 14:53:31.026867 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl"]
Oct 08 14:53:31 crc kubenswrapper[4624]: I1008 14:53:31.110733 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/968537d8-1190-479e-a4cc-92054923d08a-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl\" (UID: \"968537d8-1190-479e-a4cc-92054923d08a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl"
Oct 08 14:53:31 crc kubenswrapper[4624]: I1008 14:53:31.110860 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/968537d8-1190-479e-a4cc-92054923d08a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl\" (UID: \"968537d8-1190-479e-a4cc-92054923d08a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl"
Oct 08 14:53:31 crc kubenswrapper[4624]: I1008 14:53:31.111137 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bl6m\" (UniqueName: \"kubernetes.io/projected/968537d8-1190-479e-a4cc-92054923d08a-kube-api-access-5bl6m\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl\" (UID: \"968537d8-1190-479e-a4cc-92054923d08a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl"
Oct 08 14:53:31 crc kubenswrapper[4624]: I1008 14:53:31.212935 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bl6m\" (UniqueName: \"kubernetes.io/projected/968537d8-1190-479e-a4cc-92054923d08a-kube-api-access-5bl6m\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl\" (UID: \"968537d8-1190-479e-a4cc-92054923d08a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl"
Oct 08 14:53:31 crc kubenswrapper[4624]: I1008 14:53:31.213056 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/968537d8-1190-479e-a4cc-92054923d08a-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl\" (UID: \"968537d8-1190-479e-a4cc-92054923d08a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl"
Oct 08 14:53:31 crc kubenswrapper[4624]: I1008 14:53:31.213115 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/968537d8-1190-479e-a4cc-92054923d08a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl\" (UID: \"968537d8-1190-479e-a4cc-92054923d08a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl"
Oct 08 14:53:31 crc kubenswrapper[4624]: I1008 14:53:31.224899 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/968537d8-1190-479e-a4cc-92054923d08a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl\" (UID: \"968537d8-1190-479e-a4cc-92054923d08a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl"
Oct 08 14:53:31 crc kubenswrapper[4624]: I1008 14:53:31.228377 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/968537d8-1190-479e-a4cc-92054923d08a-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl\" (UID: \"968537d8-1190-479e-a4cc-92054923d08a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl"
Oct 08 14:53:31 crc kubenswrapper[4624]: I1008 14:53:31.242305 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bl6m\" (UniqueName: \"kubernetes.io/projected/968537d8-1190-479e-a4cc-92054923d08a-kube-api-access-5bl6m\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl\" (UID: \"968537d8-1190-479e-a4cc-92054923d08a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl"
Oct 08 14:53:31 crc kubenswrapper[4624]: I1008 14:53:31.325990 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl"
Oct 08 14:53:31 crc kubenswrapper[4624]: I1008 14:53:31.484285 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0191b5d5-f9f7-49b3-8732-a35447adf088" path="/var/lib/kubelet/pods/0191b5d5-f9f7-49b3-8732-a35447adf088/volumes"
Oct 08 14:53:31 crc kubenswrapper[4624]: I1008 14:53:31.485592 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f36a442-e41e-472f-9c08-bc93c70c4fb5" path="/var/lib/kubelet/pods/2f36a442-e41e-472f-9c08-bc93c70c4fb5/volumes"
Oct 08 14:53:31 crc kubenswrapper[4624]: I1008 14:53:31.940606 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl"]
Oct 08 14:53:32 crc kubenswrapper[4624]: I1008 14:53:32.941919 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl" event={"ID":"968537d8-1190-479e-a4cc-92054923d08a","Type":"ContainerStarted","Data":"f07b1a260b20c97d7b4e314d79697fa875fc6818d964564d900804351aeb1f49"}
Oct 08 14:53:32 crc kubenswrapper[4624]: I1008 14:53:32.942556 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl" event={"ID":"968537d8-1190-479e-a4cc-92054923d08a","Type":"ContainerStarted","Data":"f959490b4dfdf0451ff4a8b63192aadb3696bf216c7d534f4b424fcb8697cccd"}
Oct 08 14:53:32 crc kubenswrapper[4624]: I1008 14:53:32.960199 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl" podStartSLOduration=2.396617784 podStartE2EDuration="2.96017934s" podCreationTimestamp="2025-10-08 14:53:30 +0000 UTC" firstStartedPulling="2025-10-08 14:53:31.955479278 +0000 UTC m=+1837.106414355" lastFinishedPulling="2025-10-08 14:53:32.519040834 +0000 UTC m=+1837.669975911" observedRunningTime="2025-10-08 14:53:32.959840751 +0000 UTC m=+1838.110775828" watchObservedRunningTime="2025-10-08 14:53:32.96017934 +0000 UTC m=+1838.111114437"
Oct 08 14:53:40 crc kubenswrapper[4624]: I1008 14:53:40.466523 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c"
Oct 08 14:53:40 crc kubenswrapper[4624]: E1008 14:53:40.467393 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 14:53:41 crc kubenswrapper[4624]: I1008 14:53:41.018242 4624 scope.go:117] "RemoveContainer" containerID="836668c3044f019a13428b7c757760be148b1a14b579562570e0d69dd0c2bf9b"
Oct 08 14:53:41 crc kubenswrapper[4624]: I1008 14:53:41.045381 4624 scope.go:117] "RemoveContainer" containerID="e7580b7db9931c4df2af3cf55e6073bdf9da5553f2887b5f903edc26ecaa27c3"
Oct 08 14:53:41 crc kubenswrapper[4624]: I1008 14:53:41.093220 4624 scope.go:117] "RemoveContainer" containerID="c92d14841cba9845cde9d9fc900bf900e9aeea3e6c7f45363b23f09342cb30f9"
Oct 08 14:53:41 crc kubenswrapper[4624]: I1008 14:53:41.142501 4624 scope.go:117] "RemoveContainer" containerID="8d7fd75808e5f17a6866c23ce3154e1b3a6924ce6931b4905d408e577b1c0d53"
Oct 08 14:53:41 crc kubenswrapper[4624]: I1008 14:53:41.201725 4624 scope.go:117] "RemoveContainer" containerID="78d5e1b581cb301339174eea098474f59f6c2af43fbc9a790b85c45ea2cc00e2"
Oct 08 14:53:41 crc kubenswrapper[4624]: I1008 14:53:41.256806 4624 scope.go:117] "RemoveContainer" containerID="1a43a24af53bebd6d486446158327fa5e7d4c116fc47acd313af065951398d6c"
Oct 08 14:53:55 crc kubenswrapper[4624]: I1008 14:53:55.481469 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c"
Oct 08 14:53:55 crc kubenswrapper[4624]: E1008 14:53:55.482694 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 14:54:07 crc kubenswrapper[4624]: I1008 14:54:07.045749 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kfw4n"]
Oct 08 14:54:07 crc kubenswrapper[4624]: I1008 14:54:07.054399 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kfw4n"]
Oct 08 14:54:07 crc kubenswrapper[4624]: I1008 14:54:07.465813 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c"
Oct 08 14:54:07 crc kubenswrapper[4624]: I1008 14:54:07.479954 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c299418-f575-43a5-9a75-9fe358a8770c" path="/var/lib/kubelet/pods/9c299418-f575-43a5-9a75-9fe358a8770c/volumes"
Oct 08 14:54:08 crc kubenswrapper[4624]: I1008 14:54:08.240354 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"5ea1ca72b798c395b4e373d87fbab7799599872cd0c0147a7af45b5d45ba60f8"}
Oct 08 14:54:36 crc kubenswrapper[4624]: I1008 14:54:36.051994 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-8rj4w"]
Oct 08 14:54:36 crc kubenswrapper[4624]: I1008 14:54:36.066579 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-8rj4w"]
Oct 08 14:54:37 crc kubenswrapper[4624]: I1008 14:54:37.478894 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae634848-a6be-4622-96b1-36c4f0427893" path="/var/lib/kubelet/pods/ae634848-a6be-4622-96b1-36c4f0427893/volumes"
Oct 08 14:54:41 crc kubenswrapper[4624]: I1008 14:54:41.433540 4624 scope.go:117] "RemoveContainer" containerID="f2328f4147a97e2e03d95b2e678a127ef5b569676927a37b5563be8b91393291"
Oct 08 14:54:41 crc kubenswrapper[4624]: I1008 14:54:41.482549 4624 scope.go:117] "RemoveContainer" containerID="05cb4dbb677de3c19c6c631e66dde8b2e5fbfd3a072fc16231bd274e016aef9b"
Oct 08 14:54:45 crc kubenswrapper[4624]: I1008 14:54:45.043292 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t62tf"]
Oct 08 14:54:45 crc kubenswrapper[4624]: I1008 14:54:45.053817 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t62tf"]
Oct 08 14:54:45 crc kubenswrapper[4624]: I1008 14:54:45.477411 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e779391-b6f0-47f2-b670-0267f571c9ff" path="/var/lib/kubelet/pods/5e779391-b6f0-47f2-b670-0267f571c9ff/volumes"
Oct 08 14:54:57 crc kubenswrapper[4624]: I1008 14:54:57.704603 4624 generic.go:334] "Generic (PLEG): container finished" podID="968537d8-1190-479e-a4cc-92054923d08a" containerID="f07b1a260b20c97d7b4e314d79697fa875fc6818d964564d900804351aeb1f49" exitCode=0
Oct 08 14:54:57 crc kubenswrapper[4624]: I1008 14:54:57.704708 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl" event={"ID":"968537d8-1190-479e-a4cc-92054923d08a","Type":"ContainerDied","Data":"f07b1a260b20c97d7b4e314d79697fa875fc6818d964564d900804351aeb1f49"}
Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.139029 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl"
Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.292268 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/968537d8-1190-479e-a4cc-92054923d08a-ssh-key\") pod \"968537d8-1190-479e-a4cc-92054923d08a\" (UID: \"968537d8-1190-479e-a4cc-92054923d08a\") "
Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.292526 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bl6m\" (UniqueName: \"kubernetes.io/projected/968537d8-1190-479e-a4cc-92054923d08a-kube-api-access-5bl6m\") pod \"968537d8-1190-479e-a4cc-92054923d08a\" (UID: \"968537d8-1190-479e-a4cc-92054923d08a\") "
Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.292710 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/968537d8-1190-479e-a4cc-92054923d08a-inventory\") pod \"968537d8-1190-479e-a4cc-92054923d08a\" (UID: \"968537d8-1190-479e-a4cc-92054923d08a\") "
Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.297733 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/968537d8-1190-479e-a4cc-92054923d08a-kube-api-access-5bl6m" (OuterVolumeSpecName: "kube-api-access-5bl6m") pod "968537d8-1190-479e-a4cc-92054923d08a" (UID: "968537d8-1190-479e-a4cc-92054923d08a"). InnerVolumeSpecName "kube-api-access-5bl6m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.372833 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/968537d8-1190-479e-a4cc-92054923d08a-inventory" (OuterVolumeSpecName: "inventory") pod "968537d8-1190-479e-a4cc-92054923d08a" (UID: "968537d8-1190-479e-a4cc-92054923d08a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.373254 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/968537d8-1190-479e-a4cc-92054923d08a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "968537d8-1190-479e-a4cc-92054923d08a" (UID: "968537d8-1190-479e-a4cc-92054923d08a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.394810 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/968537d8-1190-479e-a4cc-92054923d08a-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.394859 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bl6m\" (UniqueName: \"kubernetes.io/projected/968537d8-1190-479e-a4cc-92054923d08a-kube-api-access-5bl6m\") on node \"crc\" DevicePath \"\""
Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.394871 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/968537d8-1190-479e-a4cc-92054923d08a-inventory\") on node \"crc\" DevicePath \"\""
Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.721083 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl" event={"ID":"968537d8-1190-479e-a4cc-92054923d08a","Type":"ContainerDied","Data":"f959490b4dfdf0451ff4a8b63192aadb3696bf216c7d534f4b424fcb8697cccd"}
Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.721125 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f959490b4dfdf0451ff4a8b63192aadb3696bf216c7d534f4b424fcb8697cccd"
Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.721167 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl"
Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.814142 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cbght"]
Oct 08 14:54:59 crc kubenswrapper[4624]: E1008 14:54:59.814828 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968537d8-1190-479e-a4cc-92054923d08a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.814851 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="968537d8-1190-479e-a4cc-92054923d08a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.815118 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="968537d8-1190-479e-a4cc-92054923d08a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.815984 4624 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cbght" Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.817966 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.818316 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.818584 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v6sxt" Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.819087 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.831171 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cbght"] Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.903654 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ceac8d98-9a63-4c7d-876b-8d7e4acf59c4-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cbght\" (UID: \"ceac8d98-9a63-4c7d-876b-8d7e4acf59c4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cbght" Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.903729 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceac8d98-9a63-4c7d-876b-8d7e4acf59c4-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cbght\" (UID: \"ceac8d98-9a63-4c7d-876b-8d7e4acf59c4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cbght" Oct 08 14:54:59 crc kubenswrapper[4624]: I1008 14:54:59.903790 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76dp7\" (UniqueName: \"kubernetes.io/projected/ceac8d98-9a63-4c7d-876b-8d7e4acf59c4-kube-api-access-76dp7\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cbght\" (UID: \"ceac8d98-9a63-4c7d-876b-8d7e4acf59c4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cbght" Oct 08 14:55:00 crc kubenswrapper[4624]: I1008 14:55:00.006566 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ceac8d98-9a63-4c7d-876b-8d7e4acf59c4-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cbght\" (UID: \"ceac8d98-9a63-4c7d-876b-8d7e4acf59c4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cbght" Oct 08 14:55:00 crc kubenswrapper[4624]: I1008 14:55:00.007235 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceac8d98-9a63-4c7d-876b-8d7e4acf59c4-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cbght\" (UID: \"ceac8d98-9a63-4c7d-876b-8d7e4acf59c4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cbght" Oct 08 14:55:00 crc kubenswrapper[4624]: I1008 14:55:00.007314 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76dp7\" (UniqueName: \"kubernetes.io/projected/ceac8d98-9a63-4c7d-876b-8d7e4acf59c4-kube-api-access-76dp7\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-cbght\" (UID: \"ceac8d98-9a63-4c7d-876b-8d7e4acf59c4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cbght" Oct 08 14:55:00 crc kubenswrapper[4624]: I1008 14:55:00.012527 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceac8d98-9a63-4c7d-876b-8d7e4acf59c4-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cbght\" (UID: \"ceac8d98-9a63-4c7d-876b-8d7e4acf59c4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cbght" Oct 08 14:55:00 crc kubenswrapper[4624]: I1008 14:55:00.012735 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ceac8d98-9a63-4c7d-876b-8d7e4acf59c4-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cbght\" (UID: \"ceac8d98-9a63-4c7d-876b-8d7e4acf59c4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cbght" Oct 08 14:55:00 crc kubenswrapper[4624]: I1008 14:55:00.028998 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76dp7\" (UniqueName: \"kubernetes.io/projected/ceac8d98-9a63-4c7d-876b-8d7e4acf59c4-kube-api-access-76dp7\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cbght\" (UID: \"ceac8d98-9a63-4c7d-876b-8d7e4acf59c4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cbght" Oct 08 14:55:00 crc kubenswrapper[4624]: I1008 14:55:00.133132 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cbght" Oct 08 14:55:00 crc kubenswrapper[4624]: I1008 14:55:00.687527 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cbght"] Oct 08 14:55:00 crc kubenswrapper[4624]: I1008 14:55:00.739811 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cbght" event={"ID":"ceac8d98-9a63-4c7d-876b-8d7e4acf59c4","Type":"ContainerStarted","Data":"9f0ba788a5bf2aee2918c6446e16f6ecf391b160f0389d1e99da6a1aab84db82"} Oct 08 14:55:01 crc kubenswrapper[4624]: I1008 14:55:01.766043 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cbght" event={"ID":"ceac8d98-9a63-4c7d-876b-8d7e4acf59c4","Type":"ContainerStarted","Data":"8b8d83a2295c6dfa55f63f4c4adac2b7c9d88b0a51314ae569312068278a0862"} Oct 08 14:55:01 crc kubenswrapper[4624]: I1008 14:55:01.798833 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cbght" podStartSLOduration=2.319564595 podStartE2EDuration="2.798815135s" podCreationTimestamp="2025-10-08 14:54:59 +0000 UTC" firstStartedPulling="2025-10-08 14:55:00.70345686 +0000 UTC m=+1925.854391937" lastFinishedPulling="2025-10-08 14:55:01.18270741 +0000 UTC m=+1926.333642477" observedRunningTime="2025-10-08 14:55:01.794831041 +0000 UTC m=+1926.945766118" watchObservedRunningTime="2025-10-08 14:55:01.798815135 +0000 UTC m=+1926.949750212" Oct 08 14:55:06 crc kubenswrapper[4624]: I1008 14:55:06.818092 4624 generic.go:334] "Generic (PLEG): container finished" podID="ceac8d98-9a63-4c7d-876b-8d7e4acf59c4" containerID="8b8d83a2295c6dfa55f63f4c4adac2b7c9d88b0a51314ae569312068278a0862" exitCode=0 Oct 08 14:55:06 crc kubenswrapper[4624]: I1008 
14:55:06.818210 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cbght" event={"ID":"ceac8d98-9a63-4c7d-876b-8d7e4acf59c4","Type":"ContainerDied","Data":"8b8d83a2295c6dfa55f63f4c4adac2b7c9d88b0a51314ae569312068278a0862"} Oct 08 14:55:08 crc kubenswrapper[4624]: I1008 14:55:08.256723 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cbght" Oct 08 14:55:08 crc kubenswrapper[4624]: I1008 14:55:08.380313 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceac8d98-9a63-4c7d-876b-8d7e4acf59c4-inventory\") pod \"ceac8d98-9a63-4c7d-876b-8d7e4acf59c4\" (UID: \"ceac8d98-9a63-4c7d-876b-8d7e4acf59c4\") " Oct 08 14:55:08 crc kubenswrapper[4624]: I1008 14:55:08.381702 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ceac8d98-9a63-4c7d-876b-8d7e4acf59c4-ssh-key\") pod \"ceac8d98-9a63-4c7d-876b-8d7e4acf59c4\" (UID: \"ceac8d98-9a63-4c7d-876b-8d7e4acf59c4\") " Oct 08 14:55:08 crc kubenswrapper[4624]: I1008 14:55:08.381761 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76dp7\" (UniqueName: \"kubernetes.io/projected/ceac8d98-9a63-4c7d-876b-8d7e4acf59c4-kube-api-access-76dp7\") pod \"ceac8d98-9a63-4c7d-876b-8d7e4acf59c4\" (UID: \"ceac8d98-9a63-4c7d-876b-8d7e4acf59c4\") " Oct 08 14:55:08 crc kubenswrapper[4624]: I1008 14:55:08.388911 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceac8d98-9a63-4c7d-876b-8d7e4acf59c4-kube-api-access-76dp7" (OuterVolumeSpecName: "kube-api-access-76dp7") pod "ceac8d98-9a63-4c7d-876b-8d7e4acf59c4" (UID: "ceac8d98-9a63-4c7d-876b-8d7e4acf59c4"). InnerVolumeSpecName "kube-api-access-76dp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:55:08 crc kubenswrapper[4624]: I1008 14:55:08.417722 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceac8d98-9a63-4c7d-876b-8d7e4acf59c4-inventory" (OuterVolumeSpecName: "inventory") pod "ceac8d98-9a63-4c7d-876b-8d7e4acf59c4" (UID: "ceac8d98-9a63-4c7d-876b-8d7e4acf59c4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:55:08 crc kubenswrapper[4624]: I1008 14:55:08.418476 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceac8d98-9a63-4c7d-876b-8d7e4acf59c4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ceac8d98-9a63-4c7d-876b-8d7e4acf59c4" (UID: "ceac8d98-9a63-4c7d-876b-8d7e4acf59c4"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:55:08 crc kubenswrapper[4624]: I1008 14:55:08.484210 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76dp7\" (UniqueName: \"kubernetes.io/projected/ceac8d98-9a63-4c7d-876b-8d7e4acf59c4-kube-api-access-76dp7\") on node \"crc\" DevicePath \"\"" Oct 08 14:55:08 crc kubenswrapper[4624]: I1008 14:55:08.484261 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceac8d98-9a63-4c7d-876b-8d7e4acf59c4-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 14:55:08 crc kubenswrapper[4624]: I1008 14:55:08.484277 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ceac8d98-9a63-4c7d-876b-8d7e4acf59c4-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 14:55:08 crc kubenswrapper[4624]: I1008 14:55:08.838535 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cbght" event={"ID":"ceac8d98-9a63-4c7d-876b-8d7e4acf59c4","Type":"ContainerDied","Data":"9f0ba788a5bf2aee2918c6446e16f6ecf391b160f0389d1e99da6a1aab84db82"} Oct 08 14:55:08 crc kubenswrapper[4624]: I1008 14:55:08.838577 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f0ba788a5bf2aee2918c6446e16f6ecf391b160f0389d1e99da6a1aab84db82" Oct 08 14:55:08 crc kubenswrapper[4624]: I1008 14:55:08.838851 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cbght" Oct 08 14:55:08 crc kubenswrapper[4624]: I1008 14:55:08.929626 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fmk57"] Oct 08 14:55:08 crc kubenswrapper[4624]: E1008 14:55:08.930099 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceac8d98-9a63-4c7d-876b-8d7e4acf59c4" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 08 14:55:08 crc kubenswrapper[4624]: I1008 14:55:08.930123 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceac8d98-9a63-4c7d-876b-8d7e4acf59c4" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 08 14:55:08 crc kubenswrapper[4624]: I1008 14:55:08.930362 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceac8d98-9a63-4c7d-876b-8d7e4acf59c4" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 08 14:55:08 crc kubenswrapper[4624]: I1008 14:55:08.931208 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fmk57" Oct 08 14:55:08 crc kubenswrapper[4624]: I1008 14:55:08.939232 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 14:55:08 crc kubenswrapper[4624]: I1008 14:55:08.939581 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 08 14:55:08 crc kubenswrapper[4624]: I1008 14:55:08.941623 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 08 14:55:08 crc kubenswrapper[4624]: I1008 14:55:08.942673 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v6sxt" Oct 08 14:55:08 crc kubenswrapper[4624]: I1008 14:55:08.954535 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fmk57"] Oct 08 14:55:09 crc kubenswrapper[4624]: I1008 14:55:09.005353 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43155da2-e389-481e-8e9d-8c219482ba50-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fmk57\" (UID: \"43155da2-e389-481e-8e9d-8c219482ba50\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fmk57" Oct 08 14:55:09 crc kubenswrapper[4624]: I1008 14:55:09.005490 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43155da2-e389-481e-8e9d-8c219482ba50-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fmk57\" (UID: \"43155da2-e389-481e-8e9d-8c219482ba50\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fmk57" Oct 08 14:55:09 crc kubenswrapper[4624]: I1008 14:55:09.005515 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn75g\" (UniqueName: \"kubernetes.io/projected/43155da2-e389-481e-8e9d-8c219482ba50-kube-api-access-hn75g\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fmk57\" (UID: \"43155da2-e389-481e-8e9d-8c219482ba50\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fmk57" Oct 08 14:55:09 crc kubenswrapper[4624]: I1008 14:55:09.107662 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43155da2-e389-481e-8e9d-8c219482ba50-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fmk57\" (UID: \"43155da2-e389-481e-8e9d-8c219482ba50\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fmk57" Oct 08 14:55:09 crc kubenswrapper[4624]: I1008 14:55:09.107728 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn75g\" (UniqueName: \"kubernetes.io/projected/43155da2-e389-481e-8e9d-8c219482ba50-kube-api-access-hn75g\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fmk57\" (UID: \"43155da2-e389-481e-8e9d-8c219482ba50\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fmk57" Oct 08 14:55:09 crc kubenswrapper[4624]: I1008 14:55:09.107851 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43155da2-e389-481e-8e9d-8c219482ba50-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fmk57\" (UID: 
\"43155da2-e389-481e-8e9d-8c219482ba50\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fmk57" Oct 08 14:55:09 crc kubenswrapper[4624]: I1008 14:55:09.113541 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43155da2-e389-481e-8e9d-8c219482ba50-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fmk57\" (UID: \"43155da2-e389-481e-8e9d-8c219482ba50\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fmk57" Oct 08 14:55:09 crc kubenswrapper[4624]: I1008 14:55:09.113571 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43155da2-e389-481e-8e9d-8c219482ba50-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fmk57\" (UID: \"43155da2-e389-481e-8e9d-8c219482ba50\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fmk57" Oct 08 14:55:09 crc kubenswrapper[4624]: I1008 14:55:09.128165 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn75g\" (UniqueName: \"kubernetes.io/projected/43155da2-e389-481e-8e9d-8c219482ba50-kube-api-access-hn75g\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fmk57\" (UID: \"43155da2-e389-481e-8e9d-8c219482ba50\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fmk57" Oct 08 14:55:09 crc kubenswrapper[4624]: I1008 14:55:09.259563 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fmk57" Oct 08 14:55:09 crc kubenswrapper[4624]: I1008 14:55:09.812009 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fmk57"] Oct 08 14:55:09 crc kubenswrapper[4624]: I1008 14:55:09.852113 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fmk57" event={"ID":"43155da2-e389-481e-8e9d-8c219482ba50","Type":"ContainerStarted","Data":"0bb6387f1c57da33dfd23465146bb90b50be3c4fde95fbbc6080d416e3b95289"} Oct 08 14:55:10 crc kubenswrapper[4624]: I1008 14:55:10.870710 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fmk57" event={"ID":"43155da2-e389-481e-8e9d-8c219482ba50","Type":"ContainerStarted","Data":"01482e45d0ba430237bc92518224786beb5d59cc41c86f1fde0ae4d04a67500d"} Oct 08 14:55:10 crc kubenswrapper[4624]: I1008 14:55:10.894955 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fmk57" podStartSLOduration=2.517211178 podStartE2EDuration="2.894934605s" podCreationTimestamp="2025-10-08 14:55:08 +0000 UTC" firstStartedPulling="2025-10-08 14:55:09.815520225 +0000 UTC m=+1934.966455302" lastFinishedPulling="2025-10-08 14:55:10.193243652 +0000 UTC m=+1935.344178729" observedRunningTime="2025-10-08 14:55:10.889986053 +0000 UTC m=+1936.040921231" watchObservedRunningTime="2025-10-08 14:55:10.894934605 +0000 UTC m=+1936.045869682" Oct 08 14:55:20 crc kubenswrapper[4624]: I1008 14:55:20.039883 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-r2jkt"] Oct 08 14:55:20 crc kubenswrapper[4624]: I1008 14:55:20.047376 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-r2jkt"] Oct 08 14:55:21 crc kubenswrapper[4624]: I1008 14:55:21.476863 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="e373c0bb-06d9-45b9-ac73-eb55282374f4" path="/var/lib/kubelet/pods/e373c0bb-06d9-45b9-ac73-eb55282374f4/volumes" Oct 08 14:55:41 crc kubenswrapper[4624]: I1008 14:55:41.641249 4624 scope.go:117] "RemoveContainer" containerID="d32422146bc184c711eeaf0da36e9227381abf110459af28f1479dc7033e276a" Oct 08 14:55:41 crc kubenswrapper[4624]: I1008 14:55:41.690758 4624 scope.go:117] "RemoveContainer" containerID="b4a70bc451a76630b0c6fe0461307069449dc7dcd8f6128bb1b83fc8dfa501b7" Oct 08 14:55:50 crc kubenswrapper[4624]: I1008 14:55:50.197079 4624 generic.go:334] "Generic (PLEG): container finished" podID="43155da2-e389-481e-8e9d-8c219482ba50" containerID="01482e45d0ba430237bc92518224786beb5d59cc41c86f1fde0ae4d04a67500d" exitCode=0 Oct 08 14:55:50 crc kubenswrapper[4624]: I1008 14:55:50.197173 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fmk57" event={"ID":"43155da2-e389-481e-8e9d-8c219482ba50","Type":"ContainerDied","Data":"01482e45d0ba430237bc92518224786beb5d59cc41c86f1fde0ae4d04a67500d"} Oct 08 14:55:51 crc kubenswrapper[4624]: I1008 14:55:51.644677 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fmk57" Oct 08 14:55:51 crc kubenswrapper[4624]: I1008 14:55:51.770742 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn75g\" (UniqueName: \"kubernetes.io/projected/43155da2-e389-481e-8e9d-8c219482ba50-kube-api-access-hn75g\") pod \"43155da2-e389-481e-8e9d-8c219482ba50\" (UID: \"43155da2-e389-481e-8e9d-8c219482ba50\") " Oct 08 14:55:51 crc kubenswrapper[4624]: I1008 14:55:51.770890 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43155da2-e389-481e-8e9d-8c219482ba50-inventory\") pod \"43155da2-e389-481e-8e9d-8c219482ba50\" (UID: \"43155da2-e389-481e-8e9d-8c219482ba50\") " Oct 08 14:55:51 crc kubenswrapper[4624]: I1008 14:55:51.770945 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43155da2-e389-481e-8e9d-8c219482ba50-ssh-key\") pod \"43155da2-e389-481e-8e9d-8c219482ba50\" (UID: \"43155da2-e389-481e-8e9d-8c219482ba50\") " Oct 08 14:55:51 crc kubenswrapper[4624]: I1008 14:55:51.784879 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43155da2-e389-481e-8e9d-8c219482ba50-kube-api-access-hn75g" (OuterVolumeSpecName: "kube-api-access-hn75g") pod "43155da2-e389-481e-8e9d-8c219482ba50" (UID: "43155da2-e389-481e-8e9d-8c219482ba50"). InnerVolumeSpecName "kube-api-access-hn75g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:55:51 crc kubenswrapper[4624]: I1008 14:55:51.802882 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43155da2-e389-481e-8e9d-8c219482ba50-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "43155da2-e389-481e-8e9d-8c219482ba50" (UID: "43155da2-e389-481e-8e9d-8c219482ba50"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:55:51 crc kubenswrapper[4624]: I1008 14:55:51.810108 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43155da2-e389-481e-8e9d-8c219482ba50-inventory" (OuterVolumeSpecName: "inventory") pod "43155da2-e389-481e-8e9d-8c219482ba50" (UID: "43155da2-e389-481e-8e9d-8c219482ba50"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:55:51 crc kubenswrapper[4624]: I1008 14:55:51.873280 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn75g\" (UniqueName: \"kubernetes.io/projected/43155da2-e389-481e-8e9d-8c219482ba50-kube-api-access-hn75g\") on node \"crc\" DevicePath \"\"" Oct 08 14:55:51 crc kubenswrapper[4624]: I1008 14:55:51.873333 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43155da2-e389-481e-8e9d-8c219482ba50-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 14:55:51 crc kubenswrapper[4624]: I1008 14:55:51.873350 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43155da2-e389-481e-8e9d-8c219482ba50-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 14:55:52 crc kubenswrapper[4624]: I1008 14:55:52.215442 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fmk57" event={"ID":"43155da2-e389-481e-8e9d-8c219482ba50","Type":"ContainerDied","Data":"0bb6387f1c57da33dfd23465146bb90b50be3c4fde95fbbc6080d416e3b95289"} Oct 08 14:55:52 crc kubenswrapper[4624]: I1008 14:55:52.215488 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bb6387f1c57da33dfd23465146bb90b50be3c4fde95fbbc6080d416e3b95289" Oct 08 14:55:52 crc kubenswrapper[4624]: I1008 14:55:52.215519 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fmk57" Oct 08 14:55:52 crc kubenswrapper[4624]: I1008 14:55:52.295653 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f47x7"] Oct 08 14:55:52 crc kubenswrapper[4624]: E1008 14:55:52.296129 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43155da2-e389-481e-8e9d-8c219482ba50" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 08 14:55:52 crc kubenswrapper[4624]: I1008 14:55:52.296153 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="43155da2-e389-481e-8e9d-8c219482ba50" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 08 14:55:52 crc kubenswrapper[4624]: I1008 14:55:52.296328 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="43155da2-e389-481e-8e9d-8c219482ba50" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 08 14:55:52 crc kubenswrapper[4624]: I1008 14:55:52.297135 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f47x7" Oct 08 14:55:52 crc kubenswrapper[4624]: I1008 14:55:52.298727 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 14:55:52 crc kubenswrapper[4624]: I1008 14:55:52.299225 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 08 14:55:52 crc kubenswrapper[4624]: I1008 14:55:52.299810 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v6sxt" Oct 08 14:55:52 crc kubenswrapper[4624]: I1008 14:55:52.303731 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 08 14:55:52 crc kubenswrapper[4624]: I1008 14:55:52.311169 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f47x7"] Oct 08 14:55:52 crc kubenswrapper[4624]: I1008 14:55:52.382534 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b7qs\" (UniqueName: \"kubernetes.io/projected/5f5a0d83-3c47-439d-9d82-12e0c8afdf45-kube-api-access-6b7qs\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f47x7\" (UID: \"5f5a0d83-3c47-439d-9d82-12e0c8afdf45\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f47x7" Oct 08 14:55:52 crc kubenswrapper[4624]: I1008 14:55:52.382807 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f5a0d83-3c47-439d-9d82-12e0c8afdf45-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f47x7\" (UID: \"5f5a0d83-3c47-439d-9d82-12e0c8afdf45\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f47x7" Oct 08 14:55:52 crc kubenswrapper[4624]: I1008 14:55:52.383240 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f5a0d83-3c47-439d-9d82-12e0c8afdf45-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f47x7\" (UID: \"5f5a0d83-3c47-439d-9d82-12e0c8afdf45\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f47x7" Oct 08 14:55:52 crc kubenswrapper[4624]: I1008 14:55:52.485452 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f5a0d83-3c47-439d-9d82-12e0c8afdf45-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f47x7\" (UID: \"5f5a0d83-3c47-439d-9d82-12e0c8afdf45\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f47x7" Oct 08 14:55:52 crc kubenswrapper[4624]: I1008 14:55:52.485595 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f5a0d83-3c47-439d-9d82-12e0c8afdf45-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f47x7\" (UID: \"5f5a0d83-3c47-439d-9d82-12e0c8afdf45\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f47x7" Oct 08 14:55:52 crc kubenswrapper[4624]: I1008 14:55:52.485675 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b7qs\" (UniqueName: \"kubernetes.io/projected/5f5a0d83-3c47-439d-9d82-12e0c8afdf45-kube-api-access-6b7qs\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f47x7\" 
(UID: \"5f5a0d83-3c47-439d-9d82-12e0c8afdf45\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f47x7" Oct 08 14:55:52 crc kubenswrapper[4624]: I1008 14:55:52.490731 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f5a0d83-3c47-439d-9d82-12e0c8afdf45-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f47x7\" (UID: \"5f5a0d83-3c47-439d-9d82-12e0c8afdf45\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f47x7" Oct 08 14:55:52 crc kubenswrapper[4624]: I1008 14:55:52.491189 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f5a0d83-3c47-439d-9d82-12e0c8afdf45-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f47x7\" (UID: \"5f5a0d83-3c47-439d-9d82-12e0c8afdf45\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f47x7" Oct 08 14:55:52 crc kubenswrapper[4624]: I1008 14:55:52.514121 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b7qs\" (UniqueName: \"kubernetes.io/projected/5f5a0d83-3c47-439d-9d82-12e0c8afdf45-kube-api-access-6b7qs\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f47x7\" (UID: \"5f5a0d83-3c47-439d-9d82-12e0c8afdf45\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f47x7" Oct 08 14:55:52 crc kubenswrapper[4624]: I1008 14:55:52.614750 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f47x7" Oct 08 14:55:53 crc kubenswrapper[4624]: I1008 14:55:53.144147 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f47x7"] Oct 08 14:55:53 crc kubenswrapper[4624]: W1008 14:55:53.153768 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f5a0d83_3c47_439d_9d82_12e0c8afdf45.slice/crio-0b388b5bd94b399d96c4d760bba095112ef965d77aa8be5db2226206ae6deef7 WatchSource:0}: Error finding container 0b388b5bd94b399d96c4d760bba095112ef965d77aa8be5db2226206ae6deef7: Status 404 returned error can't find the container with id 0b388b5bd94b399d96c4d760bba095112ef965d77aa8be5db2226206ae6deef7 Oct 08 14:55:53 crc kubenswrapper[4624]: I1008 14:55:53.227310 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f47x7" event={"ID":"5f5a0d83-3c47-439d-9d82-12e0c8afdf45","Type":"ContainerStarted","Data":"0b388b5bd94b399d96c4d760bba095112ef965d77aa8be5db2226206ae6deef7"} Oct 08 14:55:54 crc kubenswrapper[4624]: I1008 14:55:54.238449 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f47x7" event={"ID":"5f5a0d83-3c47-439d-9d82-12e0c8afdf45","Type":"ContainerStarted","Data":"efdbbb7e582cd57c1d9f9412e582813c3c0a3819bbfac406ba26d6af40410c1f"} Oct 08 14:55:54 crc kubenswrapper[4624]: I1008 14:55:54.257652 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f47x7" podStartSLOduration=1.7934934 podStartE2EDuration="2.257610107s" podCreationTimestamp="2025-10-08 14:55:52 +0000 UTC" firstStartedPulling="2025-10-08 14:55:53.156333011 +0000 UTC m=+1978.307268088" lastFinishedPulling="2025-10-08 14:55:53.620449728 +0000 UTC m=+1978.771384795" observedRunningTime="2025-10-08 
14:55:54.255576255 +0000 UTC m=+1979.406511352" watchObservedRunningTime="2025-10-08 14:55:54.257610107 +0000 UTC m=+1979.408545184" Oct 08 14:56:30 crc kubenswrapper[4624]: I1008 14:56:30.076573 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:56:30 crc kubenswrapper[4624]: I1008 14:56:30.077289 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:56:50 crc kubenswrapper[4624]: I1008 14:56:50.725697 4624 generic.go:334] "Generic (PLEG): container finished" podID="5f5a0d83-3c47-439d-9d82-12e0c8afdf45" containerID="efdbbb7e582cd57c1d9f9412e582813c3c0a3819bbfac406ba26d6af40410c1f" exitCode=2 Oct 08 14:56:50 crc kubenswrapper[4624]: I1008 14:56:50.725807 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f47x7" event={"ID":"5f5a0d83-3c47-439d-9d82-12e0c8afdf45","Type":"ContainerDied","Data":"efdbbb7e582cd57c1d9f9412e582813c3c0a3819bbfac406ba26d6af40410c1f"} Oct 08 14:56:52 crc kubenswrapper[4624]: I1008 14:56:52.189481 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f47x7" Oct 08 14:56:52 crc kubenswrapper[4624]: I1008 14:56:52.305222 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f5a0d83-3c47-439d-9d82-12e0c8afdf45-inventory\") pod \"5f5a0d83-3c47-439d-9d82-12e0c8afdf45\" (UID: \"5f5a0d83-3c47-439d-9d82-12e0c8afdf45\") " Oct 08 14:56:52 crc kubenswrapper[4624]: I1008 14:56:52.305722 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b7qs\" (UniqueName: \"kubernetes.io/projected/5f5a0d83-3c47-439d-9d82-12e0c8afdf45-kube-api-access-6b7qs\") pod \"5f5a0d83-3c47-439d-9d82-12e0c8afdf45\" (UID: \"5f5a0d83-3c47-439d-9d82-12e0c8afdf45\") " Oct 08 14:56:52 crc kubenswrapper[4624]: I1008 14:56:52.306937 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f5a0d83-3c47-439d-9d82-12e0c8afdf45-ssh-key\") pod \"5f5a0d83-3c47-439d-9d82-12e0c8afdf45\" (UID: \"5f5a0d83-3c47-439d-9d82-12e0c8afdf45\") " Oct 08 14:56:52 crc kubenswrapper[4624]: I1008 14:56:52.312872 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f5a0d83-3c47-439d-9d82-12e0c8afdf45-kube-api-access-6b7qs" (OuterVolumeSpecName: "kube-api-access-6b7qs") pod "5f5a0d83-3c47-439d-9d82-12e0c8afdf45" (UID: "5f5a0d83-3c47-439d-9d82-12e0c8afdf45"). InnerVolumeSpecName "kube-api-access-6b7qs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:56:52 crc kubenswrapper[4624]: I1008 14:56:52.336118 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f5a0d83-3c47-439d-9d82-12e0c8afdf45-inventory" (OuterVolumeSpecName: "inventory") pod "5f5a0d83-3c47-439d-9d82-12e0c8afdf45" (UID: "5f5a0d83-3c47-439d-9d82-12e0c8afdf45"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:56:52 crc kubenswrapper[4624]: I1008 14:56:52.337334 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f5a0d83-3c47-439d-9d82-12e0c8afdf45-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5f5a0d83-3c47-439d-9d82-12e0c8afdf45" (UID: "5f5a0d83-3c47-439d-9d82-12e0c8afdf45"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:56:52 crc kubenswrapper[4624]: I1008 14:56:52.410287 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f5a0d83-3c47-439d-9d82-12e0c8afdf45-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 14:56:52 crc kubenswrapper[4624]: I1008 14:56:52.410334 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b7qs\" (UniqueName: \"kubernetes.io/projected/5f5a0d83-3c47-439d-9d82-12e0c8afdf45-kube-api-access-6b7qs\") on node \"crc\" DevicePath \"\"" Oct 08 14:56:52 crc kubenswrapper[4624]: I1008 14:56:52.410350 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f5a0d83-3c47-439d-9d82-12e0c8afdf45-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 14:56:52 crc kubenswrapper[4624]: I1008 14:56:52.752973 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f47x7" event={"ID":"5f5a0d83-3c47-439d-9d82-12e0c8afdf45","Type":"ContainerDied","Data":"0b388b5bd94b399d96c4d760bba095112ef965d77aa8be5db2226206ae6deef7"} Oct 08 14:56:52 crc kubenswrapper[4624]: I1008 14:56:52.753021 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b388b5bd94b399d96c4d760bba095112ef965d77aa8be5db2226206ae6deef7" Oct 08 14:56:52 crc kubenswrapper[4624]: I1008 14:56:52.753092 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f47x7" Oct 08 14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.032307 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp"] Oct 08 14:57:00 crc kubenswrapper[4624]: E1008 14:57:00.033203 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5a0d83-3c47-439d-9d82-12e0c8afdf45" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 08 14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.033216 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5a0d83-3c47-439d-9d82-12e0c8afdf45" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 08 14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.033402 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f5a0d83-3c47-439d-9d82-12e0c8afdf45" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 08 14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.034060 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp" Oct 08 14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.036499 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v6sxt" Oct 08 14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.036825 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 08 14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.037005 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.046874 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp"] Oct 08 14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.047626 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 08 14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.076111 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.076416 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.158671 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp\" (UID: \"b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp" Oct 08 14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.159031 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkn4r\" (UniqueName: \"kubernetes.io/projected/b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64-kube-api-access-mkn4r\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp\" (UID: \"b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp" Oct 08 14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.159357 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp\" (UID: \"b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp" Oct 08 14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.261600 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp\" (UID: \"b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp" Oct 08 
14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.262222 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkn4r\" (UniqueName: \"kubernetes.io/projected/b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64-kube-api-access-mkn4r\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp\" (UID: \"b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp" Oct 08 14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.262436 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp\" (UID: \"b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp" Oct 08 14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.269657 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp\" (UID: \"b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp" Oct 08 14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.274178 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp\" (UID: \"b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp" Oct 08 14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.281016 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkn4r\" (UniqueName: \"kubernetes.io/projected/b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64-kube-api-access-mkn4r\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp\" (UID: \"b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp" Oct 08 14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.367227 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp" Oct 08 14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.915699 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp"] Oct 08 14:57:00 crc kubenswrapper[4624]: I1008 14:57:00.946596 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 14:57:01 crc kubenswrapper[4624]: I1008 14:57:01.830961 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp" event={"ID":"b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64","Type":"ContainerStarted","Data":"270360bf07006964487bf89c9a13f53f74dd53f31302da929c2998dd5372a363"} Oct 08 14:57:01 crc kubenswrapper[4624]: I1008 14:57:01.831304 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp" event={"ID":"b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64","Type":"ContainerStarted","Data":"968ee4a7246cf52a444754af8b6c91798123cf930beee17e3cdb69c1c4a5f1df"} Oct 08 14:57:01 crc kubenswrapper[4624]: I1008 14:57:01.851973 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp" podStartSLOduration=1.379682296 podStartE2EDuration="1.851903122s" podCreationTimestamp="2025-10-08 14:57:00 +0000 UTC" firstStartedPulling="2025-10-08 14:57:00.945598461 +0000 UTC m=+2046.096533538" lastFinishedPulling="2025-10-08 14:57:01.417819287 +0000 UTC m=+2046.568754364" observedRunningTime="2025-10-08 14:57:01.85077094 +0000 UTC m=+2047.001706017" watchObservedRunningTime="2025-10-08 14:57:01.851903122 +0000 UTC m=+2047.002838189" Oct 08 14:57:30 crc kubenswrapper[4624]: I1008 14:57:30.076082 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:57:30 crc kubenswrapper[4624]: I1008 14:57:30.076607 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 14:57:30 crc kubenswrapper[4624]: I1008 14:57:30.076681 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 14:57:30 crc kubenswrapper[4624]: I1008 14:57:30.077485 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ea1ca72b798c395b4e373d87fbab7799599872cd0c0147a7af45b5d45ba60f8"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 14:57:30 crc kubenswrapper[4624]: I1008 14:57:30.077547 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://5ea1ca72b798c395b4e373d87fbab7799599872cd0c0147a7af45b5d45ba60f8" 
gracePeriod=600 Oct 08 14:57:31 crc kubenswrapper[4624]: I1008 14:57:31.063658 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="5ea1ca72b798c395b4e373d87fbab7799599872cd0c0147a7af45b5d45ba60f8" exitCode=0 Oct 08 14:57:31 crc kubenswrapper[4624]: I1008 14:57:31.063669 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"5ea1ca72b798c395b4e373d87fbab7799599872cd0c0147a7af45b5d45ba60f8"} Oct 08 14:57:31 crc kubenswrapper[4624]: I1008 14:57:31.064031 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927"} Oct 08 14:57:31 crc kubenswrapper[4624]: I1008 14:57:31.064051 4624 scope.go:117] "RemoveContainer" containerID="24b367fa12764e9bb3f1cf51cae821e47200b3b40ca60d3589abcdcb7a63e88c" Oct 08 14:57:50 crc kubenswrapper[4624]: I1008 14:57:50.220219 4624 generic.go:334] "Generic (PLEG): container finished" podID="b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64" containerID="270360bf07006964487bf89c9a13f53f74dd53f31302da929c2998dd5372a363" exitCode=0 Oct 08 14:57:50 crc kubenswrapper[4624]: I1008 14:57:50.221871 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp" event={"ID":"b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64","Type":"ContainerDied","Data":"270360bf07006964487bf89c9a13f53f74dd53f31302da929c2998dd5372a363"} Oct 08 14:57:50 crc kubenswrapper[4624]: I1008 14:57:50.959628 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x7bg2"] Oct 08 14:57:50 crc kubenswrapper[4624]: I1008 14:57:50.962504 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x7bg2" Oct 08 14:57:50 crc kubenswrapper[4624]: I1008 14:57:50.982078 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x7bg2"] Oct 08 14:57:51 crc kubenswrapper[4624]: I1008 14:57:51.033353 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc06c149-97c4-4f52-aa4c-915deca6cbc4-catalog-content\") pod \"community-operators-x7bg2\" (UID: \"dc06c149-97c4-4f52-aa4c-915deca6cbc4\") " pod="openshift-marketplace/community-operators-x7bg2" Oct 08 14:57:51 crc kubenswrapper[4624]: I1008 14:57:51.033491 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc06c149-97c4-4f52-aa4c-915deca6cbc4-utilities\") pod \"community-operators-x7bg2\" (UID: \"dc06c149-97c4-4f52-aa4c-915deca6cbc4\") " pod="openshift-marketplace/community-operators-x7bg2" Oct 08 14:57:51 crc kubenswrapper[4624]: I1008 14:57:51.033553 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr6c8\" (UniqueName: \"kubernetes.io/projected/dc06c149-97c4-4f52-aa4c-915deca6cbc4-kube-api-access-cr6c8\") pod \"community-operators-x7bg2\" (UID: \"dc06c149-97c4-4f52-aa4c-915deca6cbc4\") " pod="openshift-marketplace/community-operators-x7bg2" Oct 08 14:57:51 crc kubenswrapper[4624]: I1008 14:57:51.135964 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc06c149-97c4-4f52-aa4c-915deca6cbc4-catalog-content\") pod \"community-operators-x7bg2\" (UID: \"dc06c149-97c4-4f52-aa4c-915deca6cbc4\") " pod="openshift-marketplace/community-operators-x7bg2" Oct 08 14:57:51 crc kubenswrapper[4624]: I1008 14:57:51.136087 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc06c149-97c4-4f52-aa4c-915deca6cbc4-utilities\") pod \"community-operators-x7bg2\" (UID: \"dc06c149-97c4-4f52-aa4c-915deca6cbc4\") " pod="openshift-marketplace/community-operators-x7bg2" Oct 08 14:57:51 crc kubenswrapper[4624]: I1008 14:57:51.136172 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr6c8\" (UniqueName: \"kubernetes.io/projected/dc06c149-97c4-4f52-aa4c-915deca6cbc4-kube-api-access-cr6c8\") pod \"community-operators-x7bg2\" (UID: \"dc06c149-97c4-4f52-aa4c-915deca6cbc4\") " pod="openshift-marketplace/community-operators-x7bg2" Oct 08 14:57:51 crc kubenswrapper[4624]: I1008 14:57:51.136630 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc06c149-97c4-4f52-aa4c-915deca6cbc4-catalog-content\") pod \"community-operators-x7bg2\" (UID: \"dc06c149-97c4-4f52-aa4c-915deca6cbc4\") " pod="openshift-marketplace/community-operators-x7bg2" Oct 08 14:57:51 crc kubenswrapper[4624]: I1008 14:57:51.136726 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc06c149-97c4-4f52-aa4c-915deca6cbc4-utilities\") pod \"community-operators-x7bg2\" (UID: \"dc06c149-97c4-4f52-aa4c-915deca6cbc4\") " pod="openshift-marketplace/community-operators-x7bg2" Oct 08 14:57:51 crc kubenswrapper[4624]: I1008 14:57:51.155218 4624 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cr6c8\" (UniqueName: \"kubernetes.io/projected/dc06c149-97c4-4f52-aa4c-915deca6cbc4-kube-api-access-cr6c8\") pod \"community-operators-x7bg2\" (UID: \"dc06c149-97c4-4f52-aa4c-915deca6cbc4\") " pod="openshift-marketplace/community-operators-x7bg2" Oct 08 14:57:51 crc kubenswrapper[4624]: I1008 14:57:51.287181 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x7bg2" Oct 08 14:57:51 crc kubenswrapper[4624]: I1008 14:57:51.863594 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp" Oct 08 14:57:51 crc kubenswrapper[4624]: I1008 14:57:51.951970 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkn4r\" (UniqueName: \"kubernetes.io/projected/b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64-kube-api-access-mkn4r\") pod \"b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64\" (UID: \"b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64\") " Oct 08 14:57:51 crc kubenswrapper[4624]: I1008 14:57:51.952579 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64-ssh-key\") pod \"b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64\" (UID: \"b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64\") " Oct 08 14:57:51 crc kubenswrapper[4624]: I1008 14:57:51.952617 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64-inventory\") pod \"b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64\" (UID: \"b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64\") " Oct 08 14:57:51 crc kubenswrapper[4624]: I1008 14:57:51.957963 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64-kube-api-access-mkn4r" (OuterVolumeSpecName: "kube-api-access-mkn4r") pod "b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64" (UID: "b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64"). InnerVolumeSpecName "kube-api-access-mkn4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:57:51 crc kubenswrapper[4624]: I1008 14:57:51.975480 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x7bg2"] Oct 08 14:57:51 crc kubenswrapper[4624]: W1008 14:57:51.978422 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc06c149_97c4_4f52_aa4c_915deca6cbc4.slice/crio-9386af9540fd9cfa6674082fd33e4a009bdc122e78c3fabf04c148737497274c WatchSource:0}: Error finding container 9386af9540fd9cfa6674082fd33e4a009bdc122e78c3fabf04c148737497274c: Status 404 returned error can't find the container with id 9386af9540fd9cfa6674082fd33e4a009bdc122e78c3fabf04c148737497274c Oct 08 14:57:51 crc kubenswrapper[4624]: I1008 14:57:51.983697 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64-inventory" (OuterVolumeSpecName: "inventory") pod "b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64" (UID: "b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:57:51 crc kubenswrapper[4624]: I1008 14:57:51.989286 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64" (UID: "b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.054360 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkn4r\" (UniqueName: \"kubernetes.io/projected/b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64-kube-api-access-mkn4r\") on node \"crc\" DevicePath \"\"" Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.054388 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.054397 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.239782 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp" Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.239704 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp" event={"ID":"b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64","Type":"ContainerDied","Data":"968ee4a7246cf52a444754af8b6c91798123cf930beee17e3cdb69c1c4a5f1df"} Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.240108 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="968ee4a7246cf52a444754af8b6c91798123cf930beee17e3cdb69c1c4a5f1df" Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.241946 4624 generic.go:334] "Generic (PLEG): container finished" podID="dc06c149-97c4-4f52-aa4c-915deca6cbc4" containerID="36d673c643da19144ae7ed9d0b2a2a898864a969a0a357b5fb5692bc621d8e1e" exitCode=0 Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.241985 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7bg2" event={"ID":"dc06c149-97c4-4f52-aa4c-915deca6cbc4","Type":"ContainerDied","Data":"36d673c643da19144ae7ed9d0b2a2a898864a969a0a357b5fb5692bc621d8e1e"} Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.242007 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7bg2" event={"ID":"dc06c149-97c4-4f52-aa4c-915deca6cbc4","Type":"ContainerStarted","Data":"9386af9540fd9cfa6674082fd33e4a009bdc122e78c3fabf04c148737497274c"} Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.343346 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-n95kc"] Oct 08 14:57:52 crc kubenswrapper[4624]: E1008 14:57:52.344691 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.344714 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64" 
containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.345001 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.345638 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-n95kc" Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.350963 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v6sxt" Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.351182 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.351349 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.351533 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.364226 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-n95kc"] Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.463512 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/054c88fe-e5ae-4274-b497-a3e583b40594-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-n95kc\" (UID: \"054c88fe-e5ae-4274-b497-a3e583b40594\") " pod="openstack/ssh-known-hosts-edpm-deployment-n95kc" Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.464091 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpkbb\" (UniqueName: \"kubernetes.io/projected/054c88fe-e5ae-4274-b497-a3e583b40594-kube-api-access-cpkbb\") pod \"ssh-known-hosts-edpm-deployment-n95kc\" (UID: \"054c88fe-e5ae-4274-b497-a3e583b40594\") " pod="openstack/ssh-known-hosts-edpm-deployment-n95kc" Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.464120 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/054c88fe-e5ae-4274-b497-a3e583b40594-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-n95kc\" (UID: \"054c88fe-e5ae-4274-b497-a3e583b40594\") " pod="openstack/ssh-known-hosts-edpm-deployment-n95kc" Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.566397 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpkbb\" (UniqueName: \"kubernetes.io/projected/054c88fe-e5ae-4274-b497-a3e583b40594-kube-api-access-cpkbb\") pod \"ssh-known-hosts-edpm-deployment-n95kc\" (UID: \"054c88fe-e5ae-4274-b497-a3e583b40594\") " pod="openstack/ssh-known-hosts-edpm-deployment-n95kc" Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.566465 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/054c88fe-e5ae-4274-b497-a3e583b40594-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-n95kc\" (UID: \"054c88fe-e5ae-4274-b497-a3e583b40594\") " pod="openstack/ssh-known-hosts-edpm-deployment-n95kc" Oct 08 14:57:52 crc kubenswrapper[4624]: 
Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.566618 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/054c88fe-e5ae-4274-b497-a3e583b40594-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-n95kc\" (UID: \"054c88fe-e5ae-4274-b497-a3e583b40594\") " pod="openstack/ssh-known-hosts-edpm-deployment-n95kc"
Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.574986 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/054c88fe-e5ae-4274-b497-a3e583b40594-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-n95kc\" (UID: \"054c88fe-e5ae-4274-b497-a3e583b40594\") " pod="openstack/ssh-known-hosts-edpm-deployment-n95kc"
Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.575187 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/054c88fe-e5ae-4274-b497-a3e583b40594-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-n95kc\" (UID: \"054c88fe-e5ae-4274-b497-a3e583b40594\") " pod="openstack/ssh-known-hosts-edpm-deployment-n95kc"
Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.585044 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpkbb\" (UniqueName: \"kubernetes.io/projected/054c88fe-e5ae-4274-b497-a3e583b40594-kube-api-access-cpkbb\") pod \"ssh-known-hosts-edpm-deployment-n95kc\" (UID: \"054c88fe-e5ae-4274-b497-a3e583b40594\") " pod="openstack/ssh-known-hosts-edpm-deployment-n95kc"
Oct 08 14:57:52 crc kubenswrapper[4624]: I1008 14:57:52.667785 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-n95kc"
Oct 08 14:57:53 crc kubenswrapper[4624]: W1008 14:57:53.221555 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod054c88fe_e5ae_4274_b497_a3e583b40594.slice/crio-10a6072f7d13b688518419b745ec10725f3e8cd2f082469749d101131b386670 WatchSource:0}: Error finding container 10a6072f7d13b688518419b745ec10725f3e8cd2f082469749d101131b386670: Status 404 returned error can't find the container with id 10a6072f7d13b688518419b745ec10725f3e8cd2f082469749d101131b386670
Oct 08 14:57:53 crc kubenswrapper[4624]: I1008 14:57:53.222771 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-n95kc"]
Oct 08 14:57:53 crc kubenswrapper[4624]: I1008 14:57:53.252149 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-n95kc" event={"ID":"054c88fe-e5ae-4274-b497-a3e583b40594","Type":"ContainerStarted","Data":"10a6072f7d13b688518419b745ec10725f3e8cd2f082469749d101131b386670"}
Oct 08 14:57:53 crc kubenswrapper[4624]: I1008 14:57:53.254649 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7bg2" event={"ID":"dc06c149-97c4-4f52-aa4c-915deca6cbc4","Type":"ContainerStarted","Data":"bcb3438803973d6e9a22e4a1c4863795224eab3af8a6e0f87dc97a21425e9463"}
Oct 08 14:57:54 crc kubenswrapper[4624]: I1008 14:57:54.275746 4624 generic.go:334] "Generic (PLEG): container finished" podID="dc06c149-97c4-4f52-aa4c-915deca6cbc4" containerID="bcb3438803973d6e9a22e4a1c4863795224eab3af8a6e0f87dc97a21425e9463" exitCode=0
Oct 08 14:57:54 crc kubenswrapper[4624]: I1008 14:57:54.275859 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7bg2" event={"ID":"dc06c149-97c4-4f52-aa4c-915deca6cbc4","Type":"ContainerDied","Data":"bcb3438803973d6e9a22e4a1c4863795224eab3af8a6e0f87dc97a21425e9463"}
Oct 08 14:57:54 crc kubenswrapper[4624]: I1008 14:57:54.280886 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-n95kc" event={"ID":"054c88fe-e5ae-4274-b497-a3e583b40594","Type":"ContainerStarted","Data":"8301d31e95fded08f82b5cb1a74b96a605eb6b608c058b50147bc44d5df84731"}
Oct 08 14:57:54 crc kubenswrapper[4624]: I1008 14:57:54.322158 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-n95kc" podStartSLOduration=1.786953553 podStartE2EDuration="2.322134852s" podCreationTimestamp="2025-10-08 14:57:52 +0000 UTC" firstStartedPulling="2025-10-08 14:57:53.224154805 +0000 UTC m=+2098.375089882" lastFinishedPulling="2025-10-08 14:57:53.759336104 +0000 UTC m=+2098.910271181" observedRunningTime="2025-10-08 14:57:54.315154988 +0000 UTC m=+2099.466090065" watchObservedRunningTime="2025-10-08 14:57:54.322134852 +0000 UTC m=+2099.473069929"
Oct 08 14:57:55 crc kubenswrapper[4624]: I1008 14:57:55.303015 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7bg2" event={"ID":"dc06c149-97c4-4f52-aa4c-915deca6cbc4","Type":"ContainerStarted","Data":"4a61694a8c25a11d4d5ae240c7c071e3323dddadb49b60b1843e5cf88d7a5151"}
Oct 08 14:57:55 crc kubenswrapper[4624]: I1008 14:57:55.323730 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x7bg2" podStartSLOduration=2.905188951 podStartE2EDuration="5.323710751s" podCreationTimestamp="2025-10-08 14:57:50 +0000 UTC" firstStartedPulling="2025-10-08 14:57:52.243848675 +0000 UTC m=+2097.394783752" lastFinishedPulling="2025-10-08 14:57:54.662370475 +0000 UTC m=+2099.813305552" observedRunningTime="2025-10-08 14:57:55.320239829 +0000 UTC m=+2100.471174906" watchObservedRunningTime="2025-10-08 14:57:55.323710751 +0000 UTC m=+2100.474645838"
Oct 08 14:58:01 crc kubenswrapper[4624]: I1008 14:58:01.288257 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x7bg2"
Oct 08 14:58:01 crc kubenswrapper[4624]: I1008 14:58:01.288894 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x7bg2"
Oct 08 14:58:01 crc kubenswrapper[4624]: I1008 14:58:01.344000 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x7bg2"
Oct 08 14:58:01 crc kubenswrapper[4624]: I1008 14:58:01.355129 4624 generic.go:334] "Generic (PLEG): container finished" podID="054c88fe-e5ae-4274-b497-a3e583b40594" containerID="8301d31e95fded08f82b5cb1a74b96a605eb6b608c058b50147bc44d5df84731" exitCode=0
Oct 08 14:58:01 crc kubenswrapper[4624]: I1008 14:58:01.356011 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-n95kc" event={"ID":"054c88fe-e5ae-4274-b497-a3e583b40594","Type":"ContainerDied","Data":"8301d31e95fded08f82b5cb1a74b96a605eb6b608c058b50147bc44d5df84731"}
Oct 08 14:58:01 crc kubenswrapper[4624]: I1008 14:58:01.413713 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x7bg2"
DELETE" source="api" pods=["openshift-marketplace/community-operators-x7bg2"] Oct 08 14:58:02 crc kubenswrapper[4624]: I1008 14:58:02.797981 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-n95kc" Oct 08 14:58:02 crc kubenswrapper[4624]: I1008 14:58:02.887113 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpkbb\" (UniqueName: \"kubernetes.io/projected/054c88fe-e5ae-4274-b497-a3e583b40594-kube-api-access-cpkbb\") pod \"054c88fe-e5ae-4274-b497-a3e583b40594\" (UID: \"054c88fe-e5ae-4274-b497-a3e583b40594\") " Oct 08 14:58:02 crc kubenswrapper[4624]: I1008 14:58:02.887479 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/054c88fe-e5ae-4274-b497-a3e583b40594-ssh-key-openstack-edpm-ipam\") pod \"054c88fe-e5ae-4274-b497-a3e583b40594\" (UID: \"054c88fe-e5ae-4274-b497-a3e583b40594\") " Oct 08 14:58:02 crc kubenswrapper[4624]: I1008 14:58:02.887710 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/054c88fe-e5ae-4274-b497-a3e583b40594-inventory-0\") pod \"054c88fe-e5ae-4274-b497-a3e583b40594\" (UID: \"054c88fe-e5ae-4274-b497-a3e583b40594\") " Oct 08 14:58:02 crc kubenswrapper[4624]: I1008 14:58:02.893116 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/054c88fe-e5ae-4274-b497-a3e583b40594-kube-api-access-cpkbb" (OuterVolumeSpecName: "kube-api-access-cpkbb") pod "054c88fe-e5ae-4274-b497-a3e583b40594" (UID: "054c88fe-e5ae-4274-b497-a3e583b40594"). InnerVolumeSpecName "kube-api-access-cpkbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:58:02 crc kubenswrapper[4624]: I1008 14:58:02.916337 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/054c88fe-e5ae-4274-b497-a3e583b40594-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "054c88fe-e5ae-4274-b497-a3e583b40594" (UID: "054c88fe-e5ae-4274-b497-a3e583b40594"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:58:02 crc kubenswrapper[4624]: I1008 14:58:02.920293 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/054c88fe-e5ae-4274-b497-a3e583b40594-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "054c88fe-e5ae-4274-b497-a3e583b40594" (UID: "054c88fe-e5ae-4274-b497-a3e583b40594"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:58:02 crc kubenswrapper[4624]: I1008 14:58:02.990034 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpkbb\" (UniqueName: \"kubernetes.io/projected/054c88fe-e5ae-4274-b497-a3e583b40594-kube-api-access-cpkbb\") on node \"crc\" DevicePath \"\"" Oct 08 14:58:02 crc kubenswrapper[4624]: I1008 14:58:02.990065 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/054c88fe-e5ae-4274-b497-a3e583b40594-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 08 14:58:02 crc kubenswrapper[4624]: I1008 14:58:02.990077 4624 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/054c88fe-e5ae-4274-b497-a3e583b40594-inventory-0\") on node \"crc\" DevicePath \"\"" Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.372125 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-n95kc" event={"ID":"054c88fe-e5ae-4274-b497-a3e583b40594","Type":"ContainerDied","Data":"10a6072f7d13b688518419b745ec10725f3e8cd2f082469749d101131b386670"} Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.372188 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10a6072f7d13b688518419b745ec10725f3e8cd2f082469749d101131b386670" Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.372155 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-n95kc" Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.372308 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x7bg2" podUID="dc06c149-97c4-4f52-aa4c-915deca6cbc4" containerName="registry-server" containerID="cri-o://4a61694a8c25a11d4d5ae240c7c071e3323dddadb49b60b1843e5cf88d7a5151" gracePeriod=2 Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.497215 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-hcl7k"] Oct 08 14:58:03 crc kubenswrapper[4624]: E1008 14:58:03.498010 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="054c88fe-e5ae-4274-b497-a3e583b40594" containerName="ssh-known-hosts-edpm-deployment" Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.498043 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="054c88fe-e5ae-4274-b497-a3e583b40594" containerName="ssh-known-hosts-edpm-deployment" Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.498309 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="054c88fe-e5ae-4274-b497-a3e583b40594" containerName="ssh-known-hosts-edpm-deployment" Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.501304 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hcl7k" Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.504124 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.504163 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v6sxt" Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.504509 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.505388 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.533513 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-hcl7k"] Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.600723 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8ml7\" (UniqueName: \"kubernetes.io/projected/e856333d-f205-4d9a-881c-5b3364b5ddb5-kube-api-access-m8ml7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hcl7k\" (UID: \"e856333d-f205-4d9a-881c-5b3364b5ddb5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hcl7k" Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.600841 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e856333d-f205-4d9a-881c-5b3364b5ddb5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hcl7k\" (UID: \"e856333d-f205-4d9a-881c-5b3364b5ddb5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hcl7k" Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.600910 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e856333d-f205-4d9a-881c-5b3364b5ddb5-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hcl7k\" (UID: \"e856333d-f205-4d9a-881c-5b3364b5ddb5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hcl7k" Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.702377 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8ml7\" (UniqueName: \"kubernetes.io/projected/e856333d-f205-4d9a-881c-5b3364b5ddb5-kube-api-access-m8ml7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hcl7k\" (UID: \"e856333d-f205-4d9a-881c-5b3364b5ddb5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hcl7k" Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.702540 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e856333d-f205-4d9a-881c-5b3364b5ddb5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hcl7k\" (UID: \"e856333d-f205-4d9a-881c-5b3364b5ddb5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hcl7k" Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.702616 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e856333d-f205-4d9a-881c-5b3364b5ddb5-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hcl7k\" (UID: \"e856333d-f205-4d9a-881c-5b3364b5ddb5\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hcl7k" Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.709652 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e856333d-f205-4d9a-881c-5b3364b5ddb5-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hcl7k\" (UID: \"e856333d-f205-4d9a-881c-5b3364b5ddb5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hcl7k" Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.717369 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e856333d-f205-4d9a-881c-5b3364b5ddb5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hcl7k\" (UID: \"e856333d-f205-4d9a-881c-5b3364b5ddb5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hcl7k" Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.728937 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8ml7\" (UniqueName: \"kubernetes.io/projected/e856333d-f205-4d9a-881c-5b3364b5ddb5-kube-api-access-m8ml7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hcl7k\" (UID: \"e856333d-f205-4d9a-881c-5b3364b5ddb5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hcl7k" Oct 08 14:58:03 crc kubenswrapper[4624]: I1008 14:58:03.867351 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hcl7k" Oct 08 14:58:04 crc kubenswrapper[4624]: I1008 14:58:04.384783 4624 generic.go:334] "Generic (PLEG): container finished" podID="dc06c149-97c4-4f52-aa4c-915deca6cbc4" containerID="4a61694a8c25a11d4d5ae240c7c071e3323dddadb49b60b1843e5cf88d7a5151" exitCode=0 Oct 08 14:58:04 crc kubenswrapper[4624]: I1008 14:58:04.385147 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7bg2" event={"ID":"dc06c149-97c4-4f52-aa4c-915deca6cbc4","Type":"ContainerDied","Data":"4a61694a8c25a11d4d5ae240c7c071e3323dddadb49b60b1843e5cf88d7a5151"} Oct 08 14:58:04 crc kubenswrapper[4624]: I1008 14:58:04.585605 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-hcl7k"] Oct 08 14:58:04 crc kubenswrapper[4624]: I1008 14:58:04.963647 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x7bg2" Oct 08 14:58:05 crc kubenswrapper[4624]: I1008 14:58:05.030699 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc06c149-97c4-4f52-aa4c-915deca6cbc4-utilities\") pod \"dc06c149-97c4-4f52-aa4c-915deca6cbc4\" (UID: \"dc06c149-97c4-4f52-aa4c-915deca6cbc4\") " Oct 08 14:58:05 crc kubenswrapper[4624]: I1008 14:58:05.030846 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr6c8\" (UniqueName: \"kubernetes.io/projected/dc06c149-97c4-4f52-aa4c-915deca6cbc4-kube-api-access-cr6c8\") pod \"dc06c149-97c4-4f52-aa4c-915deca6cbc4\" (UID: \"dc06c149-97c4-4f52-aa4c-915deca6cbc4\") " Oct 08 14:58:05 crc kubenswrapper[4624]: I1008 14:58:05.030930 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc06c149-97c4-4f52-aa4c-915deca6cbc4-catalog-content\") pod \"dc06c149-97c4-4f52-aa4c-915deca6cbc4\" (UID: \"dc06c149-97c4-4f52-aa4c-915deca6cbc4\") " Oct 08 14:58:05 crc kubenswrapper[4624]: I1008 14:58:05.031727 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc06c149-97c4-4f52-aa4c-915deca6cbc4-utilities" (OuterVolumeSpecName: "utilities") pod "dc06c149-97c4-4f52-aa4c-915deca6cbc4" (UID: "dc06c149-97c4-4f52-aa4c-915deca6cbc4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:58:05 crc kubenswrapper[4624]: I1008 14:58:05.041805 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc06c149-97c4-4f52-aa4c-915deca6cbc4-kube-api-access-cr6c8" (OuterVolumeSpecName: "kube-api-access-cr6c8") pod "dc06c149-97c4-4f52-aa4c-915deca6cbc4" (UID: "dc06c149-97c4-4f52-aa4c-915deca6cbc4"). InnerVolumeSpecName "kube-api-access-cr6c8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:58:05 crc kubenswrapper[4624]: I1008 14:58:05.087681 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc06c149-97c4-4f52-aa4c-915deca6cbc4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc06c149-97c4-4f52-aa4c-915deca6cbc4" (UID: "dc06c149-97c4-4f52-aa4c-915deca6cbc4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:58:05 crc kubenswrapper[4624]: I1008 14:58:05.133985 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc06c149-97c4-4f52-aa4c-915deca6cbc4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:58:05 crc kubenswrapper[4624]: I1008 14:58:05.134018 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc06c149-97c4-4f52-aa4c-915deca6cbc4-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:58:05 crc kubenswrapper[4624]: I1008 14:58:05.134050 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr6c8\" (UniqueName: \"kubernetes.io/projected/dc06c149-97c4-4f52-aa4c-915deca6cbc4-kube-api-access-cr6c8\") on node \"crc\" DevicePath \"\"" Oct 08 14:58:05 crc kubenswrapper[4624]: I1008 14:58:05.398223 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hcl7k" event={"ID":"e856333d-f205-4d9a-881c-5b3364b5ddb5","Type":"ContainerStarted","Data":"8628dc872f099be3521a07cf9665d38bffe7a7504a2b864c138fcd287fdf3a75"} Oct 08 14:58:05 crc kubenswrapper[4624]: I1008 14:58:05.398284 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hcl7k" event={"ID":"e856333d-f205-4d9a-881c-5b3364b5ddb5","Type":"ContainerStarted","Data":"ef5a948389143c1cb66630f7c2c6b85ef1e21e02dc4843718c8848601913d9c9"} Oct 08 14:58:05 crc kubenswrapper[4624]: I1008 14:58:05.402844 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7bg2" event={"ID":"dc06c149-97c4-4f52-aa4c-915deca6cbc4","Type":"ContainerDied","Data":"9386af9540fd9cfa6674082fd33e4a009bdc122e78c3fabf04c148737497274c"} Oct 08 14:58:05 crc kubenswrapper[4624]: I1008 14:58:05.402914 4624 scope.go:117] "RemoveContainer" containerID="4a61694a8c25a11d4d5ae240c7c071e3323dddadb49b60b1843e5cf88d7a5151" Oct 08 14:58:05 crc kubenswrapper[4624]: I1008 14:58:05.402954 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x7bg2" Oct 08 14:58:05 crc kubenswrapper[4624]: I1008 14:58:05.429831 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hcl7k" podStartSLOduration=1.889411998 podStartE2EDuration="2.42981173s" podCreationTimestamp="2025-10-08 14:58:03 +0000 UTC" firstStartedPulling="2025-10-08 14:58:04.59568545 +0000 UTC m=+2109.746620527" lastFinishedPulling="2025-10-08 14:58:05.136085182 +0000 UTC m=+2110.287020259" observedRunningTime="2025-10-08 14:58:05.41621563 +0000 UTC m=+2110.567150727" watchObservedRunningTime="2025-10-08 14:58:05.42981173 +0000 UTC m=+2110.580746807" Oct 08 14:58:05 crc kubenswrapper[4624]: I1008 14:58:05.432105 4624 scope.go:117] "RemoveContainer" containerID="bcb3438803973d6e9a22e4a1c4863795224eab3af8a6e0f87dc97a21425e9463" Oct 08 14:58:05 crc kubenswrapper[4624]: I1008 14:58:05.448344 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x7bg2"] Oct 08 14:58:05 crc kubenswrapper[4624]: I1008 14:58:05.456272 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x7bg2"] Oct 08 14:58:05 crc kubenswrapper[4624]: I1008 14:58:05.467937 4624 scope.go:117] "RemoveContainer" containerID="36d673c643da19144ae7ed9d0b2a2a898864a969a0a357b5fb5692bc621d8e1e" Oct 08 14:58:05 crc kubenswrapper[4624]: I1008 14:58:05.495659 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc06c149-97c4-4f52-aa4c-915deca6cbc4" path="/var/lib/kubelet/pods/dc06c149-97c4-4f52-aa4c-915deca6cbc4/volumes" Oct 08 14:58:07 crc kubenswrapper[4624]: I1008 14:58:07.596157 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4lktp"] Oct 08 14:58:07 crc kubenswrapper[4624]: E1008 14:58:07.596881 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc06c149-97c4-4f52-aa4c-915deca6cbc4" containerName="registry-server" Oct 08 14:58:07 crc kubenswrapper[4624]: I1008 14:58:07.596894 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc06c149-97c4-4f52-aa4c-915deca6cbc4" containerName="registry-server" Oct 08 14:58:07 crc kubenswrapper[4624]: E1008 14:58:07.596915 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc06c149-97c4-4f52-aa4c-915deca6cbc4" containerName="extract-utilities" Oct 08 14:58:07 crc kubenswrapper[4624]: I1008 14:58:07.596922 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc06c149-97c4-4f52-aa4c-915deca6cbc4" containerName="extract-utilities" Oct 08 14:58:07 crc kubenswrapper[4624]: E1008 14:58:07.596932 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc06c149-97c4-4f52-aa4c-915deca6cbc4" containerName="extract-content" Oct 08 14:58:07 crc kubenswrapper[4624]: I1008 14:58:07.596939 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc06c149-97c4-4f52-aa4c-915deca6cbc4" containerName="extract-content" Oct 08 14:58:07 crc kubenswrapper[4624]: I1008 14:58:07.597150 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc06c149-97c4-4f52-aa4c-915deca6cbc4" containerName="registry-server" Oct 08 14:58:07 crc kubenswrapper[4624]: I1008 14:58:07.598987 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4lktp" Oct 08 14:58:07 crc kubenswrapper[4624]: I1008 14:58:07.616021 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4lktp"] Oct 08 14:58:07 crc kubenswrapper[4624]: I1008 14:58:07.687397 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c0d703-d3e5-4e23-9963-4b79ee6ad080-catalog-content\") pod \"redhat-operators-4lktp\" (UID: \"42c0d703-d3e5-4e23-9963-4b79ee6ad080\") " pod="openshift-marketplace/redhat-operators-4lktp" Oct 08 14:58:07 crc kubenswrapper[4624]: I1008 14:58:07.687925 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c0d703-d3e5-4e23-9963-4b79ee6ad080-utilities\") pod \"redhat-operators-4lktp\" (UID: \"42c0d703-d3e5-4e23-9963-4b79ee6ad080\") " pod="openshift-marketplace/redhat-operators-4lktp" Oct 08 14:58:07 crc kubenswrapper[4624]: I1008 14:58:07.688076 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ndh4\" (UniqueName: \"kubernetes.io/projected/42c0d703-d3e5-4e23-9963-4b79ee6ad080-kube-api-access-2ndh4\") pod \"redhat-operators-4lktp\" (UID: \"42c0d703-d3e5-4e23-9963-4b79ee6ad080\") " pod="openshift-marketplace/redhat-operators-4lktp" Oct 08 14:58:07 crc kubenswrapper[4624]: I1008 14:58:07.789499 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c0d703-d3e5-4e23-9963-4b79ee6ad080-utilities\") pod \"redhat-operators-4lktp\" (UID: \"42c0d703-d3e5-4e23-9963-4b79ee6ad080\") " pod="openshift-marketplace/redhat-operators-4lktp" Oct 08 14:58:07 crc kubenswrapper[4624]: I1008 14:58:07.789589 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ndh4\" (UniqueName: \"kubernetes.io/projected/42c0d703-d3e5-4e23-9963-4b79ee6ad080-kube-api-access-2ndh4\") pod \"redhat-operators-4lktp\" (UID: \"42c0d703-d3e5-4e23-9963-4b79ee6ad080\") " pod="openshift-marketplace/redhat-operators-4lktp" Oct 08 14:58:07 crc kubenswrapper[4624]: I1008 14:58:07.789663 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c0d703-d3e5-4e23-9963-4b79ee6ad080-catalog-content\") pod \"redhat-operators-4lktp\" (UID: \"42c0d703-d3e5-4e23-9963-4b79ee6ad080\") " pod="openshift-marketplace/redhat-operators-4lktp" Oct 08 14:58:07 crc kubenswrapper[4624]: I1008 14:58:07.790287 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c0d703-d3e5-4e23-9963-4b79ee6ad080-catalog-content\") pod \"redhat-operators-4lktp\" (UID: \"42c0d703-d3e5-4e23-9963-4b79ee6ad080\") " pod="openshift-marketplace/redhat-operators-4lktp" Oct 08 14:58:07 crc kubenswrapper[4624]: I1008 14:58:07.790366 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c0d703-d3e5-4e23-9963-4b79ee6ad080-utilities\") pod \"redhat-operators-4lktp\" (UID: \"42c0d703-d3e5-4e23-9963-4b79ee6ad080\") " pod="openshift-marketplace/redhat-operators-4lktp" Oct 08 14:58:07 crc kubenswrapper[4624]: I1008 14:58:07.810071 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2ndh4\" (UniqueName: \"kubernetes.io/projected/42c0d703-d3e5-4e23-9963-4b79ee6ad080-kube-api-access-2ndh4\") pod \"redhat-operators-4lktp\" (UID: \"42c0d703-d3e5-4e23-9963-4b79ee6ad080\") " pod="openshift-marketplace/redhat-operators-4lktp" Oct 08 14:58:07 crc kubenswrapper[4624]: I1008 14:58:07.919777 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4lktp" Oct 08 14:58:08 crc kubenswrapper[4624]: I1008 14:58:08.493019 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4lktp"] Oct 08 14:58:09 crc kubenswrapper[4624]: I1008 14:58:09.440300 4624 generic.go:334] "Generic (PLEG): container finished" podID="42c0d703-d3e5-4e23-9963-4b79ee6ad080" containerID="3f90b5127598f6292bf0d284ee94bf86b83d166c04608085320b4865eb4d5ebe" exitCode=0 Oct 08 14:58:09 crc kubenswrapper[4624]: I1008 14:58:09.440423 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4lktp" event={"ID":"42c0d703-d3e5-4e23-9963-4b79ee6ad080","Type":"ContainerDied","Data":"3f90b5127598f6292bf0d284ee94bf86b83d166c04608085320b4865eb4d5ebe"} Oct 08 14:58:09 crc kubenswrapper[4624]: I1008 14:58:09.440943 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4lktp" event={"ID":"42c0d703-d3e5-4e23-9963-4b79ee6ad080","Type":"ContainerStarted","Data":"177b350d6da9d2ddedfa11cd06a9f8e87ff9024127860c947445595d2e7d50a7"} Oct 08 14:58:11 crc kubenswrapper[4624]: I1008 14:58:11.461541 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4lktp" event={"ID":"42c0d703-d3e5-4e23-9963-4b79ee6ad080","Type":"ContainerStarted","Data":"2143e1e7df031aeaf34ed779808576887d464e23f7d19a1d8d9fe6b80805565a"} Oct 08 14:58:15 crc kubenswrapper[4624]: I1008 14:58:15.500978 4624 generic.go:334] "Generic (PLEG): container finished" podID="e856333d-f205-4d9a-881c-5b3364b5ddb5" containerID="8628dc872f099be3521a07cf9665d38bffe7a7504a2b864c138fcd287fdf3a75" exitCode=0 Oct 08 14:58:15 crc kubenswrapper[4624]: I1008 14:58:15.501097 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hcl7k" event={"ID":"e856333d-f205-4d9a-881c-5b3364b5ddb5","Type":"ContainerDied","Data":"8628dc872f099be3521a07cf9665d38bffe7a7504a2b864c138fcd287fdf3a75"} Oct 08 14:58:15 crc kubenswrapper[4624]: I1008 14:58:15.504257 4624 generic.go:334] "Generic (PLEG): container finished" podID="42c0d703-d3e5-4e23-9963-4b79ee6ad080" containerID="2143e1e7df031aeaf34ed779808576887d464e23f7d19a1d8d9fe6b80805565a" exitCode=0 Oct 08 14:58:15 crc kubenswrapper[4624]: I1008 14:58:15.504354 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4lktp" event={"ID":"42c0d703-d3e5-4e23-9963-4b79ee6ad080","Type":"ContainerDied","Data":"2143e1e7df031aeaf34ed779808576887d464e23f7d19a1d8d9fe6b80805565a"} Oct 08 14:58:16 crc kubenswrapper[4624]: I1008 14:58:16.531531 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4lktp" event={"ID":"42c0d703-d3e5-4e23-9963-4b79ee6ad080","Type":"ContainerStarted","Data":"66ee5eceb9214230d2c6f1bf90bd4e81125e04d366001782e26f709dca78fbb8"} Oct 08 14:58:16 crc kubenswrapper[4624]: I1008 14:58:16.568294 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4lktp" podStartSLOduration=2.961793673 
podStartE2EDuration="9.568274706s" podCreationTimestamp="2025-10-08 14:58:07 +0000 UTC" firstStartedPulling="2025-10-08 14:58:09.444966396 +0000 UTC m=+2114.595901483" lastFinishedPulling="2025-10-08 14:58:16.051447439 +0000 UTC m=+2121.202382516" observedRunningTime="2025-10-08 14:58:16.554743368 +0000 UTC m=+2121.705678465" watchObservedRunningTime="2025-10-08 14:58:16.568274706 +0000 UTC m=+2121.719209783" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.019340 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hcl7k" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.116576 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e856333d-f205-4d9a-881c-5b3364b5ddb5-inventory\") pod \"e856333d-f205-4d9a-881c-5b3364b5ddb5\" (UID: \"e856333d-f205-4d9a-881c-5b3364b5ddb5\") " Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.116726 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e856333d-f205-4d9a-881c-5b3364b5ddb5-ssh-key\") pod \"e856333d-f205-4d9a-881c-5b3364b5ddb5\" (UID: \"e856333d-f205-4d9a-881c-5b3364b5ddb5\") " Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.116754 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8ml7\" (UniqueName: \"kubernetes.io/projected/e856333d-f205-4d9a-881c-5b3364b5ddb5-kube-api-access-m8ml7\") pod \"e856333d-f205-4d9a-881c-5b3364b5ddb5\" (UID: \"e856333d-f205-4d9a-881c-5b3364b5ddb5\") " Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.123199 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e856333d-f205-4d9a-881c-5b3364b5ddb5-kube-api-access-m8ml7" (OuterVolumeSpecName: "kube-api-access-m8ml7") pod "e856333d-f205-4d9a-881c-5b3364b5ddb5" (UID: "e856333d-f205-4d9a-881c-5b3364b5ddb5"). InnerVolumeSpecName "kube-api-access-m8ml7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.147216 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e856333d-f205-4d9a-881c-5b3364b5ddb5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e856333d-f205-4d9a-881c-5b3364b5ddb5" (UID: "e856333d-f205-4d9a-881c-5b3364b5ddb5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.148331 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e856333d-f205-4d9a-881c-5b3364b5ddb5-inventory" (OuterVolumeSpecName: "inventory") pod "e856333d-f205-4d9a-881c-5b3364b5ddb5" (UID: "e856333d-f205-4d9a-881c-5b3364b5ddb5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.219156 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e856333d-f205-4d9a-881c-5b3364b5ddb5-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.219192 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e856333d-f205-4d9a-881c-5b3364b5ddb5-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.219203 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8ml7\" (UniqueName: \"kubernetes.io/projected/e856333d-f205-4d9a-881c-5b3364b5ddb5-kube-api-access-m8ml7\") on node \"crc\" DevicePath \"\"" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.543352 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hcl7k" event={"ID":"e856333d-f205-4d9a-881c-5b3364b5ddb5","Type":"ContainerDied","Data":"ef5a948389143c1cb66630f7c2c6b85ef1e21e02dc4843718c8848601913d9c9"} Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.544546 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef5a948389143c1cb66630f7c2c6b85ef1e21e02dc4843718c8848601913d9c9" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.543376 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hcl7k" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.652924 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz"] Oct 08 14:58:17 crc kubenswrapper[4624]: E1008 14:58:17.653414 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e856333d-f205-4d9a-881c-5b3364b5ddb5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.653436 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e856333d-f205-4d9a-881c-5b3364b5ddb5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.653609 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e856333d-f205-4d9a-881c-5b3364b5ddb5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.654389 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.660262 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v6sxt" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.660352 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.660666 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.662706 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.684886 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz"] Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.833098 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a215cb1-d735-42d4-9cc9-698fa1a61508-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz\" (UID: \"9a215cb1-d735-42d4-9cc9-698fa1a61508\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.833172 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a215cb1-d735-42d4-9cc9-698fa1a61508-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz\" (UID: \"9a215cb1-d735-42d4-9cc9-698fa1a61508\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.833250 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrfkp\" (UniqueName: \"kubernetes.io/projected/9a215cb1-d735-42d4-9cc9-698fa1a61508-kube-api-access-vrfkp\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz\" (UID: \"9a215cb1-d735-42d4-9cc9-698fa1a61508\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.919968 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4lktp" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.920593 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4lktp" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.935147 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a215cb1-d735-42d4-9cc9-698fa1a61508-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz\" (UID: \"9a215cb1-d735-42d4-9cc9-698fa1a61508\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.935222 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a215cb1-d735-42d4-9cc9-698fa1a61508-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz\" (UID: \"9a215cb1-d735-42d4-9cc9-698fa1a61508\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.935306 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrfkp\" (UniqueName: \"kubernetes.io/projected/9a215cb1-d735-42d4-9cc9-698fa1a61508-kube-api-access-vrfkp\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz\" (UID: \"9a215cb1-d735-42d4-9cc9-698fa1a61508\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.939985 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a215cb1-d735-42d4-9cc9-698fa1a61508-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz\" (UID: \"9a215cb1-d735-42d4-9cc9-698fa1a61508\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.954811 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a215cb1-d735-42d4-9cc9-698fa1a61508-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz\" (UID: \"9a215cb1-d735-42d4-9cc9-698fa1a61508\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.955750 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrfkp\" (UniqueName: \"kubernetes.io/projected/9a215cb1-d735-42d4-9cc9-698fa1a61508-kube-api-access-vrfkp\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz\" (UID: \"9a215cb1-d735-42d4-9cc9-698fa1a61508\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz" Oct 08 14:58:17 crc kubenswrapper[4624]: I1008 14:58:17.979912 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz" Oct 08 14:58:18 crc kubenswrapper[4624]: W1008 14:58:18.593294 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a215cb1_d735_42d4_9cc9_698fa1a61508.slice/crio-db5a8d0ea9acca3a4f9f39b0caaa7870a19bc197a05a362b5742da5c77bfc6a9 WatchSource:0}: Error finding container db5a8d0ea9acca3a4f9f39b0caaa7870a19bc197a05a362b5742da5c77bfc6a9: Status 404 returned error can't find the container with id db5a8d0ea9acca3a4f9f39b0caaa7870a19bc197a05a362b5742da5c77bfc6a9 Oct 08 14:58:18 crc kubenswrapper[4624]: I1008 14:58:18.601507 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz"] Oct 08 14:58:18 crc kubenswrapper[4624]: I1008 14:58:18.971885 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4lktp" podUID="42c0d703-d3e5-4e23-9963-4b79ee6ad080" containerName="registry-server" probeResult="failure" output=< Oct 08 14:58:18 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 14:58:18 crc kubenswrapper[4624]: > Oct 08 14:58:19 crc kubenswrapper[4624]: I1008 14:58:19.583845 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz" event={"ID":"9a215cb1-d735-42d4-9cc9-698fa1a61508","Type":"ContainerStarted","Data":"db5a8d0ea9acca3a4f9f39b0caaa7870a19bc197a05a362b5742da5c77bfc6a9"} Oct 08 14:58:20 crc kubenswrapper[4624]: I1008 14:58:20.597719 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz" event={"ID":"9a215cb1-d735-42d4-9cc9-698fa1a61508","Type":"ContainerStarted","Data":"d33989a7ad46b7b6d17c00724ec988a3ab6b256f999e8506c844ee99c6f44c4e"} Oct 08 14:58:20 crc kubenswrapper[4624]: I1008 14:58:20.622215 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz" podStartSLOduration=2.703327001 podStartE2EDuration="3.622193825s" podCreationTimestamp="2025-10-08 14:58:17 +0000 UTC" firstStartedPulling="2025-10-08 14:58:18.59710136 +0000 UTC m=+2123.748036437" lastFinishedPulling="2025-10-08 14:58:19.515968184 +0000 UTC m=+2124.666903261" observedRunningTime="2025-10-08 14:58:20.616846739 +0000 UTC m=+2125.767781816" watchObservedRunningTime="2025-10-08 14:58:20.622193825 +0000 UTC m=+2125.773128902" Oct 08 14:58:28 crc kubenswrapper[4624]: I1008 14:58:28.965099 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4lktp" podUID="42c0d703-d3e5-4e23-9963-4b79ee6ad080" containerName="registry-server" probeResult="failure" output=< Oct 08 14:58:28 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 14:58:28 crc kubenswrapper[4624]: > Oct 08 14:58:29 crc kubenswrapper[4624]: I1008 14:58:29.686937 4624 generic.go:334] "Generic (PLEG): container finished" podID="9a215cb1-d735-42d4-9cc9-698fa1a61508" containerID="d33989a7ad46b7b6d17c00724ec988a3ab6b256f999e8506c844ee99c6f44c4e" exitCode=0 Oct 08 14:58:29 crc kubenswrapper[4624]: I1008 14:58:29.687020 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz" 
event={"ID":"9a215cb1-d735-42d4-9cc9-698fa1a61508","Type":"ContainerDied","Data":"d33989a7ad46b7b6d17c00724ec988a3ab6b256f999e8506c844ee99c6f44c4e"} Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.135539 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.256588 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a215cb1-d735-42d4-9cc9-698fa1a61508-ssh-key\") pod \"9a215cb1-d735-42d4-9cc9-698fa1a61508\" (UID: \"9a215cb1-d735-42d4-9cc9-698fa1a61508\") " Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.256818 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrfkp\" (UniqueName: \"kubernetes.io/projected/9a215cb1-d735-42d4-9cc9-698fa1a61508-kube-api-access-vrfkp\") pod \"9a215cb1-d735-42d4-9cc9-698fa1a61508\" (UID: \"9a215cb1-d735-42d4-9cc9-698fa1a61508\") " Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.256918 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a215cb1-d735-42d4-9cc9-698fa1a61508-inventory\") pod \"9a215cb1-d735-42d4-9cc9-698fa1a61508\" (UID: \"9a215cb1-d735-42d4-9cc9-698fa1a61508\") " Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.265886 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a215cb1-d735-42d4-9cc9-698fa1a61508-kube-api-access-vrfkp" (OuterVolumeSpecName: "kube-api-access-vrfkp") pod "9a215cb1-d735-42d4-9cc9-698fa1a61508" (UID: "9a215cb1-d735-42d4-9cc9-698fa1a61508"). InnerVolumeSpecName "kube-api-access-vrfkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.288334 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a215cb1-d735-42d4-9cc9-698fa1a61508-inventory" (OuterVolumeSpecName: "inventory") pod "9a215cb1-d735-42d4-9cc9-698fa1a61508" (UID: "9a215cb1-d735-42d4-9cc9-698fa1a61508"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.291733 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a215cb1-d735-42d4-9cc9-698fa1a61508-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9a215cb1-d735-42d4-9cc9-698fa1a61508" (UID: "9a215cb1-d735-42d4-9cc9-698fa1a61508"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.359726 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrfkp\" (UniqueName: \"kubernetes.io/projected/9a215cb1-d735-42d4-9cc9-698fa1a61508-kube-api-access-vrfkp\") on node \"crc\" DevicePath \"\"" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.359775 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a215cb1-d735-42d4-9cc9-698fa1a61508-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.359784 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a215cb1-d735-42d4-9cc9-698fa1a61508-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.714061 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz" event={"ID":"9a215cb1-d735-42d4-9cc9-698fa1a61508","Type":"ContainerDied","Data":"db5a8d0ea9acca3a4f9f39b0caaa7870a19bc197a05a362b5742da5c77bfc6a9"} Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.714401 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db5a8d0ea9acca3a4f9f39b0caaa7870a19bc197a05a362b5742da5c77bfc6a9" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.714121 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.809245 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p"] Oct 08 14:58:31 crc kubenswrapper[4624]: E1008 14:58:31.809685 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a215cb1-d735-42d4-9cc9-698fa1a61508" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.809703 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a215cb1-d735-42d4-9cc9-698fa1a61508" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.809930 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a215cb1-d735-42d4-9cc9-698fa1a61508" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.810710 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.821883 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.822111 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.824409 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.829985 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.830546 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.831156 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.831951 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.832703 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v6sxt" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.860601 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p"] Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.987065 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2h6g\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-kube-api-access-k2h6g\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.987132 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.987157 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.987211 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.987248 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.987286 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.987303 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.987343 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.987369 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.987401 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.987441 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-libvirt-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.987459 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.987483 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:31 crc kubenswrapper[4624]: I1008 14:58:31.987512 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.089516 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.089592 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.089708 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.089765 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.089825 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2h6g\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-kube-api-access-k2h6g\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.089871 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.089899 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.089960 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.090010 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.090056 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.090080 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.090124 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: 
\"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.090159 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.090207 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.094972 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.095529 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.096890 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.096763 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.098555 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.099028 4624 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.100661 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.103280 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.104046 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.105236 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.105908 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.106010 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.113291 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2h6g\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-kube-api-access-k2h6g\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 
08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.114371 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.129533 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.698765 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p"] Oct 08 14:58:32 crc kubenswrapper[4624]: W1008 14:58:32.704027 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35e4f3c9_b784_4780_bf62_c44be287ffef.slice/crio-f309e9e46685ec021eb27e69b48d72605e56c9f12ed30fc52f94abf63b2cdc9d WatchSource:0}: Error finding container f309e9e46685ec021eb27e69b48d72605e56c9f12ed30fc52f94abf63b2cdc9d: Status 404 returned error can't find the container with id f309e9e46685ec021eb27e69b48d72605e56c9f12ed30fc52f94abf63b2cdc9d Oct 08 14:58:32 crc kubenswrapper[4624]: I1008 14:58:32.725458 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" event={"ID":"35e4f3c9-b784-4780-bf62-c44be287ffef","Type":"ContainerStarted","Data":"f309e9e46685ec021eb27e69b48d72605e56c9f12ed30fc52f94abf63b2cdc9d"} Oct 08 14:58:33 crc kubenswrapper[4624]: I1008 14:58:33.742057 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" event={"ID":"35e4f3c9-b784-4780-bf62-c44be287ffef","Type":"ContainerStarted","Data":"2bdcf9d42a16357c8c5efc1dfe825a70e984f8fb7ba7296dca664eff55353460"} Oct 08 14:58:34 crc kubenswrapper[4624]: I1008 14:58:34.779590 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" podStartSLOduration=3.129551183 podStartE2EDuration="3.779558042s" podCreationTimestamp="2025-10-08 14:58:31 +0000 UTC" firstStartedPulling="2025-10-08 14:58:32.705876523 +0000 UTC m=+2137.856811600" lastFinishedPulling="2025-10-08 14:58:33.355883392 +0000 UTC m=+2138.506818459" observedRunningTime="2025-10-08 14:58:34.774818219 +0000 UTC m=+2139.925753286" watchObservedRunningTime="2025-10-08 14:58:34.779558042 +0000 UTC m=+2139.930493119" Oct 08 14:58:38 crc kubenswrapper[4624]: I1008 14:58:38.969020 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4lktp" podUID="42c0d703-d3e5-4e23-9963-4b79ee6ad080" containerName="registry-server" probeResult="failure" output=< Oct 08 14:58:38 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 14:58:38 crc kubenswrapper[4624]: > Oct 08 14:58:47 crc kubenswrapper[4624]: I1008 14:58:47.969542 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4lktp" Oct 08 14:58:48 crc kubenswrapper[4624]: I1008 14:58:48.027229 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-4lktp" Oct 08 14:58:48 crc kubenswrapper[4624]: I1008 14:58:48.204773 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4lktp"] Oct 08 14:58:49 crc kubenswrapper[4624]: I1008 14:58:49.879851 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4lktp" podUID="42c0d703-d3e5-4e23-9963-4b79ee6ad080" containerName="registry-server" containerID="cri-o://66ee5eceb9214230d2c6f1bf90bd4e81125e04d366001782e26f709dca78fbb8" gracePeriod=2 Oct 08 14:58:50 crc kubenswrapper[4624]: I1008 14:58:50.412366 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4lktp" Oct 08 14:58:50 crc kubenswrapper[4624]: I1008 14:58:50.571685 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c0d703-d3e5-4e23-9963-4b79ee6ad080-utilities\") pod \"42c0d703-d3e5-4e23-9963-4b79ee6ad080\" (UID: \"42c0d703-d3e5-4e23-9963-4b79ee6ad080\") " Oct 08 14:58:50 crc kubenswrapper[4624]: I1008 14:58:50.571749 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ndh4\" (UniqueName: \"kubernetes.io/projected/42c0d703-d3e5-4e23-9963-4b79ee6ad080-kube-api-access-2ndh4\") pod \"42c0d703-d3e5-4e23-9963-4b79ee6ad080\" (UID: \"42c0d703-d3e5-4e23-9963-4b79ee6ad080\") " Oct 08 14:58:50 crc kubenswrapper[4624]: I1008 14:58:50.571856 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c0d703-d3e5-4e23-9963-4b79ee6ad080-catalog-content\") pod \"42c0d703-d3e5-4e23-9963-4b79ee6ad080\" (UID: \"42c0d703-d3e5-4e23-9963-4b79ee6ad080\") " Oct 08 14:58:50 crc kubenswrapper[4624]: I1008 14:58:50.572591 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42c0d703-d3e5-4e23-9963-4b79ee6ad080-utilities" (OuterVolumeSpecName: "utilities") pod "42c0d703-d3e5-4e23-9963-4b79ee6ad080" (UID: "42c0d703-d3e5-4e23-9963-4b79ee6ad080"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:58:50 crc kubenswrapper[4624]: I1008 14:58:50.574253 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c0d703-d3e5-4e23-9963-4b79ee6ad080-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 14:58:50 crc kubenswrapper[4624]: I1008 14:58:50.578772 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c0d703-d3e5-4e23-9963-4b79ee6ad080-kube-api-access-2ndh4" (OuterVolumeSpecName: "kube-api-access-2ndh4") pod "42c0d703-d3e5-4e23-9963-4b79ee6ad080" (UID: "42c0d703-d3e5-4e23-9963-4b79ee6ad080"). InnerVolumeSpecName "kube-api-access-2ndh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:58:50 crc kubenswrapper[4624]: I1008 14:58:50.669204 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42c0d703-d3e5-4e23-9963-4b79ee6ad080-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42c0d703-d3e5-4e23-9963-4b79ee6ad080" (UID: "42c0d703-d3e5-4e23-9963-4b79ee6ad080"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 14:58:50 crc kubenswrapper[4624]: I1008 14:58:50.676461 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c0d703-d3e5-4e23-9963-4b79ee6ad080-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 14:58:50 crc kubenswrapper[4624]: I1008 14:58:50.676548 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ndh4\" (UniqueName: \"kubernetes.io/projected/42c0d703-d3e5-4e23-9963-4b79ee6ad080-kube-api-access-2ndh4\") on node \"crc\" DevicePath \"\"" Oct 08 14:58:50 crc kubenswrapper[4624]: I1008 14:58:50.891337 4624 generic.go:334] "Generic (PLEG): container finished" podID="42c0d703-d3e5-4e23-9963-4b79ee6ad080" containerID="66ee5eceb9214230d2c6f1bf90bd4e81125e04d366001782e26f709dca78fbb8" exitCode=0 Oct 08 14:58:50 crc kubenswrapper[4624]: I1008 14:58:50.891419 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4lktp" event={"ID":"42c0d703-d3e5-4e23-9963-4b79ee6ad080","Type":"ContainerDied","Data":"66ee5eceb9214230d2c6f1bf90bd4e81125e04d366001782e26f709dca78fbb8"} Oct 08 14:58:50 crc kubenswrapper[4624]: I1008 14:58:50.891456 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4lktp" event={"ID":"42c0d703-d3e5-4e23-9963-4b79ee6ad080","Type":"ContainerDied","Data":"177b350d6da9d2ddedfa11cd06a9f8e87ff9024127860c947445595d2e7d50a7"} Oct 08 14:58:50 crc kubenswrapper[4624]: I1008 14:58:50.891458 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4lktp" Oct 08 14:58:50 crc kubenswrapper[4624]: I1008 14:58:50.891477 4624 scope.go:117] "RemoveContainer" containerID="66ee5eceb9214230d2c6f1bf90bd4e81125e04d366001782e26f709dca78fbb8" Oct 08 14:58:50 crc kubenswrapper[4624]: I1008 14:58:50.922909 4624 scope.go:117] "RemoveContainer" containerID="2143e1e7df031aeaf34ed779808576887d464e23f7d19a1d8d9fe6b80805565a" Oct 08 14:58:50 crc kubenswrapper[4624]: I1008 14:58:50.936459 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4lktp"] Oct 08 14:58:50 crc kubenswrapper[4624]: I1008 14:58:50.944908 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4lktp"] Oct 08 14:58:50 crc kubenswrapper[4624]: I1008 14:58:50.965950 4624 scope.go:117] "RemoveContainer" containerID="3f90b5127598f6292bf0d284ee94bf86b83d166c04608085320b4865eb4d5ebe" Oct 08 14:58:51 crc kubenswrapper[4624]: I1008 14:58:51.006544 4624 scope.go:117] "RemoveContainer" containerID="66ee5eceb9214230d2c6f1bf90bd4e81125e04d366001782e26f709dca78fbb8" Oct 08 14:58:51 crc kubenswrapper[4624]: E1008 14:58:51.010147 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66ee5eceb9214230d2c6f1bf90bd4e81125e04d366001782e26f709dca78fbb8\": container with ID starting with 66ee5eceb9214230d2c6f1bf90bd4e81125e04d366001782e26f709dca78fbb8 not found: ID does not exist" containerID="66ee5eceb9214230d2c6f1bf90bd4e81125e04d366001782e26f709dca78fbb8" Oct 08 14:58:51 crc kubenswrapper[4624]: I1008 14:58:51.010402 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66ee5eceb9214230d2c6f1bf90bd4e81125e04d366001782e26f709dca78fbb8"} err="failed to get container status \"66ee5eceb9214230d2c6f1bf90bd4e81125e04d366001782e26f709dca78fbb8\": 
rpc error: code = NotFound desc = could not find container \"66ee5eceb9214230d2c6f1bf90bd4e81125e04d366001782e26f709dca78fbb8\": container with ID starting with 66ee5eceb9214230d2c6f1bf90bd4e81125e04d366001782e26f709dca78fbb8 not found: ID does not exist" Oct 08 14:58:51 crc kubenswrapper[4624]: I1008 14:58:51.010554 4624 scope.go:117] "RemoveContainer" containerID="2143e1e7df031aeaf34ed779808576887d464e23f7d19a1d8d9fe6b80805565a" Oct 08 14:58:51 crc kubenswrapper[4624]: E1008 14:58:51.011196 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2143e1e7df031aeaf34ed779808576887d464e23f7d19a1d8d9fe6b80805565a\": container with ID starting with 2143e1e7df031aeaf34ed779808576887d464e23f7d19a1d8d9fe6b80805565a not found: ID does not exist" containerID="2143e1e7df031aeaf34ed779808576887d464e23f7d19a1d8d9fe6b80805565a" Oct 08 14:58:51 crc kubenswrapper[4624]: I1008 14:58:51.011245 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2143e1e7df031aeaf34ed779808576887d464e23f7d19a1d8d9fe6b80805565a"} err="failed to get container status \"2143e1e7df031aeaf34ed779808576887d464e23f7d19a1d8d9fe6b80805565a\": rpc error: code = NotFound desc = could not find container \"2143e1e7df031aeaf34ed779808576887d464e23f7d19a1d8d9fe6b80805565a\": container with ID starting with 2143e1e7df031aeaf34ed779808576887d464e23f7d19a1d8d9fe6b80805565a not found: ID does not exist" Oct 08 14:58:51 crc kubenswrapper[4624]: I1008 14:58:51.011292 4624 scope.go:117] "RemoveContainer" containerID="3f90b5127598f6292bf0d284ee94bf86b83d166c04608085320b4865eb4d5ebe" Oct 08 14:58:51 crc kubenswrapper[4624]: E1008 14:58:51.011749 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f90b5127598f6292bf0d284ee94bf86b83d166c04608085320b4865eb4d5ebe\": container with ID starting with 3f90b5127598f6292bf0d284ee94bf86b83d166c04608085320b4865eb4d5ebe not found: ID does not exist" containerID="3f90b5127598f6292bf0d284ee94bf86b83d166c04608085320b4865eb4d5ebe" Oct 08 14:58:51 crc kubenswrapper[4624]: I1008 14:58:51.011796 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f90b5127598f6292bf0d284ee94bf86b83d166c04608085320b4865eb4d5ebe"} err="failed to get container status \"3f90b5127598f6292bf0d284ee94bf86b83d166c04608085320b4865eb4d5ebe\": rpc error: code = NotFound desc = could not find container \"3f90b5127598f6292bf0d284ee94bf86b83d166c04608085320b4865eb4d5ebe\": container with ID starting with 3f90b5127598f6292bf0d284ee94bf86b83d166c04608085320b4865eb4d5ebe not found: ID does not exist" Oct 08 14:58:51 crc kubenswrapper[4624]: I1008 14:58:51.478406 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42c0d703-d3e5-4e23-9963-4b79ee6ad080" path="/var/lib/kubelet/pods/42c0d703-d3e5-4e23-9963-4b79ee6ad080/volumes" Oct 08 14:59:16 crc kubenswrapper[4624]: I1008 14:59:16.129438 4624 generic.go:334] "Generic (PLEG): container finished" podID="35e4f3c9-b784-4780-bf62-c44be287ffef" containerID="2bdcf9d42a16357c8c5efc1dfe825a70e984f8fb7ba7296dca664eff55353460" exitCode=0 Oct 08 14:59:16 crc kubenswrapper[4624]: I1008 14:59:16.129538 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" 
event={"ID":"35e4f3c9-b784-4780-bf62-c44be287ffef","Type":"ContainerDied","Data":"2bdcf9d42a16357c8c5efc1dfe825a70e984f8fb7ba7296dca664eff55353460"} Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.625503 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.757293 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-bootstrap-combined-ca-bundle\") pod \"35e4f3c9-b784-4780-bf62-c44be287ffef\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.757343 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"35e4f3c9-b784-4780-bf62-c44be287ffef\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.757438 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-telemetry-combined-ca-bundle\") pod \"35e4f3c9-b784-4780-bf62-c44be287ffef\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.757458 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"35e4f3c9-b784-4780-bf62-c44be287ffef\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.757516 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-neutron-metadata-combined-ca-bundle\") pod \"35e4f3c9-b784-4780-bf62-c44be287ffef\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.757533 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-repo-setup-combined-ca-bundle\") pod \"35e4f3c9-b784-4780-bf62-c44be287ffef\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.757567 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-ovn-combined-ca-bundle\") pod \"35e4f3c9-b784-4780-bf62-c44be287ffef\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.757608 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-ssh-key\") pod \"35e4f3c9-b784-4780-bf62-c44be287ffef\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.757627 4624 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"35e4f3c9-b784-4780-bf62-c44be287ffef\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.757686 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-inventory\") pod \"35e4f3c9-b784-4780-bf62-c44be287ffef\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.757737 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-nova-combined-ca-bundle\") pod \"35e4f3c9-b784-4780-bf62-c44be287ffef\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.757770 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2h6g\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-kube-api-access-k2h6g\") pod \"35e4f3c9-b784-4780-bf62-c44be287ffef\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.757794 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-libvirt-combined-ca-bundle\") pod \"35e4f3c9-b784-4780-bf62-c44be287ffef\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.757822 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-ovn-default-certs-0\") pod \"35e4f3c9-b784-4780-bf62-c44be287ffef\" (UID: \"35e4f3c9-b784-4780-bf62-c44be287ffef\") " Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.769699 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "35e4f3c9-b784-4780-bf62-c44be287ffef" (UID: "35e4f3c9-b784-4780-bf62-c44be287ffef"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.769761 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "35e4f3c9-b784-4780-bf62-c44be287ffef" (UID: "35e4f3c9-b784-4780-bf62-c44be287ffef"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.770136 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "35e4f3c9-b784-4780-bf62-c44be287ffef" (UID: "35e4f3c9-b784-4780-bf62-c44be287ffef"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.770236 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-kube-api-access-k2h6g" (OuterVolumeSpecName: "kube-api-access-k2h6g") pod "35e4f3c9-b784-4780-bf62-c44be287ffef" (UID: "35e4f3c9-b784-4780-bf62-c44be287ffef"). InnerVolumeSpecName "kube-api-access-k2h6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.770608 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "35e4f3c9-b784-4780-bf62-c44be287ffef" (UID: "35e4f3c9-b784-4780-bf62-c44be287ffef"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.770825 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "35e4f3c9-b784-4780-bf62-c44be287ffef" (UID: "35e4f3c9-b784-4780-bf62-c44be287ffef"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.771731 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "35e4f3c9-b784-4780-bf62-c44be287ffef" (UID: "35e4f3c9-b784-4780-bf62-c44be287ffef"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.773369 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "35e4f3c9-b784-4780-bf62-c44be287ffef" (UID: "35e4f3c9-b784-4780-bf62-c44be287ffef"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.776026 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "35e4f3c9-b784-4780-bf62-c44be287ffef" (UID: "35e4f3c9-b784-4780-bf62-c44be287ffef"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.777124 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "35e4f3c9-b784-4780-bf62-c44be287ffef" (UID: "35e4f3c9-b784-4780-bf62-c44be287ffef"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.779341 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "35e4f3c9-b784-4780-bf62-c44be287ffef" (UID: "35e4f3c9-b784-4780-bf62-c44be287ffef"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.780149 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "35e4f3c9-b784-4780-bf62-c44be287ffef" (UID: "35e4f3c9-b784-4780-bf62-c44be287ffef"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.796186 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "35e4f3c9-b784-4780-bf62-c44be287ffef" (UID: "35e4f3c9-b784-4780-bf62-c44be287ffef"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.808151 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-inventory" (OuterVolumeSpecName: "inventory") pod "35e4f3c9-b784-4780-bf62-c44be287ffef" (UID: "35e4f3c9-b784-4780-bf62-c44be287ffef"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.859857 4624 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.859895 4624 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.859909 4624 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.859920 4624 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.859934 4624 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.859947 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.859960 4624 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.859973 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.859985 4624 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.859998 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2h6g\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-kube-api-access-k2h6g\") on node \"crc\" DevicePath \"\"" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.860008 4624 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.860018 4624 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" 
DevicePath \"\"" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.860029 4624 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e4f3c9-b784-4780-bf62-c44be287ffef-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 14:59:17 crc kubenswrapper[4624]: I1008 14:59:17.860041 4624 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/35e4f3c9-b784-4780-bf62-c44be287ffef-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.152043 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" event={"ID":"35e4f3c9-b784-4780-bf62-c44be287ffef","Type":"ContainerDied","Data":"f309e9e46685ec021eb27e69b48d72605e56c9f12ed30fc52f94abf63b2cdc9d"} Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.152084 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f309e9e46685ec021eb27e69b48d72605e56c9f12ed30fc52f94abf63b2cdc9d" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.152130 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.426424 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz"] Oct 08 14:59:18 crc kubenswrapper[4624]: E1008 14:59:18.426869 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c0d703-d3e5-4e23-9963-4b79ee6ad080" containerName="extract-content" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.426889 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c0d703-d3e5-4e23-9963-4b79ee6ad080" containerName="extract-content" Oct 08 14:59:18 crc kubenswrapper[4624]: E1008 14:59:18.426921 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c0d703-d3e5-4e23-9963-4b79ee6ad080" containerName="registry-server" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.426930 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c0d703-d3e5-4e23-9963-4b79ee6ad080" containerName="registry-server" Oct 08 14:59:18 crc kubenswrapper[4624]: E1008 14:59:18.426948 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c0d703-d3e5-4e23-9963-4b79ee6ad080" containerName="extract-utilities" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.426958 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c0d703-d3e5-4e23-9963-4b79ee6ad080" containerName="extract-utilities" Oct 08 14:59:18 crc kubenswrapper[4624]: E1008 14:59:18.426979 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35e4f3c9-b784-4780-bf62-c44be287ffef" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.426985 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="35e4f3c9-b784-4780-bf62-c44be287ffef" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.427156 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c0d703-d3e5-4e23-9963-4b79ee6ad080" containerName="registry-server" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.427172 4624 
memory_manager.go:354] "RemoveStaleState removing state" podUID="35e4f3c9-b784-4780-bf62-c44be287ffef" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.427827 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.435557 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.441877 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v6sxt" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.445438 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz"] Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.447195 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.447515 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.447529 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.571116 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6b31442-c459-4d7e-b828-90ffe6a2eda5-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t84hz\" (UID: \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.571746 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d6b31442-c459-4d7e-b828-90ffe6a2eda5-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t84hz\" (UID: \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.571783 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d6b31442-c459-4d7e-b828-90ffe6a2eda5-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t84hz\" (UID: \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.571867 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbxww\" (UniqueName: \"kubernetes.io/projected/d6b31442-c459-4d7e-b828-90ffe6a2eda5-kube-api-access-xbxww\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t84hz\" (UID: \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.572001 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b31442-c459-4d7e-b828-90ffe6a2eda5-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t84hz\" (UID: 
\"d6b31442-c459-4d7e-b828-90ffe6a2eda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.673867 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6b31442-c459-4d7e-b828-90ffe6a2eda5-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t84hz\" (UID: \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.673933 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d6b31442-c459-4d7e-b828-90ffe6a2eda5-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t84hz\" (UID: \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.673965 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d6b31442-c459-4d7e-b828-90ffe6a2eda5-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t84hz\" (UID: \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.674037 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbxww\" (UniqueName: \"kubernetes.io/projected/d6b31442-c459-4d7e-b828-90ffe6a2eda5-kube-api-access-xbxww\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t84hz\" (UID: \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.674141 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b31442-c459-4d7e-b828-90ffe6a2eda5-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t84hz\" (UID: \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.675114 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d6b31442-c459-4d7e-b828-90ffe6a2eda5-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t84hz\" (UID: \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.682428 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b31442-c459-4d7e-b828-90ffe6a2eda5-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t84hz\" (UID: \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.684808 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6b31442-c459-4d7e-b828-90ffe6a2eda5-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t84hz\" (UID: \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 
14:59:18.690960 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d6b31442-c459-4d7e-b828-90ffe6a2eda5-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t84hz\" (UID: \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.694335 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbxww\" (UniqueName: \"kubernetes.io/projected/d6b31442-c459-4d7e-b828-90ffe6a2eda5-kube-api-access-xbxww\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t84hz\" (UID: \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" Oct 08 14:59:18 crc kubenswrapper[4624]: I1008 14:59:18.744024 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" Oct 08 14:59:19 crc kubenswrapper[4624]: W1008 14:59:19.287893 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6b31442_c459_4d7e_b828_90ffe6a2eda5.slice/crio-25f2b4b09c92a4b2dece1723c9a9b9bd0a5bf2a067d68836a2411359093ab4f2 WatchSource:0}: Error finding container 25f2b4b09c92a4b2dece1723c9a9b9bd0a5bf2a067d68836a2411359093ab4f2: Status 404 returned error can't find the container with id 25f2b4b09c92a4b2dece1723c9a9b9bd0a5bf2a067d68836a2411359093ab4f2 Oct 08 14:59:19 crc kubenswrapper[4624]: I1008 14:59:19.294745 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz"] Oct 08 14:59:20 crc kubenswrapper[4624]: I1008 14:59:20.169299 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" event={"ID":"d6b31442-c459-4d7e-b828-90ffe6a2eda5","Type":"ContainerStarted","Data":"25f2b4b09c92a4b2dece1723c9a9b9bd0a5bf2a067d68836a2411359093ab4f2"} Oct 08 14:59:21 crc kubenswrapper[4624]: I1008 14:59:21.179288 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" event={"ID":"d6b31442-c459-4d7e-b828-90ffe6a2eda5","Type":"ContainerStarted","Data":"1faf83ed33481685f05e8c7b4bedc28e0d679d887471eccc490f4d5118fd5d8e"} Oct 08 14:59:21 crc kubenswrapper[4624]: I1008 14:59:21.199660 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" podStartSLOduration=2.311924562 podStartE2EDuration="3.19962262s" podCreationTimestamp="2025-10-08 14:59:18 +0000 UTC" firstStartedPulling="2025-10-08 14:59:19.290567183 +0000 UTC m=+2184.441502270" lastFinishedPulling="2025-10-08 14:59:20.178265251 +0000 UTC m=+2185.329200328" observedRunningTime="2025-10-08 14:59:21.198656883 +0000 UTC m=+2186.349591980" watchObservedRunningTime="2025-10-08 14:59:21.19962262 +0000 UTC m=+2186.350557697" Oct 08 14:59:30 crc kubenswrapper[4624]: I1008 14:59:30.076357 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 14:59:30 crc kubenswrapper[4624]: I1008 14:59:30.076971 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" 
podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:00:00 crc kubenswrapper[4624]: I1008 15:00:00.076512 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:00:00 crc kubenswrapper[4624]: I1008 15:00:00.077182 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:00:00 crc kubenswrapper[4624]: I1008 15:00:00.161501 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv"] Oct 08 15:00:00 crc kubenswrapper[4624]: I1008 15:00:00.163095 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv" Oct 08 15:00:00 crc kubenswrapper[4624]: I1008 15:00:00.165414 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 15:00:00 crc kubenswrapper[4624]: I1008 15:00:00.166039 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 15:00:00 crc kubenswrapper[4624]: I1008 15:00:00.189488 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv"] Oct 08 15:00:00 crc kubenswrapper[4624]: I1008 15:00:00.323240 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/384759c6-b8ef-441a-92c4-3cbedbd9359e-config-volume\") pod \"collect-profiles-29332260-8x5jv\" (UID: \"384759c6-b8ef-441a-92c4-3cbedbd9359e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv" Oct 08 15:00:00 crc kubenswrapper[4624]: I1008 15:00:00.323416 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlltt\" (UniqueName: \"kubernetes.io/projected/384759c6-b8ef-441a-92c4-3cbedbd9359e-kube-api-access-mlltt\") pod \"collect-profiles-29332260-8x5jv\" (UID: \"384759c6-b8ef-441a-92c4-3cbedbd9359e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv" Oct 08 15:00:00 crc kubenswrapper[4624]: I1008 15:00:00.323575 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/384759c6-b8ef-441a-92c4-3cbedbd9359e-secret-volume\") pod \"collect-profiles-29332260-8x5jv\" (UID: \"384759c6-b8ef-441a-92c4-3cbedbd9359e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv" Oct 08 15:00:00 crc kubenswrapper[4624]: I1008 15:00:00.425778 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/384759c6-b8ef-441a-92c4-3cbedbd9359e-config-volume\") pod 
\"collect-profiles-29332260-8x5jv\" (UID: \"384759c6-b8ef-441a-92c4-3cbedbd9359e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv" Oct 08 15:00:00 crc kubenswrapper[4624]: I1008 15:00:00.425900 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlltt\" (UniqueName: \"kubernetes.io/projected/384759c6-b8ef-441a-92c4-3cbedbd9359e-kube-api-access-mlltt\") pod \"collect-profiles-29332260-8x5jv\" (UID: \"384759c6-b8ef-441a-92c4-3cbedbd9359e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv" Oct 08 15:00:00 crc kubenswrapper[4624]: I1008 15:00:00.425946 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/384759c6-b8ef-441a-92c4-3cbedbd9359e-secret-volume\") pod \"collect-profiles-29332260-8x5jv\" (UID: \"384759c6-b8ef-441a-92c4-3cbedbd9359e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv" Oct 08 15:00:00 crc kubenswrapper[4624]: I1008 15:00:00.426832 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/384759c6-b8ef-441a-92c4-3cbedbd9359e-config-volume\") pod \"collect-profiles-29332260-8x5jv\" (UID: \"384759c6-b8ef-441a-92c4-3cbedbd9359e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv" Oct 08 15:00:00 crc kubenswrapper[4624]: I1008 15:00:00.439519 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/384759c6-b8ef-441a-92c4-3cbedbd9359e-secret-volume\") pod \"collect-profiles-29332260-8x5jv\" (UID: \"384759c6-b8ef-441a-92c4-3cbedbd9359e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv" Oct 08 15:00:00 crc kubenswrapper[4624]: I1008 15:00:00.447515 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlltt\" (UniqueName: \"kubernetes.io/projected/384759c6-b8ef-441a-92c4-3cbedbd9359e-kube-api-access-mlltt\") pod \"collect-profiles-29332260-8x5jv\" (UID: \"384759c6-b8ef-441a-92c4-3cbedbd9359e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv" Oct 08 15:00:00 crc kubenswrapper[4624]: I1008 15:00:00.503326 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv" Oct 08 15:00:00 crc kubenswrapper[4624]: I1008 15:00:00.998296 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv"] Oct 08 15:00:01 crc kubenswrapper[4624]: I1008 15:00:01.560884 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv" event={"ID":"384759c6-b8ef-441a-92c4-3cbedbd9359e","Type":"ContainerStarted","Data":"710926f54f87ae9807ed3e3be3d4a3687a70de3185737acbb7da4fec7d85d214"} Oct 08 15:00:01 crc kubenswrapper[4624]: I1008 15:00:01.561248 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv" event={"ID":"384759c6-b8ef-441a-92c4-3cbedbd9359e","Type":"ContainerStarted","Data":"eaf918d1f6401dc8843215c784f65f3fecaab99f48a3197f0aabaff54d8d8326"} Oct 08 15:00:01 crc kubenswrapper[4624]: I1008 15:00:01.581944 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv" podStartSLOduration=1.581909983 podStartE2EDuration="1.581909983s" podCreationTimestamp="2025-10-08 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 15:00:01.581698997 +0000 UTC m=+2226.732634084" watchObservedRunningTime="2025-10-08 15:00:01.581909983 +0000 UTC m=+2226.732845080" Oct 08 15:00:02 crc kubenswrapper[4624]: I1008 15:00:02.570762 4624 generic.go:334] "Generic (PLEG): container finished" podID="384759c6-b8ef-441a-92c4-3cbedbd9359e" containerID="710926f54f87ae9807ed3e3be3d4a3687a70de3185737acbb7da4fec7d85d214" exitCode=0 Oct 08 15:00:02 crc kubenswrapper[4624]: I1008 15:00:02.570840 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv" event={"ID":"384759c6-b8ef-441a-92c4-3cbedbd9359e","Type":"ContainerDied","Data":"710926f54f87ae9807ed3e3be3d4a3687a70de3185737acbb7da4fec7d85d214"} Oct 08 15:00:03 crc kubenswrapper[4624]: I1008 15:00:03.960715 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv" Oct 08 15:00:04 crc kubenswrapper[4624]: I1008 15:00:04.105300 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/384759c6-b8ef-441a-92c4-3cbedbd9359e-config-volume\") pod \"384759c6-b8ef-441a-92c4-3cbedbd9359e\" (UID: \"384759c6-b8ef-441a-92c4-3cbedbd9359e\") " Oct 08 15:00:04 crc kubenswrapper[4624]: I1008 15:00:04.105352 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlltt\" (UniqueName: \"kubernetes.io/projected/384759c6-b8ef-441a-92c4-3cbedbd9359e-kube-api-access-mlltt\") pod \"384759c6-b8ef-441a-92c4-3cbedbd9359e\" (UID: \"384759c6-b8ef-441a-92c4-3cbedbd9359e\") " Oct 08 15:00:04 crc kubenswrapper[4624]: I1008 15:00:04.105572 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/384759c6-b8ef-441a-92c4-3cbedbd9359e-secret-volume\") pod \"384759c6-b8ef-441a-92c4-3cbedbd9359e\" (UID: \"384759c6-b8ef-441a-92c4-3cbedbd9359e\") " Oct 08 15:00:04 crc kubenswrapper[4624]: I1008 15:00:04.107551 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/384759c6-b8ef-441a-92c4-3cbedbd9359e-config-volume" (OuterVolumeSpecName: "config-volume") pod "384759c6-b8ef-441a-92c4-3cbedbd9359e" (UID: "384759c6-b8ef-441a-92c4-3cbedbd9359e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 15:00:04 crc kubenswrapper[4624]: I1008 15:00:04.112858 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/384759c6-b8ef-441a-92c4-3cbedbd9359e-kube-api-access-mlltt" (OuterVolumeSpecName: "kube-api-access-mlltt") pod "384759c6-b8ef-441a-92c4-3cbedbd9359e" (UID: "384759c6-b8ef-441a-92c4-3cbedbd9359e"). InnerVolumeSpecName "kube-api-access-mlltt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:00:04 crc kubenswrapper[4624]: I1008 15:00:04.113558 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/384759c6-b8ef-441a-92c4-3cbedbd9359e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "384759c6-b8ef-441a-92c4-3cbedbd9359e" (UID: "384759c6-b8ef-441a-92c4-3cbedbd9359e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:00:04 crc kubenswrapper[4624]: I1008 15:00:04.208062 4624 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/384759c6-b8ef-441a-92c4-3cbedbd9359e-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 15:00:04 crc kubenswrapper[4624]: I1008 15:00:04.208116 4624 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/384759c6-b8ef-441a-92c4-3cbedbd9359e-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 15:00:04 crc kubenswrapper[4624]: I1008 15:00:04.208132 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlltt\" (UniqueName: \"kubernetes.io/projected/384759c6-b8ef-441a-92c4-3cbedbd9359e-kube-api-access-mlltt\") on node \"crc\" DevicePath \"\"" Oct 08 15:00:04 crc kubenswrapper[4624]: I1008 15:00:04.595004 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv" event={"ID":"384759c6-b8ef-441a-92c4-3cbedbd9359e","Type":"ContainerDied","Data":"eaf918d1f6401dc8843215c784f65f3fecaab99f48a3197f0aabaff54d8d8326"} Oct 08 15:00:04 crc kubenswrapper[4624]: I1008 15:00:04.595051 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv" Oct 08 15:00:04 crc kubenswrapper[4624]: I1008 15:00:04.595087 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaf918d1f6401dc8843215c784f65f3fecaab99f48a3197f0aabaff54d8d8326" Oct 08 15:00:04 crc kubenswrapper[4624]: I1008 15:00:04.655022 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll"] Oct 08 15:00:04 crc kubenswrapper[4624]: I1008 15:00:04.663264 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332215-bxfll"] Oct 08 15:00:05 crc kubenswrapper[4624]: I1008 15:00:05.479866 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df0cea7a-1f29-4af5-b8f1-2e1c8873610c" path="/var/lib/kubelet/pods/df0cea7a-1f29-4af5-b8f1-2e1c8873610c/volumes" Oct 08 15:00:29 crc kubenswrapper[4624]: I1008 15:00:29.821758 4624 generic.go:334] "Generic (PLEG): container finished" podID="d6b31442-c459-4d7e-b828-90ffe6a2eda5" containerID="1faf83ed33481685f05e8c7b4bedc28e0d679d887471eccc490f4d5118fd5d8e" exitCode=0 Oct 08 15:00:29 crc kubenswrapper[4624]: I1008 15:00:29.822467 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" event={"ID":"d6b31442-c459-4d7e-b828-90ffe6a2eda5","Type":"ContainerDied","Data":"1faf83ed33481685f05e8c7b4bedc28e0d679d887471eccc490f4d5118fd5d8e"} Oct 08 15:00:30 crc kubenswrapper[4624]: I1008 15:00:30.076791 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:00:30 crc kubenswrapper[4624]: I1008 15:00:30.077143 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:00:30 crc kubenswrapper[4624]: I1008 15:00:30.077186 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 15:00:30 crc kubenswrapper[4624]: I1008 15:00:30.090743 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 15:00:30 crc kubenswrapper[4624]: I1008 15:00:30.090866 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927" gracePeriod=600 Oct 08 15:00:30 crc kubenswrapper[4624]: E1008 15:00:30.224702 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:00:30 crc kubenswrapper[4624]: I1008 15:00:30.835972 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927" exitCode=0 Oct 08 15:00:30 crc kubenswrapper[4624]: I1008 15:00:30.836040 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927"} Oct 08 15:00:30 crc kubenswrapper[4624]: I1008 15:00:30.836091 4624 scope.go:117] "RemoveContainer" containerID="5ea1ca72b798c395b4e373d87fbab7799599872cd0c0147a7af45b5d45ba60f8" Oct 08 15:00:30 crc kubenswrapper[4624]: I1008 15:00:30.836778 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927" Oct 08 15:00:30 crc kubenswrapper[4624]: E1008 15:00:30.837112 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.264053 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.354620 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbxww\" (UniqueName: \"kubernetes.io/projected/d6b31442-c459-4d7e-b828-90ffe6a2eda5-kube-api-access-xbxww\") pod \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\" (UID: \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\") " Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.354688 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6b31442-c459-4d7e-b828-90ffe6a2eda5-inventory\") pod \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\" (UID: \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\") " Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.354748 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d6b31442-c459-4d7e-b828-90ffe6a2eda5-ovncontroller-config-0\") pod \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\" (UID: \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\") " Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.354778 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d6b31442-c459-4d7e-b828-90ffe6a2eda5-ssh-key\") pod \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\" (UID: \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\") " Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.355594 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b31442-c459-4d7e-b828-90ffe6a2eda5-ovn-combined-ca-bundle\") pod \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\" (UID: \"d6b31442-c459-4d7e-b828-90ffe6a2eda5\") " Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.360135 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6b31442-c459-4d7e-b828-90ffe6a2eda5-kube-api-access-xbxww" (OuterVolumeSpecName: "kube-api-access-xbxww") pod "d6b31442-c459-4d7e-b828-90ffe6a2eda5" (UID: "d6b31442-c459-4d7e-b828-90ffe6a2eda5"). InnerVolumeSpecName "kube-api-access-xbxww". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.363834 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b31442-c459-4d7e-b828-90ffe6a2eda5-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d6b31442-c459-4d7e-b828-90ffe6a2eda5" (UID: "d6b31442-c459-4d7e-b828-90ffe6a2eda5"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.381099 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6b31442-c459-4d7e-b828-90ffe6a2eda5-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "d6b31442-c459-4d7e-b828-90ffe6a2eda5" (UID: "d6b31442-c459-4d7e-b828-90ffe6a2eda5"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.389861 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b31442-c459-4d7e-b828-90ffe6a2eda5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d6b31442-c459-4d7e-b828-90ffe6a2eda5" (UID: "d6b31442-c459-4d7e-b828-90ffe6a2eda5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.393057 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b31442-c459-4d7e-b828-90ffe6a2eda5-inventory" (OuterVolumeSpecName: "inventory") pod "d6b31442-c459-4d7e-b828-90ffe6a2eda5" (UID: "d6b31442-c459-4d7e-b828-90ffe6a2eda5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.458026 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbxww\" (UniqueName: \"kubernetes.io/projected/d6b31442-c459-4d7e-b828-90ffe6a2eda5-kube-api-access-xbxww\") on node \"crc\" DevicePath \"\"" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.458072 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6b31442-c459-4d7e-b828-90ffe6a2eda5-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.458085 4624 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d6b31442-c459-4d7e-b828-90ffe6a2eda5-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.458096 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d6b31442-c459-4d7e-b828-90ffe6a2eda5-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.458106 4624 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b31442-c459-4d7e-b828-90ffe6a2eda5-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.848850 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" event={"ID":"d6b31442-c459-4d7e-b828-90ffe6a2eda5","Type":"ContainerDied","Data":"25f2b4b09c92a4b2dece1723c9a9b9bd0a5bf2a067d68836a2411359093ab4f2"} Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.849193 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25f2b4b09c92a4b2dece1723c9a9b9bd0a5bf2a067d68836a2411359093ab4f2" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.848892 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t84hz" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.986042 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x"] Oct 08 15:00:31 crc kubenswrapper[4624]: E1008 15:00:31.986515 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="384759c6-b8ef-441a-92c4-3cbedbd9359e" containerName="collect-profiles" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.986534 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="384759c6-b8ef-441a-92c4-3cbedbd9359e" containerName="collect-profiles" Oct 08 15:00:31 crc kubenswrapper[4624]: E1008 15:00:31.986579 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b31442-c459-4d7e-b828-90ffe6a2eda5" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.986589 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b31442-c459-4d7e-b828-90ffe6a2eda5" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.990109 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="384759c6-b8ef-441a-92c4-3cbedbd9359e" containerName="collect-profiles" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.990170 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6b31442-c459-4d7e-b828-90ffe6a2eda5" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.991023 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.996850 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.997117 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.997181 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v6sxt" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.997046 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.997053 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 08 15:00:31 crc kubenswrapper[4624]: I1008 15:00:31.997576 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 15:00:32 crc kubenswrapper[4624]: I1008 15:00:32.034869 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x"] Oct 08 15:00:32 crc kubenswrapper[4624]: I1008 15:00:32.068384 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" Oct 08 15:00:32 crc kubenswrapper[4624]: I1008 15:00:32.068513 
4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" Oct 08 15:00:32 crc kubenswrapper[4624]: I1008 15:00:32.068558 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" Oct 08 15:00:32 crc kubenswrapper[4624]: I1008 15:00:32.068659 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" Oct 08 15:00:32 crc kubenswrapper[4624]: I1008 15:00:32.068734 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" Oct 08 15:00:32 crc kubenswrapper[4624]: I1008 15:00:32.068786 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26qt6\" (UniqueName: \"kubernetes.io/projected/95840382-8211-4637-92f6-8316e3e751c6-kube-api-access-26qt6\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" Oct 08 15:00:32 crc kubenswrapper[4624]: I1008 15:00:32.170289 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" Oct 08 15:00:32 crc kubenswrapper[4624]: I1008 15:00:32.170365 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26qt6\" (UniqueName: \"kubernetes.io/projected/95840382-8211-4637-92f6-8316e3e751c6-kube-api-access-26qt6\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" Oct 08 15:00:32 crc kubenswrapper[4624]: I1008 15:00:32.170397 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" Oct 08 15:00:32 crc kubenswrapper[4624]: I1008 15:00:32.170446 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" Oct 08 15:00:32 crc kubenswrapper[4624]: I1008 15:00:32.170485 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" Oct 08 15:00:32 crc kubenswrapper[4624]: I1008 15:00:32.170584 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" Oct 08 15:00:32 crc kubenswrapper[4624]: I1008 15:00:32.175364 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" Oct 08 15:00:32 crc kubenswrapper[4624]: I1008 15:00:32.175489 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" Oct 08 15:00:32 crc kubenswrapper[4624]: I1008 15:00:32.176760 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" Oct 08 15:00:32 crc kubenswrapper[4624]: I1008 15:00:32.178843 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" Oct 08 15:00:32 crc kubenswrapper[4624]: I1008 
Oct 08 15:00:32 crc kubenswrapper[4624]: I1008 15:00:32.191295 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26qt6\" (UniqueName: \"kubernetes.io/projected/95840382-8211-4637-92f6-8316e3e751c6-kube-api-access-26qt6\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x"
Oct 08 15:00:32 crc kubenswrapper[4624]: I1008 15:00:32.318237 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x"
Oct 08 15:00:32 crc kubenswrapper[4624]: I1008 15:00:32.851284 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x"]
Oct 08 15:00:33 crc kubenswrapper[4624]: I1008 15:00:33.868546 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" event={"ID":"95840382-8211-4637-92f6-8316e3e751c6","Type":"ContainerStarted","Data":"54bae758b9e51721e0b8e866cc40b7e95e1fdaf5a1ea38149b955e855d7760b2"}
Oct 08 15:00:34 crc kubenswrapper[4624]: I1008 15:00:34.878149 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" event={"ID":"95840382-8211-4637-92f6-8316e3e751c6","Type":"ContainerStarted","Data":"10c0b3022866f08b24464554ddd24828087c3cc2e5e77dc7556fc80878f4e417"}
Oct 08 15:00:34 crc kubenswrapper[4624]: I1008 15:00:34.902810 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" podStartSLOduration=2.582247175 podStartE2EDuration="3.90278859s" podCreationTimestamp="2025-10-08 15:00:31 +0000 UTC" firstStartedPulling="2025-10-08 15:00:32.869686884 +0000 UTC m=+2258.020621961" lastFinishedPulling="2025-10-08 15:00:34.190228299 +0000 UTC m=+2259.341163376" observedRunningTime="2025-10-08 15:00:34.893255792 +0000 UTC m=+2260.044190859" watchObservedRunningTime="2025-10-08 15:00:34.90278859 +0000 UTC m=+2260.053723667"
Oct 08 15:00:41 crc kubenswrapper[4624]: I1008 15:00:41.896359 4624 scope.go:117] "RemoveContainer" containerID="407252a5d7e1f7ee74cb8d86902251b8c725122f417b8798f79afee34f08f67a"
Oct 08 15:00:43 crc kubenswrapper[4624]: I1008 15:00:43.478026 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927"
Oct 08 15:00:43 crc kubenswrapper[4624]: E1008 15:00:43.480096 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:00:55 crc kubenswrapper[4624]: I1008 15:00:55.473218 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927"
Oct 08 15:00:55 crc kubenswrapper[4624]: E1008 15:00:55.473985 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:01:00 crc kubenswrapper[4624]: I1008 15:01:00.144043 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29332261-glg7h"]
Oct 08 15:01:00 crc kubenswrapper[4624]: I1008 15:01:00.146210 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29332261-glg7h"
Oct 08 15:01:00 crc kubenswrapper[4624]: I1008 15:01:00.166802 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29332261-glg7h"]
Oct 08 15:01:00 crc kubenswrapper[4624]: I1008 15:01:00.252755 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f247107e-e524-46ed-891c-4ceae5377acd-config-data\") pod \"keystone-cron-29332261-glg7h\" (UID: \"f247107e-e524-46ed-891c-4ceae5377acd\") " pod="openstack/keystone-cron-29332261-glg7h"
Oct 08 15:01:00 crc kubenswrapper[4624]: I1008 15:01:00.252807 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdbt5\" (UniqueName: \"kubernetes.io/projected/f247107e-e524-46ed-891c-4ceae5377acd-kube-api-access-rdbt5\") pod \"keystone-cron-29332261-glg7h\" (UID: \"f247107e-e524-46ed-891c-4ceae5377acd\") " pod="openstack/keystone-cron-29332261-glg7h"
Oct 08 15:01:00 crc kubenswrapper[4624]: I1008 15:01:00.252832 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f247107e-e524-46ed-891c-4ceae5377acd-combined-ca-bundle\") pod \"keystone-cron-29332261-glg7h\" (UID: \"f247107e-e524-46ed-891c-4ceae5377acd\") " pod="openstack/keystone-cron-29332261-glg7h"
Oct 08 15:01:00 crc kubenswrapper[4624]: I1008 15:01:00.252874 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f247107e-e524-46ed-891c-4ceae5377acd-fernet-keys\") pod \"keystone-cron-29332261-glg7h\" (UID: \"f247107e-e524-46ed-891c-4ceae5377acd\") " pod="openstack/keystone-cron-29332261-glg7h"
Oct 08 15:01:00 crc kubenswrapper[4624]: I1008 15:01:00.355138 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f247107e-e524-46ed-891c-4ceae5377acd-config-data\") pod \"keystone-cron-29332261-glg7h\" (UID: \"f247107e-e524-46ed-891c-4ceae5377acd\") " pod="openstack/keystone-cron-29332261-glg7h"
Oct 08 15:01:00 crc kubenswrapper[4624]: I1008 15:01:00.355545 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdbt5\" (UniqueName: \"kubernetes.io/projected/f247107e-e524-46ed-891c-4ceae5377acd-kube-api-access-rdbt5\") pod \"keystone-cron-29332261-glg7h\" (UID: \"f247107e-e524-46ed-891c-4ceae5377acd\") " pod="openstack/keystone-cron-29332261-glg7h"
Oct 08 15:01:00 crc kubenswrapper[4624]: I1008 15:01:00.355586 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f247107e-e524-46ed-891c-4ceae5377acd-combined-ca-bundle\") pod \"keystone-cron-29332261-glg7h\" (UID: \"f247107e-e524-46ed-891c-4ceae5377acd\") " pod="openstack/keystone-cron-29332261-glg7h"
Oct 08 15:01:00 crc kubenswrapper[4624]: I1008 15:01:00.355839 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f247107e-e524-46ed-891c-4ceae5377acd-fernet-keys\") pod \"keystone-cron-29332261-glg7h\" (UID: \"f247107e-e524-46ed-891c-4ceae5377acd\") " pod="openstack/keystone-cron-29332261-glg7h"
Oct 08 15:01:00 crc kubenswrapper[4624]: I1008 15:01:00.361937 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f247107e-e524-46ed-891c-4ceae5377acd-combined-ca-bundle\") pod \"keystone-cron-29332261-glg7h\" (UID: \"f247107e-e524-46ed-891c-4ceae5377acd\") " pod="openstack/keystone-cron-29332261-glg7h"
Oct 08 15:01:00 crc kubenswrapper[4624]: I1008 15:01:00.362835 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f247107e-e524-46ed-891c-4ceae5377acd-config-data\") pod \"keystone-cron-29332261-glg7h\" (UID: \"f247107e-e524-46ed-891c-4ceae5377acd\") " pod="openstack/keystone-cron-29332261-glg7h"
Oct 08 15:01:00 crc kubenswrapper[4624]: I1008 15:01:00.363521 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f247107e-e524-46ed-891c-4ceae5377acd-fernet-keys\") pod \"keystone-cron-29332261-glg7h\" (UID: \"f247107e-e524-46ed-891c-4ceae5377acd\") " pod="openstack/keystone-cron-29332261-glg7h"
Oct 08 15:01:00 crc kubenswrapper[4624]: I1008 15:01:00.377356 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdbt5\" (UniqueName: \"kubernetes.io/projected/f247107e-e524-46ed-891c-4ceae5377acd-kube-api-access-rdbt5\") pod \"keystone-cron-29332261-glg7h\" (UID: \"f247107e-e524-46ed-891c-4ceae5377acd\") " pod="openstack/keystone-cron-29332261-glg7h"
Oct 08 15:01:00 crc kubenswrapper[4624]: I1008 15:01:00.511320 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29332261-glg7h"
Oct 08 15:01:01 crc kubenswrapper[4624]: I1008 15:01:01.016547 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29332261-glg7h"]
Oct 08 15:01:01 crc kubenswrapper[4624]: I1008 15:01:01.120780 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29332261-glg7h" event={"ID":"f247107e-e524-46ed-891c-4ceae5377acd","Type":"ContainerStarted","Data":"d99f55e5ea65b6d53a76669e4248e592954b3005c1925d1ff016280e9c027483"}
Oct 08 15:01:02 crc kubenswrapper[4624]: I1008 15:01:02.130911 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29332261-glg7h" event={"ID":"f247107e-e524-46ed-891c-4ceae5377acd","Type":"ContainerStarted","Data":"c1da56ab93f5c12ff3ad0d9b03a403f83efdfdf389e782d6e96a56c1de06eaa4"}
Oct 08 15:01:02 crc kubenswrapper[4624]: I1008 15:01:02.152505 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29332261-glg7h" podStartSLOduration=2.152488523 podStartE2EDuration="2.152488523s" podCreationTimestamp="2025-10-08 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 15:01:02.147827761 +0000 UTC m=+2287.298762838" watchObservedRunningTime="2025-10-08 15:01:02.152488523 +0000 UTC m=+2287.303423600"
Oct 08 15:01:06 crc kubenswrapper[4624]: I1008 15:01:06.168416 4624 generic.go:334] "Generic (PLEG): container finished" podID="f247107e-e524-46ed-891c-4ceae5377acd" containerID="c1da56ab93f5c12ff3ad0d9b03a403f83efdfdf389e782d6e96a56c1de06eaa4" exitCode=0
Oct 08 15:01:06 crc kubenswrapper[4624]: I1008 15:01:06.168466 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29332261-glg7h" event={"ID":"f247107e-e524-46ed-891c-4ceae5377acd","Type":"ContainerDied","Data":"c1da56ab93f5c12ff3ad0d9b03a403f83efdfdf389e782d6e96a56c1de06eaa4"}
Oct 08 15:01:07 crc kubenswrapper[4624]: I1008 15:01:07.518780 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29332261-glg7h"
Oct 08 15:01:07 crc kubenswrapper[4624]: I1008 15:01:07.596584 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f247107e-e524-46ed-891c-4ceae5377acd-config-data\") pod \"f247107e-e524-46ed-891c-4ceae5377acd\" (UID: \"f247107e-e524-46ed-891c-4ceae5377acd\") "
Oct 08 15:01:07 crc kubenswrapper[4624]: I1008 15:01:07.596745 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdbt5\" (UniqueName: \"kubernetes.io/projected/f247107e-e524-46ed-891c-4ceae5377acd-kube-api-access-rdbt5\") pod \"f247107e-e524-46ed-891c-4ceae5377acd\" (UID: \"f247107e-e524-46ed-891c-4ceae5377acd\") "
Oct 08 15:01:07 crc kubenswrapper[4624]: I1008 15:01:07.596900 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f247107e-e524-46ed-891c-4ceae5377acd-combined-ca-bundle\") pod \"f247107e-e524-46ed-891c-4ceae5377acd\" (UID: \"f247107e-e524-46ed-891c-4ceae5377acd\") "
Oct 08 15:01:07 crc kubenswrapper[4624]: I1008 15:01:07.596963 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f247107e-e524-46ed-891c-4ceae5377acd-fernet-keys\") pod \"f247107e-e524-46ed-891c-4ceae5377acd\" (UID: \"f247107e-e524-46ed-891c-4ceae5377acd\") "
Oct 08 15:01:07 crc kubenswrapper[4624]: I1008 15:01:07.603184 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f247107e-e524-46ed-891c-4ceae5377acd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f247107e-e524-46ed-891c-4ceae5377acd" (UID: "f247107e-e524-46ed-891c-4ceae5377acd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 15:01:07 crc kubenswrapper[4624]: I1008 15:01:07.604909 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f247107e-e524-46ed-891c-4ceae5377acd-kube-api-access-rdbt5" (OuterVolumeSpecName: "kube-api-access-rdbt5") pod "f247107e-e524-46ed-891c-4ceae5377acd" (UID: "f247107e-e524-46ed-891c-4ceae5377acd"). InnerVolumeSpecName "kube-api-access-rdbt5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 15:01:07 crc kubenswrapper[4624]: I1008 15:01:07.638012 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f247107e-e524-46ed-891c-4ceae5377acd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f247107e-e524-46ed-891c-4ceae5377acd" (UID: "f247107e-e524-46ed-891c-4ceae5377acd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 15:01:07 crc kubenswrapper[4624]: I1008 15:01:07.670246 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f247107e-e524-46ed-891c-4ceae5377acd-config-data" (OuterVolumeSpecName: "config-data") pod "f247107e-e524-46ed-891c-4ceae5377acd" (UID: "f247107e-e524-46ed-891c-4ceae5377acd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 15:01:07 crc kubenswrapper[4624]: I1008 15:01:07.699809 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f247107e-e524-46ed-891c-4ceae5377acd-config-data\") on node \"crc\" DevicePath \"\""
Oct 08 15:01:07 crc kubenswrapper[4624]: I1008 15:01:07.700007 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdbt5\" (UniqueName: \"kubernetes.io/projected/f247107e-e524-46ed-891c-4ceae5377acd-kube-api-access-rdbt5\") on node \"crc\" DevicePath \"\""
Oct 08 15:01:07 crc kubenswrapper[4624]: I1008 15:01:07.700026 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f247107e-e524-46ed-891c-4ceae5377acd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 15:01:07 crc kubenswrapper[4624]: I1008 15:01:07.700036 4624 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f247107e-e524-46ed-891c-4ceae5377acd-fernet-keys\") on node \"crc\" DevicePath \"\""
Oct 08 15:01:08 crc kubenswrapper[4624]: I1008 15:01:08.188749 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29332261-glg7h" event={"ID":"f247107e-e524-46ed-891c-4ceae5377acd","Type":"ContainerDied","Data":"d99f55e5ea65b6d53a76669e4248e592954b3005c1925d1ff016280e9c027483"}
Oct 08 15:01:08 crc kubenswrapper[4624]: I1008 15:01:08.188805 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d99f55e5ea65b6d53a76669e4248e592954b3005c1925d1ff016280e9c027483"
Oct 08 15:01:08 crc kubenswrapper[4624]: I1008 15:01:08.189176 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29332261-glg7h"
Oct 08 15:01:09 crc kubenswrapper[4624]: I1008 15:01:09.466411 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927"
Oct 08 15:01:09 crc kubenswrapper[4624]: E1008 15:01:09.466928 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:01:24 crc kubenswrapper[4624]: I1008 15:01:24.467694 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927"
Oct 08 15:01:24 crc kubenswrapper[4624]: E1008 15:01:24.468589 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:01:25 crc kubenswrapper[4624]: I1008 15:01:25.323451 4624 generic.go:334] "Generic (PLEG): container finished" podID="95840382-8211-4637-92f6-8316e3e751c6" containerID="10c0b3022866f08b24464554ddd24828087c3cc2e5e77dc7556fc80878f4e417" exitCode=0
Oct 08 15:01:25 crc kubenswrapper[4624]: I1008 15:01:25.323489 4624 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" event={"ID":"95840382-8211-4637-92f6-8316e3e751c6","Type":"ContainerDied","Data":"10c0b3022866f08b24464554ddd24828087c3cc2e5e77dc7556fc80878f4e417"} Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.020694 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.087186 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"95840382-8211-4637-92f6-8316e3e751c6\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.087241 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-inventory\") pod \"95840382-8211-4637-92f6-8316e3e751c6\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.087313 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-neutron-metadata-combined-ca-bundle\") pod \"95840382-8211-4637-92f6-8316e3e751c6\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.087488 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-ssh-key\") pod \"95840382-8211-4637-92f6-8316e3e751c6\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.087525 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-nova-metadata-neutron-config-0\") pod \"95840382-8211-4637-92f6-8316e3e751c6\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.087589 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26qt6\" (UniqueName: \"kubernetes.io/projected/95840382-8211-4637-92f6-8316e3e751c6-kube-api-access-26qt6\") pod \"95840382-8211-4637-92f6-8316e3e751c6\" (UID: \"95840382-8211-4637-92f6-8316e3e751c6\") " Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.110037 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95840382-8211-4637-92f6-8316e3e751c6-kube-api-access-26qt6" (OuterVolumeSpecName: "kube-api-access-26qt6") pod "95840382-8211-4637-92f6-8316e3e751c6" (UID: "95840382-8211-4637-92f6-8316e3e751c6"). InnerVolumeSpecName "kube-api-access-26qt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.110336 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "95840382-8211-4637-92f6-8316e3e751c6" (UID: "95840382-8211-4637-92f6-8316e3e751c6"). 
InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.131944 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "95840382-8211-4637-92f6-8316e3e751c6" (UID: "95840382-8211-4637-92f6-8316e3e751c6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.144104 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-inventory" (OuterVolumeSpecName: "inventory") pod "95840382-8211-4637-92f6-8316e3e751c6" (UID: "95840382-8211-4637-92f6-8316e3e751c6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.155548 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "95840382-8211-4637-92f6-8316e3e751c6" (UID: "95840382-8211-4637-92f6-8316e3e751c6"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.162004 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "95840382-8211-4637-92f6-8316e3e751c6" (UID: "95840382-8211-4637-92f6-8316e3e751c6"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.190304 4624 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.190349 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.190365 4624 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.190377 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.190393 4624 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/95840382-8211-4637-92f6-8316e3e751c6-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.190405 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26qt6\" (UniqueName: \"kubernetes.io/projected/95840382-8211-4637-92f6-8316e3e751c6-kube-api-access-26qt6\") on node \"crc\" DevicePath \"\"" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.343856 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" event={"ID":"95840382-8211-4637-92f6-8316e3e751c6","Type":"ContainerDied","Data":"54bae758b9e51721e0b8e866cc40b7e95e1fdaf5a1ea38149b955e855d7760b2"} Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.343913 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54bae758b9e51721e0b8e866cc40b7e95e1fdaf5a1ea38149b955e855d7760b2" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.343997 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.437221 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v"] Oct 08 15:01:27 crc kubenswrapper[4624]: E1008 15:01:27.438031 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f247107e-e524-46ed-891c-4ceae5377acd" containerName="keystone-cron" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.438130 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f247107e-e524-46ed-891c-4ceae5377acd" containerName="keystone-cron" Oct 08 15:01:27 crc kubenswrapper[4624]: E1008 15:01:27.438215 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95840382-8211-4637-92f6-8316e3e751c6" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.438315 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="95840382-8211-4637-92f6-8316e3e751c6" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.438628 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f247107e-e524-46ed-891c-4ceae5377acd" containerName="keystone-cron" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.438751 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="95840382-8211-4637-92f6-8316e3e751c6" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.439496 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.441961 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.442558 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.443073 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.443765 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.446960 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v6sxt" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.459661 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v"] Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.496590 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47kjg\" (UniqueName: \"kubernetes.io/projected/43842ce6-3b52-41bc-ab12-56e722de00d1-kube-api-access-47kjg\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v\" (UID: \"43842ce6-3b52-41bc-ab12-56e722de00d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.497078 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: 
\"kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v\" (UID: \"43842ce6-3b52-41bc-ab12-56e722de00d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.497119 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v\" (UID: \"43842ce6-3b52-41bc-ab12-56e722de00d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.497174 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v\" (UID: \"43842ce6-3b52-41bc-ab12-56e722de00d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.497316 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v\" (UID: \"43842ce6-3b52-41bc-ab12-56e722de00d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.598706 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v\" (UID: \"43842ce6-3b52-41bc-ab12-56e722de00d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.598787 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v\" (UID: \"43842ce6-3b52-41bc-ab12-56e722de00d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.599588 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v\" (UID: \"43842ce6-3b52-41bc-ab12-56e722de00d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.599823 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v\" (UID: \"43842ce6-3b52-41bc-ab12-56e722de00d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.600079 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47kjg\" (UniqueName: 
\"kubernetes.io/projected/43842ce6-3b52-41bc-ab12-56e722de00d1-kube-api-access-47kjg\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v\" (UID: \"43842ce6-3b52-41bc-ab12-56e722de00d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.603267 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v\" (UID: \"43842ce6-3b52-41bc-ab12-56e722de00d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.603269 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v\" (UID: \"43842ce6-3b52-41bc-ab12-56e722de00d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.603979 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v\" (UID: \"43842ce6-3b52-41bc-ab12-56e722de00d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.604220 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v\" (UID: \"43842ce6-3b52-41bc-ab12-56e722de00d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.620804 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47kjg\" (UniqueName: \"kubernetes.io/projected/43842ce6-3b52-41bc-ab12-56e722de00d1-kube-api-access-47kjg\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v\" (UID: \"43842ce6-3b52-41bc-ab12-56e722de00d1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" Oct 08 15:01:27 crc kubenswrapper[4624]: I1008 15:01:27.759318 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" Oct 08 15:01:28 crc kubenswrapper[4624]: I1008 15:01:28.365887 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v"] Oct 08 15:01:29 crc kubenswrapper[4624]: I1008 15:01:29.388617 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" event={"ID":"43842ce6-3b52-41bc-ab12-56e722de00d1","Type":"ContainerStarted","Data":"42f1ae9c9bf35c16943e9b15b641f3fa819e7df2f668e2ae037a1b9557547157"} Oct 08 15:01:30 crc kubenswrapper[4624]: I1008 15:01:30.407991 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" event={"ID":"43842ce6-3b52-41bc-ab12-56e722de00d1","Type":"ContainerStarted","Data":"be40486eeb28d12deef6b2be6509f8174d85123853815da232e98a69ca126c37"} Oct 08 15:01:30 crc kubenswrapper[4624]: I1008 15:01:30.444346 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" podStartSLOduration=2.7710073 podStartE2EDuration="3.444326657s" podCreationTimestamp="2025-10-08 15:01:27 +0000 UTC" firstStartedPulling="2025-10-08 15:01:28.370990864 +0000 UTC m=+2313.521925941" lastFinishedPulling="2025-10-08 15:01:29.044310221 +0000 UTC m=+2314.195245298" observedRunningTime="2025-10-08 15:01:30.434996152 +0000 UTC m=+2315.585931239" watchObservedRunningTime="2025-10-08 15:01:30.444326657 +0000 UTC m=+2315.595261734" Oct 08 15:01:35 crc kubenswrapper[4624]: I1008 15:01:35.474725 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927" Oct 08 15:01:35 crc kubenswrapper[4624]: E1008 15:01:35.476113 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:01:49 crc kubenswrapper[4624]: I1008 15:01:49.465419 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927" Oct 08 15:01:49 crc kubenswrapper[4624]: E1008 15:01:49.466293 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:01:58 crc kubenswrapper[4624]: I1008 15:01:58.686485 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6j28k"] Oct 08 15:01:58 crc kubenswrapper[4624]: I1008 15:01:58.690223 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6j28k" Oct 08 15:01:58 crc kubenswrapper[4624]: I1008 15:01:58.731395 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6j28k"] Oct 08 15:01:58 crc kubenswrapper[4624]: I1008 15:01:58.856611 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6pzr\" (UniqueName: \"kubernetes.io/projected/855fbfc1-9a15-41b5-84a1-ef19bf0a83ec-kube-api-access-n6pzr\") pod \"certified-operators-6j28k\" (UID: \"855fbfc1-9a15-41b5-84a1-ef19bf0a83ec\") " pod="openshift-marketplace/certified-operators-6j28k" Oct 08 15:01:58 crc kubenswrapper[4624]: I1008 15:01:58.857080 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/855fbfc1-9a15-41b5-84a1-ef19bf0a83ec-utilities\") pod \"certified-operators-6j28k\" (UID: \"855fbfc1-9a15-41b5-84a1-ef19bf0a83ec\") " pod="openshift-marketplace/certified-operators-6j28k" Oct 08 15:01:58 crc kubenswrapper[4624]: I1008 15:01:58.857289 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/855fbfc1-9a15-41b5-84a1-ef19bf0a83ec-catalog-content\") pod \"certified-operators-6j28k\" (UID: \"855fbfc1-9a15-41b5-84a1-ef19bf0a83ec\") " pod="openshift-marketplace/certified-operators-6j28k" Oct 08 15:01:58 crc kubenswrapper[4624]: I1008 15:01:58.959705 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6pzr\" (UniqueName: \"kubernetes.io/projected/855fbfc1-9a15-41b5-84a1-ef19bf0a83ec-kube-api-access-n6pzr\") pod \"certified-operators-6j28k\" (UID: \"855fbfc1-9a15-41b5-84a1-ef19bf0a83ec\") " pod="openshift-marketplace/certified-operators-6j28k" Oct 08 15:01:58 crc kubenswrapper[4624]: I1008 15:01:58.959787 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/855fbfc1-9a15-41b5-84a1-ef19bf0a83ec-utilities\") pod \"certified-operators-6j28k\" (UID: \"855fbfc1-9a15-41b5-84a1-ef19bf0a83ec\") " pod="openshift-marketplace/certified-operators-6j28k" Oct 08 15:01:58 crc kubenswrapper[4624]: I1008 15:01:58.959862 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/855fbfc1-9a15-41b5-84a1-ef19bf0a83ec-catalog-content\") pod \"certified-operators-6j28k\" (UID: \"855fbfc1-9a15-41b5-84a1-ef19bf0a83ec\") " pod="openshift-marketplace/certified-operators-6j28k" Oct 08 15:01:58 crc kubenswrapper[4624]: I1008 15:01:58.960490 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/855fbfc1-9a15-41b5-84a1-ef19bf0a83ec-utilities\") pod \"certified-operators-6j28k\" (UID: \"855fbfc1-9a15-41b5-84a1-ef19bf0a83ec\") " pod="openshift-marketplace/certified-operators-6j28k" Oct 08 15:01:58 crc kubenswrapper[4624]: I1008 15:01:58.960499 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/855fbfc1-9a15-41b5-84a1-ef19bf0a83ec-catalog-content\") pod \"certified-operators-6j28k\" (UID: \"855fbfc1-9a15-41b5-84a1-ef19bf0a83ec\") " pod="openshift-marketplace/certified-operators-6j28k" Oct 08 15:01:58 crc kubenswrapper[4624]: I1008 15:01:58.998114 4624 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-n6pzr\" (UniqueName: \"kubernetes.io/projected/855fbfc1-9a15-41b5-84a1-ef19bf0a83ec-kube-api-access-n6pzr\") pod \"certified-operators-6j28k\" (UID: \"855fbfc1-9a15-41b5-84a1-ef19bf0a83ec\") " pod="openshift-marketplace/certified-operators-6j28k" Oct 08 15:01:59 crc kubenswrapper[4624]: I1008 15:01:59.019389 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6j28k" Oct 08 15:01:59 crc kubenswrapper[4624]: I1008 15:01:59.633041 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6j28k"] Oct 08 15:01:59 crc kubenswrapper[4624]: I1008 15:01:59.697595 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6j28k" event={"ID":"855fbfc1-9a15-41b5-84a1-ef19bf0a83ec","Type":"ContainerStarted","Data":"c10e446d174ff7f2fefceab474dae427bb0ef24c681742cb63474996edc9c576"} Oct 08 15:02:00 crc kubenswrapper[4624]: I1008 15:02:00.719808 4624 generic.go:334] "Generic (PLEG): container finished" podID="855fbfc1-9a15-41b5-84a1-ef19bf0a83ec" containerID="f129c52f10b4c77fa2a0d3fd38fe33cf12f28b8d716614a30e10fda5a1c86c03" exitCode=0 Oct 08 15:02:00 crc kubenswrapper[4624]: I1008 15:02:00.719913 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6j28k" event={"ID":"855fbfc1-9a15-41b5-84a1-ef19bf0a83ec","Type":"ContainerDied","Data":"f129c52f10b4c77fa2a0d3fd38fe33cf12f28b8d716614a30e10fda5a1c86c03"} Oct 08 15:02:01 crc kubenswrapper[4624]: I1008 15:02:01.730544 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6j28k" event={"ID":"855fbfc1-9a15-41b5-84a1-ef19bf0a83ec","Type":"ContainerStarted","Data":"030b9907faa3fe425860897d6400b2bee2e687e1e56de3ecce39734d7b3cae15"} Oct 08 15:02:03 crc kubenswrapper[4624]: I1008 15:02:03.753497 4624 generic.go:334] "Generic (PLEG): container finished" podID="855fbfc1-9a15-41b5-84a1-ef19bf0a83ec" containerID="030b9907faa3fe425860897d6400b2bee2e687e1e56de3ecce39734d7b3cae15" exitCode=0 Oct 08 15:02:03 crc kubenswrapper[4624]: I1008 15:02:03.753788 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6j28k" event={"ID":"855fbfc1-9a15-41b5-84a1-ef19bf0a83ec","Type":"ContainerDied","Data":"030b9907faa3fe425860897d6400b2bee2e687e1e56de3ecce39734d7b3cae15"} Oct 08 15:02:03 crc kubenswrapper[4624]: I1008 15:02:03.758184 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 15:02:04 crc kubenswrapper[4624]: I1008 15:02:04.466798 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927" Oct 08 15:02:04 crc kubenswrapper[4624]: E1008 15:02:04.467946 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:02:04 crc kubenswrapper[4624]: I1008 15:02:04.764790 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6j28k" 
event={"ID":"855fbfc1-9a15-41b5-84a1-ef19bf0a83ec","Type":"ContainerStarted","Data":"01e092c91f712ada8774fa79170fe2110be441cc13067e14cea0998dc7867269"} Oct 08 15:02:04 crc kubenswrapper[4624]: I1008 15:02:04.789805 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6j28k" podStartSLOduration=3.125332122 podStartE2EDuration="6.789782752s" podCreationTimestamp="2025-10-08 15:01:58 +0000 UTC" firstStartedPulling="2025-10-08 15:02:00.722251578 +0000 UTC m=+2345.873186655" lastFinishedPulling="2025-10-08 15:02:04.386702208 +0000 UTC m=+2349.537637285" observedRunningTime="2025-10-08 15:02:04.781306579 +0000 UTC m=+2349.932241656" watchObservedRunningTime="2025-10-08 15:02:04.789782752 +0000 UTC m=+2349.940717829" Oct 08 15:02:09 crc kubenswrapper[4624]: I1008 15:02:09.020406 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6j28k" Oct 08 15:02:09 crc kubenswrapper[4624]: I1008 15:02:09.021077 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6j28k" Oct 08 15:02:09 crc kubenswrapper[4624]: I1008 15:02:09.077854 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6j28k" Oct 08 15:02:09 crc kubenswrapper[4624]: I1008 15:02:09.868163 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6j28k" Oct 08 15:02:09 crc kubenswrapper[4624]: I1008 15:02:09.920724 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6j28k"] Oct 08 15:02:11 crc kubenswrapper[4624]: I1008 15:02:11.843258 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6j28k" podUID="855fbfc1-9a15-41b5-84a1-ef19bf0a83ec" containerName="registry-server" containerID="cri-o://01e092c91f712ada8774fa79170fe2110be441cc13067e14cea0998dc7867269" gracePeriod=2 Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.316513 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6j28k" Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.417325 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/855fbfc1-9a15-41b5-84a1-ef19bf0a83ec-utilities\") pod \"855fbfc1-9a15-41b5-84a1-ef19bf0a83ec\" (UID: \"855fbfc1-9a15-41b5-84a1-ef19bf0a83ec\") " Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.417395 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/855fbfc1-9a15-41b5-84a1-ef19bf0a83ec-catalog-content\") pod \"855fbfc1-9a15-41b5-84a1-ef19bf0a83ec\" (UID: \"855fbfc1-9a15-41b5-84a1-ef19bf0a83ec\") " Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.417744 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6pzr\" (UniqueName: \"kubernetes.io/projected/855fbfc1-9a15-41b5-84a1-ef19bf0a83ec-kube-api-access-n6pzr\") pod \"855fbfc1-9a15-41b5-84a1-ef19bf0a83ec\" (UID: \"855fbfc1-9a15-41b5-84a1-ef19bf0a83ec\") " Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.420587 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/855fbfc1-9a15-41b5-84a1-ef19bf0a83ec-utilities" (OuterVolumeSpecName: "utilities") pod "855fbfc1-9a15-41b5-84a1-ef19bf0a83ec" (UID: "855fbfc1-9a15-41b5-84a1-ef19bf0a83ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.423241 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/855fbfc1-9a15-41b5-84a1-ef19bf0a83ec-kube-api-access-n6pzr" (OuterVolumeSpecName: "kube-api-access-n6pzr") pod "855fbfc1-9a15-41b5-84a1-ef19bf0a83ec" (UID: "855fbfc1-9a15-41b5-84a1-ef19bf0a83ec"). InnerVolumeSpecName "kube-api-access-n6pzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.467180 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/855fbfc1-9a15-41b5-84a1-ef19bf0a83ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "855fbfc1-9a15-41b5-84a1-ef19bf0a83ec" (UID: "855fbfc1-9a15-41b5-84a1-ef19bf0a83ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.520092 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/855fbfc1-9a15-41b5-84a1-ef19bf0a83ec-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.520130 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6pzr\" (UniqueName: \"kubernetes.io/projected/855fbfc1-9a15-41b5-84a1-ef19bf0a83ec-kube-api-access-n6pzr\") on node \"crc\" DevicePath \"\"" Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.520146 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/855fbfc1-9a15-41b5-84a1-ef19bf0a83ec-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.855762 4624 generic.go:334] "Generic (PLEG): container finished" podID="855fbfc1-9a15-41b5-84a1-ef19bf0a83ec" containerID="01e092c91f712ada8774fa79170fe2110be441cc13067e14cea0998dc7867269" exitCode=0 Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.855813 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6j28k" event={"ID":"855fbfc1-9a15-41b5-84a1-ef19bf0a83ec","Type":"ContainerDied","Data":"01e092c91f712ada8774fa79170fe2110be441cc13067e14cea0998dc7867269"} Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.855845 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6j28k" event={"ID":"855fbfc1-9a15-41b5-84a1-ef19bf0a83ec","Type":"ContainerDied","Data":"c10e446d174ff7f2fefceab474dae427bb0ef24c681742cb63474996edc9c576"} Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.855858 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6j28k" Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.855867 4624 scope.go:117] "RemoveContainer" containerID="01e092c91f712ada8774fa79170fe2110be441cc13067e14cea0998dc7867269" Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.886773 4624 scope.go:117] "RemoveContainer" containerID="030b9907faa3fe425860897d6400b2bee2e687e1e56de3ecce39734d7b3cae15" Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.905423 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6j28k"] Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.919886 4624 scope.go:117] "RemoveContainer" containerID="f129c52f10b4c77fa2a0d3fd38fe33cf12f28b8d716614a30e10fda5a1c86c03" Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.920491 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6j28k"] Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.966108 4624 scope.go:117] "RemoveContainer" containerID="01e092c91f712ada8774fa79170fe2110be441cc13067e14cea0998dc7867269" Oct 08 15:02:12 crc kubenswrapper[4624]: E1008 15:02:12.966731 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01e092c91f712ada8774fa79170fe2110be441cc13067e14cea0998dc7867269\": container with ID starting with 01e092c91f712ada8774fa79170fe2110be441cc13067e14cea0998dc7867269 not found: ID does not exist" containerID="01e092c91f712ada8774fa79170fe2110be441cc13067e14cea0998dc7867269" Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.966789 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01e092c91f712ada8774fa79170fe2110be441cc13067e14cea0998dc7867269"} err="failed to get container status \"01e092c91f712ada8774fa79170fe2110be441cc13067e14cea0998dc7867269\": rpc error: code = NotFound desc = could not find container \"01e092c91f712ada8774fa79170fe2110be441cc13067e14cea0998dc7867269\": container with ID starting with 01e092c91f712ada8774fa79170fe2110be441cc13067e14cea0998dc7867269 not found: ID does not exist" Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.966833 4624 scope.go:117] "RemoveContainer" containerID="030b9907faa3fe425860897d6400b2bee2e687e1e56de3ecce39734d7b3cae15" Oct 08 15:02:12 crc kubenswrapper[4624]: E1008 15:02:12.967195 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"030b9907faa3fe425860897d6400b2bee2e687e1e56de3ecce39734d7b3cae15\": container with ID starting with 030b9907faa3fe425860897d6400b2bee2e687e1e56de3ecce39734d7b3cae15 not found: ID does not exist" containerID="030b9907faa3fe425860897d6400b2bee2e687e1e56de3ecce39734d7b3cae15" Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.967221 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"030b9907faa3fe425860897d6400b2bee2e687e1e56de3ecce39734d7b3cae15"} err="failed to get container status \"030b9907faa3fe425860897d6400b2bee2e687e1e56de3ecce39734d7b3cae15\": rpc error: code = NotFound desc = could not find container \"030b9907faa3fe425860897d6400b2bee2e687e1e56de3ecce39734d7b3cae15\": container with ID starting with 030b9907faa3fe425860897d6400b2bee2e687e1e56de3ecce39734d7b3cae15 not found: ID does not exist" Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.967242 4624 scope.go:117] "RemoveContainer" 
containerID="f129c52f10b4c77fa2a0d3fd38fe33cf12f28b8d716614a30e10fda5a1c86c03" Oct 08 15:02:12 crc kubenswrapper[4624]: E1008 15:02:12.967665 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f129c52f10b4c77fa2a0d3fd38fe33cf12f28b8d716614a30e10fda5a1c86c03\": container with ID starting with f129c52f10b4c77fa2a0d3fd38fe33cf12f28b8d716614a30e10fda5a1c86c03 not found: ID does not exist" containerID="f129c52f10b4c77fa2a0d3fd38fe33cf12f28b8d716614a30e10fda5a1c86c03" Oct 08 15:02:12 crc kubenswrapper[4624]: I1008 15:02:12.967716 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f129c52f10b4c77fa2a0d3fd38fe33cf12f28b8d716614a30e10fda5a1c86c03"} err="failed to get container status \"f129c52f10b4c77fa2a0d3fd38fe33cf12f28b8d716614a30e10fda5a1c86c03\": rpc error: code = NotFound desc = could not find container \"f129c52f10b4c77fa2a0d3fd38fe33cf12f28b8d716614a30e10fda5a1c86c03\": container with ID starting with f129c52f10b4c77fa2a0d3fd38fe33cf12f28b8d716614a30e10fda5a1c86c03 not found: ID does not exist" Oct 08 15:02:13 crc kubenswrapper[4624]: I1008 15:02:13.477604 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="855fbfc1-9a15-41b5-84a1-ef19bf0a83ec" path="/var/lib/kubelet/pods/855fbfc1-9a15-41b5-84a1-ef19bf0a83ec/volumes" Oct 08 15:02:18 crc kubenswrapper[4624]: I1008 15:02:18.465317 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927" Oct 08 15:02:18 crc kubenswrapper[4624]: E1008 15:02:18.466009 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:02:29 crc kubenswrapper[4624]: I1008 15:02:29.466340 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927" Oct 08 15:02:29 crc kubenswrapper[4624]: E1008 15:02:29.467196 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:02:40 crc kubenswrapper[4624]: I1008 15:02:40.466599 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927" Oct 08 15:02:40 crc kubenswrapper[4624]: E1008 15:02:40.467537 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:02:52 crc kubenswrapper[4624]: I1008 15:02:52.466763 4624 scope.go:117] "RemoveContainer" 
containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927" Oct 08 15:02:52 crc kubenswrapper[4624]: E1008 15:02:52.467819 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:03:03 crc kubenswrapper[4624]: I1008 15:03:03.465905 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927" Oct 08 15:03:03 crc kubenswrapper[4624]: E1008 15:03:03.466796 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:03:18 crc kubenswrapper[4624]: I1008 15:03:18.465916 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927" Oct 08 15:03:18 crc kubenswrapper[4624]: E1008 15:03:18.466853 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:03:31 crc kubenswrapper[4624]: I1008 15:03:31.466064 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927" Oct 08 15:03:31 crc kubenswrapper[4624]: E1008 15:03:31.467079 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:03:45 crc kubenswrapper[4624]: I1008 15:03:45.477310 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927" Oct 08 15:03:45 crc kubenswrapper[4624]: E1008 15:03:45.478267 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:03:56 crc kubenswrapper[4624]: I1008 15:03:56.466649 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927" Oct 08 15:03:56 crc kubenswrapper[4624]: E1008 15:03:56.467414 4624 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:04:00 crc kubenswrapper[4624]: I1008 15:04:00.661925 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b94zp"] Oct 08 15:04:00 crc kubenswrapper[4624]: E1008 15:04:00.662885 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="855fbfc1-9a15-41b5-84a1-ef19bf0a83ec" containerName="extract-content" Oct 08 15:04:00 crc kubenswrapper[4624]: I1008 15:04:00.662903 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="855fbfc1-9a15-41b5-84a1-ef19bf0a83ec" containerName="extract-content" Oct 08 15:04:00 crc kubenswrapper[4624]: E1008 15:04:00.662952 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="855fbfc1-9a15-41b5-84a1-ef19bf0a83ec" containerName="extract-utilities" Oct 08 15:04:00 crc kubenswrapper[4624]: I1008 15:04:00.662960 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="855fbfc1-9a15-41b5-84a1-ef19bf0a83ec" containerName="extract-utilities" Oct 08 15:04:00 crc kubenswrapper[4624]: E1008 15:04:00.662975 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="855fbfc1-9a15-41b5-84a1-ef19bf0a83ec" containerName="registry-server" Oct 08 15:04:00 crc kubenswrapper[4624]: I1008 15:04:00.662983 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="855fbfc1-9a15-41b5-84a1-ef19bf0a83ec" containerName="registry-server" Oct 08 15:04:00 crc kubenswrapper[4624]: I1008 15:04:00.663267 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="855fbfc1-9a15-41b5-84a1-ef19bf0a83ec" containerName="registry-server" Oct 08 15:04:00 crc kubenswrapper[4624]: I1008 15:04:00.665147 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b94zp" Oct 08 15:04:00 crc kubenswrapper[4624]: I1008 15:04:00.689063 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b94zp"] Oct 08 15:04:00 crc kubenswrapper[4624]: I1008 15:04:00.770343 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgn89\" (UniqueName: \"kubernetes.io/projected/9cc22bbd-218b-4ff9-86ab-e134a12ecece-kube-api-access-vgn89\") pod \"redhat-marketplace-b94zp\" (UID: \"9cc22bbd-218b-4ff9-86ab-e134a12ecece\") " pod="openshift-marketplace/redhat-marketplace-b94zp" Oct 08 15:04:00 crc kubenswrapper[4624]: I1008 15:04:00.770486 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cc22bbd-218b-4ff9-86ab-e134a12ecece-utilities\") pod \"redhat-marketplace-b94zp\" (UID: \"9cc22bbd-218b-4ff9-86ab-e134a12ecece\") " pod="openshift-marketplace/redhat-marketplace-b94zp" Oct 08 15:04:00 crc kubenswrapper[4624]: I1008 15:04:00.770692 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cc22bbd-218b-4ff9-86ab-e134a12ecece-catalog-content\") pod \"redhat-marketplace-b94zp\" (UID: \"9cc22bbd-218b-4ff9-86ab-e134a12ecece\") " pod="openshift-marketplace/redhat-marketplace-b94zp" Oct 08 15:04:00 crc kubenswrapper[4624]: I1008 15:04:00.872277 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cc22bbd-218b-4ff9-86ab-e134a12ecece-catalog-content\") pod \"redhat-marketplace-b94zp\" (UID: \"9cc22bbd-218b-4ff9-86ab-e134a12ecece\") " pod="openshift-marketplace/redhat-marketplace-b94zp" Oct 08 15:04:00 crc kubenswrapper[4624]: I1008 15:04:00.872357 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgn89\" (UniqueName: \"kubernetes.io/projected/9cc22bbd-218b-4ff9-86ab-e134a12ecece-kube-api-access-vgn89\") pod \"redhat-marketplace-b94zp\" (UID: \"9cc22bbd-218b-4ff9-86ab-e134a12ecece\") " pod="openshift-marketplace/redhat-marketplace-b94zp" Oct 08 15:04:00 crc kubenswrapper[4624]: I1008 15:04:00.872409 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cc22bbd-218b-4ff9-86ab-e134a12ecece-utilities\") pod \"redhat-marketplace-b94zp\" (UID: \"9cc22bbd-218b-4ff9-86ab-e134a12ecece\") " pod="openshift-marketplace/redhat-marketplace-b94zp" Oct 08 15:04:00 crc kubenswrapper[4624]: I1008 15:04:00.872847 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cc22bbd-218b-4ff9-86ab-e134a12ecece-catalog-content\") pod \"redhat-marketplace-b94zp\" (UID: \"9cc22bbd-218b-4ff9-86ab-e134a12ecece\") " pod="openshift-marketplace/redhat-marketplace-b94zp" Oct 08 15:04:00 crc kubenswrapper[4624]: I1008 15:04:00.872876 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cc22bbd-218b-4ff9-86ab-e134a12ecece-utilities\") pod \"redhat-marketplace-b94zp\" (UID: \"9cc22bbd-218b-4ff9-86ab-e134a12ecece\") " pod="openshift-marketplace/redhat-marketplace-b94zp" Oct 08 15:04:00 crc kubenswrapper[4624]: I1008 15:04:00.911984 4624 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vgn89\" (UniqueName: \"kubernetes.io/projected/9cc22bbd-218b-4ff9-86ab-e134a12ecece-kube-api-access-vgn89\") pod \"redhat-marketplace-b94zp\" (UID: \"9cc22bbd-218b-4ff9-86ab-e134a12ecece\") " pod="openshift-marketplace/redhat-marketplace-b94zp" Oct 08 15:04:00 crc kubenswrapper[4624]: I1008 15:04:00.993744 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b94zp" Oct 08 15:04:01 crc kubenswrapper[4624]: I1008 15:04:01.499014 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b94zp"] Oct 08 15:04:01 crc kubenswrapper[4624]: I1008 15:04:01.831728 4624 generic.go:334] "Generic (PLEG): container finished" podID="9cc22bbd-218b-4ff9-86ab-e134a12ecece" containerID="03e24d9506af5354032720e369951097ad43b2a843a83717e639f88d3f768bdf" exitCode=0 Oct 08 15:04:01 crc kubenswrapper[4624]: I1008 15:04:01.831834 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b94zp" event={"ID":"9cc22bbd-218b-4ff9-86ab-e134a12ecece","Type":"ContainerDied","Data":"03e24d9506af5354032720e369951097ad43b2a843a83717e639f88d3f768bdf"} Oct 08 15:04:01 crc kubenswrapper[4624]: I1008 15:04:01.832074 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b94zp" event={"ID":"9cc22bbd-218b-4ff9-86ab-e134a12ecece","Type":"ContainerStarted","Data":"7dbcec5b5f9bfc841b1f253a109112fcbfe3c665c60671d76870f4c45da61f66"} Oct 08 15:04:03 crc kubenswrapper[4624]: I1008 15:04:03.852461 4624 generic.go:334] "Generic (PLEG): container finished" podID="9cc22bbd-218b-4ff9-86ab-e134a12ecece" containerID="44ddbdefe28ed640e0ccfb213af75438689a7da1a3fb94fb0b292253a01ec76e" exitCode=0 Oct 08 15:04:03 crc kubenswrapper[4624]: I1008 15:04:03.852557 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b94zp" event={"ID":"9cc22bbd-218b-4ff9-86ab-e134a12ecece","Type":"ContainerDied","Data":"44ddbdefe28ed640e0ccfb213af75438689a7da1a3fb94fb0b292253a01ec76e"} Oct 08 15:04:04 crc kubenswrapper[4624]: I1008 15:04:04.864344 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b94zp" event={"ID":"9cc22bbd-218b-4ff9-86ab-e134a12ecece","Type":"ContainerStarted","Data":"fc130487ee902eec7539de2e0772caf6d85f6edeb524b1c9b10b3974ce2ff5f3"} Oct 08 15:04:04 crc kubenswrapper[4624]: I1008 15:04:04.885005 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b94zp" podStartSLOduration=2.456276205 podStartE2EDuration="4.884985535s" podCreationTimestamp="2025-10-08 15:04:00 +0000 UTC" firstStartedPulling="2025-10-08 15:04:01.833568981 +0000 UTC m=+2466.984504058" lastFinishedPulling="2025-10-08 15:04:04.262278311 +0000 UTC m=+2469.413213388" observedRunningTime="2025-10-08 15:04:04.88184468 +0000 UTC m=+2470.032779757" watchObservedRunningTime="2025-10-08 15:04:04.884985535 +0000 UTC m=+2470.035920612" Oct 08 15:04:10 crc kubenswrapper[4624]: I1008 15:04:10.466614 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927" Oct 08 15:04:10 crc kubenswrapper[4624]: E1008 15:04:10.467322 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Oct 08 15:04:10 crc kubenswrapper[4624]: I1008 15:04:10.466614 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927"
Oct 08 15:04:10 crc kubenswrapper[4624]: E1008 15:04:10.467322 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:04:10 crc kubenswrapper[4624]: I1008 15:04:10.995366 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b94zp"
Oct 08 15:04:10 crc kubenswrapper[4624]: I1008 15:04:10.995706 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b94zp"
Oct 08 15:04:11 crc kubenswrapper[4624]: I1008 15:04:11.042131 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b94zp"
Oct 08 15:04:11 crc kubenswrapper[4624]: I1008 15:04:11.968801 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b94zp"
Oct 08 15:04:12 crc kubenswrapper[4624]: I1008 15:04:12.248012 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b94zp"]
Oct 08 15:04:13 crc kubenswrapper[4624]: I1008 15:04:13.937035 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b94zp" podUID="9cc22bbd-218b-4ff9-86ab-e134a12ecece" containerName="registry-server" containerID="cri-o://fc130487ee902eec7539de2e0772caf6d85f6edeb524b1c9b10b3974ce2ff5f3" gracePeriod=2
Oct 08 15:04:14 crc kubenswrapper[4624]: I1008 15:04:14.376786 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b94zp"
Oct 08 15:04:14 crc kubenswrapper[4624]: I1008 15:04:14.536271 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cc22bbd-218b-4ff9-86ab-e134a12ecece-utilities\") pod \"9cc22bbd-218b-4ff9-86ab-e134a12ecece\" (UID: \"9cc22bbd-218b-4ff9-86ab-e134a12ecece\") "
Oct 08 15:04:14 crc kubenswrapper[4624]: I1008 15:04:14.536381 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgn89\" (UniqueName: \"kubernetes.io/projected/9cc22bbd-218b-4ff9-86ab-e134a12ecece-kube-api-access-vgn89\") pod \"9cc22bbd-218b-4ff9-86ab-e134a12ecece\" (UID: \"9cc22bbd-218b-4ff9-86ab-e134a12ecece\") "
Oct 08 15:04:14 crc kubenswrapper[4624]: I1008 15:04:14.536516 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cc22bbd-218b-4ff9-86ab-e134a12ecece-catalog-content\") pod \"9cc22bbd-218b-4ff9-86ab-e134a12ecece\" (UID: \"9cc22bbd-218b-4ff9-86ab-e134a12ecece\") "
Oct 08 15:04:14 crc kubenswrapper[4624]: I1008 15:04:14.537401 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cc22bbd-218b-4ff9-86ab-e134a12ecece-utilities" (OuterVolumeSpecName: "utilities") pod "9cc22bbd-218b-4ff9-86ab-e134a12ecece" (UID: "9cc22bbd-218b-4ff9-86ab-e134a12ecece"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:04:14 crc kubenswrapper[4624]: I1008 15:04:14.544340 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cc22bbd-218b-4ff9-86ab-e134a12ecece-kube-api-access-vgn89" (OuterVolumeSpecName: "kube-api-access-vgn89") pod "9cc22bbd-218b-4ff9-86ab-e134a12ecece" (UID: "9cc22bbd-218b-4ff9-86ab-e134a12ecece"). InnerVolumeSpecName "kube-api-access-vgn89". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:04:14 crc kubenswrapper[4624]: I1008 15:04:14.550939 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cc22bbd-218b-4ff9-86ab-e134a12ecece-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9cc22bbd-218b-4ff9-86ab-e134a12ecece" (UID: "9cc22bbd-218b-4ff9-86ab-e134a12ecece"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:04:14 crc kubenswrapper[4624]: I1008 15:04:14.639978 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cc22bbd-218b-4ff9-86ab-e134a12ecece-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 15:04:14 crc kubenswrapper[4624]: I1008 15:04:14.640230 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cc22bbd-218b-4ff9-86ab-e134a12ecece-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 15:04:14 crc kubenswrapper[4624]: I1008 15:04:14.640378 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgn89\" (UniqueName: \"kubernetes.io/projected/9cc22bbd-218b-4ff9-86ab-e134a12ecece-kube-api-access-vgn89\") on node \"crc\" DevicePath \"\"" Oct 08 15:04:14 crc kubenswrapper[4624]: I1008 15:04:14.955739 4624 generic.go:334] "Generic (PLEG): container finished" podID="9cc22bbd-218b-4ff9-86ab-e134a12ecece" containerID="fc130487ee902eec7539de2e0772caf6d85f6edeb524b1c9b10b3974ce2ff5f3" exitCode=0 Oct 08 15:04:14 crc kubenswrapper[4624]: I1008 15:04:14.955784 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b94zp" event={"ID":"9cc22bbd-218b-4ff9-86ab-e134a12ecece","Type":"ContainerDied","Data":"fc130487ee902eec7539de2e0772caf6d85f6edeb524b1c9b10b3974ce2ff5f3"} Oct 08 15:04:14 crc kubenswrapper[4624]: I1008 15:04:14.955804 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b94zp" Oct 08 15:04:14 crc kubenswrapper[4624]: I1008 15:04:14.955825 4624 scope.go:117] "RemoveContainer" containerID="fc130487ee902eec7539de2e0772caf6d85f6edeb524b1c9b10b3974ce2ff5f3" Oct 08 15:04:14 crc kubenswrapper[4624]: I1008 15:04:14.955812 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b94zp" event={"ID":"9cc22bbd-218b-4ff9-86ab-e134a12ecece","Type":"ContainerDied","Data":"7dbcec5b5f9bfc841b1f253a109112fcbfe3c665c60671d76870f4c45da61f66"} Oct 08 15:04:14 crc kubenswrapper[4624]: I1008 15:04:14.981004 4624 scope.go:117] "RemoveContainer" containerID="44ddbdefe28ed640e0ccfb213af75438689a7da1a3fb94fb0b292253a01ec76e" Oct 08 15:04:14 crc kubenswrapper[4624]: I1008 15:04:14.991050 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b94zp"] Oct 08 15:04:15 crc kubenswrapper[4624]: I1008 15:04:15.004481 4624 scope.go:117] "RemoveContainer" containerID="03e24d9506af5354032720e369951097ad43b2a843a83717e639f88d3f768bdf" Oct 08 15:04:15 crc kubenswrapper[4624]: I1008 15:04:15.005259 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b94zp"] Oct 08 15:04:15 crc kubenswrapper[4624]: I1008 15:04:15.051303 4624 scope.go:117] "RemoveContainer" containerID="fc130487ee902eec7539de2e0772caf6d85f6edeb524b1c9b10b3974ce2ff5f3" Oct 08 15:04:15 crc kubenswrapper[4624]: E1008 15:04:15.052525 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc130487ee902eec7539de2e0772caf6d85f6edeb524b1c9b10b3974ce2ff5f3\": container with ID starting with fc130487ee902eec7539de2e0772caf6d85f6edeb524b1c9b10b3974ce2ff5f3 not found: ID does not exist" containerID="fc130487ee902eec7539de2e0772caf6d85f6edeb524b1c9b10b3974ce2ff5f3" Oct 08 15:04:15 crc kubenswrapper[4624]: I1008 15:04:15.052570 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc130487ee902eec7539de2e0772caf6d85f6edeb524b1c9b10b3974ce2ff5f3"} err="failed to get container status \"fc130487ee902eec7539de2e0772caf6d85f6edeb524b1c9b10b3974ce2ff5f3\": rpc error: code = NotFound desc = could not find container \"fc130487ee902eec7539de2e0772caf6d85f6edeb524b1c9b10b3974ce2ff5f3\": container with ID starting with fc130487ee902eec7539de2e0772caf6d85f6edeb524b1c9b10b3974ce2ff5f3 not found: ID does not exist" Oct 08 15:04:15 crc kubenswrapper[4624]: I1008 15:04:15.052598 4624 scope.go:117] "RemoveContainer" containerID="44ddbdefe28ed640e0ccfb213af75438689a7da1a3fb94fb0b292253a01ec76e" Oct 08 15:04:15 crc kubenswrapper[4624]: E1008 15:04:15.053010 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44ddbdefe28ed640e0ccfb213af75438689a7da1a3fb94fb0b292253a01ec76e\": container with ID starting with 44ddbdefe28ed640e0ccfb213af75438689a7da1a3fb94fb0b292253a01ec76e not found: ID does not exist" containerID="44ddbdefe28ed640e0ccfb213af75438689a7da1a3fb94fb0b292253a01ec76e" Oct 08 15:04:15 crc kubenswrapper[4624]: I1008 15:04:15.053040 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44ddbdefe28ed640e0ccfb213af75438689a7da1a3fb94fb0b292253a01ec76e"} err="failed to get container status \"44ddbdefe28ed640e0ccfb213af75438689a7da1a3fb94fb0b292253a01ec76e\": rpc error: code = NotFound desc = could not find 
container \"44ddbdefe28ed640e0ccfb213af75438689a7da1a3fb94fb0b292253a01ec76e\": container with ID starting with 44ddbdefe28ed640e0ccfb213af75438689a7da1a3fb94fb0b292253a01ec76e not found: ID does not exist" Oct 08 15:04:15 crc kubenswrapper[4624]: I1008 15:04:15.053062 4624 scope.go:117] "RemoveContainer" containerID="03e24d9506af5354032720e369951097ad43b2a843a83717e639f88d3f768bdf" Oct 08 15:04:15 crc kubenswrapper[4624]: E1008 15:04:15.053331 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03e24d9506af5354032720e369951097ad43b2a843a83717e639f88d3f768bdf\": container with ID starting with 03e24d9506af5354032720e369951097ad43b2a843a83717e639f88d3f768bdf not found: ID does not exist" containerID="03e24d9506af5354032720e369951097ad43b2a843a83717e639f88d3f768bdf" Oct 08 15:04:15 crc kubenswrapper[4624]: I1008 15:04:15.053358 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03e24d9506af5354032720e369951097ad43b2a843a83717e639f88d3f768bdf"} err="failed to get container status \"03e24d9506af5354032720e369951097ad43b2a843a83717e639f88d3f768bdf\": rpc error: code = NotFound desc = could not find container \"03e24d9506af5354032720e369951097ad43b2a843a83717e639f88d3f768bdf\": container with ID starting with 03e24d9506af5354032720e369951097ad43b2a843a83717e639f88d3f768bdf not found: ID does not exist" Oct 08 15:04:15 crc kubenswrapper[4624]: I1008 15:04:15.476085 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cc22bbd-218b-4ff9-86ab-e134a12ecece" path="/var/lib/kubelet/pods/9cc22bbd-218b-4ff9-86ab-e134a12ecece/volumes" Oct 08 15:04:25 crc kubenswrapper[4624]: I1008 15:04:25.472145 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927" Oct 08 15:04:25 crc kubenswrapper[4624]: E1008 15:04:25.473061 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:04:39 crc kubenswrapper[4624]: I1008 15:04:39.466239 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927" Oct 08 15:04:39 crc kubenswrapper[4624]: E1008 15:04:39.467036 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:04:50 crc kubenswrapper[4624]: I1008 15:04:50.466524 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927" Oct 08 15:04:50 crc kubenswrapper[4624]: E1008 15:04:50.467334 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Oct 08 15:04:25 crc kubenswrapper[4624]: I1008 15:04:25.472145 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927"
Oct 08 15:04:25 crc kubenswrapper[4624]: E1008 15:04:25.473061 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:04:39 crc kubenswrapper[4624]: I1008 15:04:39.466239 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927"
Oct 08 15:04:39 crc kubenswrapper[4624]: E1008 15:04:39.467036 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:04:50 crc kubenswrapper[4624]: I1008 15:04:50.466524 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927"
Oct 08 15:04:50 crc kubenswrapper[4624]: E1008 15:04:50.467334 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:05:01 crc kubenswrapper[4624]: I1008 15:05:01.465688 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927"
Oct 08 15:05:01 crc kubenswrapper[4624]: E1008 15:05:01.466694 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:05:15 crc kubenswrapper[4624]: I1008 15:05:15.473055 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927"
Oct 08 15:05:15 crc kubenswrapper[4624]: E1008 15:05:15.473811 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:05:28 crc kubenswrapper[4624]: I1008 15:05:28.465905 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927"
Oct 08 15:05:28 crc kubenswrapper[4624]: E1008 15:05:28.466693 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:05:42 crc kubenswrapper[4624]: I1008 15:05:42.465724 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927"
Oct 08 15:05:42 crc kubenswrapper[4624]: I1008 15:05:42.706827 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"e5d7823785dcb4ecdee0774c306488753aa8247a7bff5383e00bc64045d4d1d2"}
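[Annotation, not journal output]
The identical errors repeating from 15:04:10 to 15:05:28 are not new crashes: each kubelet sync attempt is refused while the CrashLoopBackOff window is open (the err string names the current window, "back-off 5m0s"), and the container is only recreated once it expires, which is the ContainerStarted at 15:05:42 above. A toy model of the delay series, assuming the kubelet's usual parameters (10s initial delay, doubling per failed restart, capped at 5m):

def crashloop_delays(restarts, base=10.0, cap=300.0):
    # delay imposed before the n-th restart attempt: base * 2**n, capped
    delay = base
    for _ in range(restarts):
        yield min(delay, cap)
        delay *= 2

print(list(crashloop_delays(7)))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]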
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.216980 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-ssh-key\") pod \"43842ce6-3b52-41bc-ab12-56e722de00d1\" (UID: \"43842ce6-3b52-41bc-ab12-56e722de00d1\") " Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.217088 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-libvirt-secret-0\") pod \"43842ce6-3b52-41bc-ab12-56e722de00d1\" (UID: \"43842ce6-3b52-41bc-ab12-56e722de00d1\") " Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.217364 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47kjg\" (UniqueName: \"kubernetes.io/projected/43842ce6-3b52-41bc-ab12-56e722de00d1-kube-api-access-47kjg\") pod \"43842ce6-3b52-41bc-ab12-56e722de00d1\" (UID: \"43842ce6-3b52-41bc-ab12-56e722de00d1\") " Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.217599 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-inventory\") pod \"43842ce6-3b52-41bc-ab12-56e722de00d1\" (UID: \"43842ce6-3b52-41bc-ab12-56e722de00d1\") " Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.217629 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-libvirt-combined-ca-bundle\") pod \"43842ce6-3b52-41bc-ab12-56e722de00d1\" (UID: \"43842ce6-3b52-41bc-ab12-56e722de00d1\") " Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.224478 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43842ce6-3b52-41bc-ab12-56e722de00d1-kube-api-access-47kjg" (OuterVolumeSpecName: "kube-api-access-47kjg") pod "43842ce6-3b52-41bc-ab12-56e722de00d1" (UID: "43842ce6-3b52-41bc-ab12-56e722de00d1"). InnerVolumeSpecName "kube-api-access-47kjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.234288 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "43842ce6-3b52-41bc-ab12-56e722de00d1" (UID: "43842ce6-3b52-41bc-ab12-56e722de00d1"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.261956 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "43842ce6-3b52-41bc-ab12-56e722de00d1" (UID: "43842ce6-3b52-41bc-ab12-56e722de00d1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.263782 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-inventory" (OuterVolumeSpecName: "inventory") pod "43842ce6-3b52-41bc-ab12-56e722de00d1" (UID: "43842ce6-3b52-41bc-ab12-56e722de00d1"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.271344 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "43842ce6-3b52-41bc-ab12-56e722de00d1" (UID: "43842ce6-3b52-41bc-ab12-56e722de00d1"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.321189 4624 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.321230 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.321238 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.321247 4624 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/43842ce6-3b52-41bc-ab12-56e722de00d1-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.321255 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47kjg\" (UniqueName: \"kubernetes.io/projected/43842ce6-3b52-41bc-ab12-56e722de00d1-kube-api-access-47kjg\") on node \"crc\" DevicePath \"\"" Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.822182 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" event={"ID":"43842ce6-3b52-41bc-ab12-56e722de00d1","Type":"ContainerDied","Data":"42f1ae9c9bf35c16943e9b15b641f3fa819e7df2f668e2ae037a1b9557547157"} Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.822225 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42f1ae9c9bf35c16943e9b15b641f3fa819e7df2f668e2ae037a1b9557547157" Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.822239 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v" Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.997840 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v"] Oct 08 15:05:55 crc kubenswrapper[4624]: E1008 15:05:55.998314 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43842ce6-3b52-41bc-ab12-56e722de00d1" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.998337 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="43842ce6-3b52-41bc-ab12-56e722de00d1" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 08 15:05:55 crc kubenswrapper[4624]: E1008 15:05:55.998362 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cc22bbd-218b-4ff9-86ab-e134a12ecece" containerName="extract-utilities" Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.998372 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cc22bbd-218b-4ff9-86ab-e134a12ecece" containerName="extract-utilities" Oct 08 15:05:55 crc kubenswrapper[4624]: E1008 15:05:55.998399 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cc22bbd-218b-4ff9-86ab-e134a12ecece" containerName="extract-content" Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.998409 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cc22bbd-218b-4ff9-86ab-e134a12ecece" containerName="extract-content" Oct 08 15:05:55 crc kubenswrapper[4624]: E1008 15:05:55.998446 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cc22bbd-218b-4ff9-86ab-e134a12ecece" containerName="registry-server" Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.998457 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cc22bbd-218b-4ff9-86ab-e134a12ecece" containerName="registry-server" Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.998703 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cc22bbd-218b-4ff9-86ab-e134a12ecece" containerName="registry-server" Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.998734 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="43842ce6-3b52-41bc-ab12-56e722de00d1" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 08 15:05:55 crc kubenswrapper[4624]: I1008 15:05:55.999470 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.002825 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.002852 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v6sxt" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.002905 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.003034 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.003086 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.003215 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.006157 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.017894 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v"] Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.041596 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.041984 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.042169 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.042358 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.042417 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.042708 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn7kz\" (UniqueName: \"kubernetes.io/projected/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-kube-api-access-zn7kz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.042843 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.043007 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.043188 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.148117 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.148178 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.148225 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.148253 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.148284 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.148304 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.148326 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn7kz\" (UniqueName: \"kubernetes.io/projected/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-kube-api-access-zn7kz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.148353 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.148418 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.150453 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.153193 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.153290 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-ssh-key\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.153291 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.153400 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.154259 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.154706 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.154886 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.167177 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn7kz\" (UniqueName: \"kubernetes.io/projected/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-kube-api-access-zn7kz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58j7v\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.320567 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:05:56 crc kubenswrapper[4624]: I1008 15:05:56.847061 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v"] Oct 08 15:05:56 crc kubenswrapper[4624]: W1008 15:05:56.855512 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef49709c_d8c5_4ce4_b83e_eb7b84edbd4b.slice/crio-dee3e046634272ecc957fd9d6c4047d12649a26c7ec376b6d9addc37a50e42d6 WatchSource:0}: Error finding container dee3e046634272ecc957fd9d6c4047d12649a26c7ec376b6d9addc37a50e42d6: Status 404 returned error can't find the container with id dee3e046634272ecc957fd9d6c4047d12649a26c7ec376b6d9addc37a50e42d6 Oct 08 15:05:57 crc kubenswrapper[4624]: I1008 15:05:57.844882 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" event={"ID":"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b","Type":"ContainerStarted","Data":"2d2b619ebbb803dd6c6e617ac78c34bab6935ec7d1dcabb9321f5daf50bdd66a"} Oct 08 15:05:57 crc kubenswrapper[4624]: I1008 15:05:57.846128 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" event={"ID":"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b","Type":"ContainerStarted","Data":"dee3e046634272ecc957fd9d6c4047d12649a26c7ec376b6d9addc37a50e42d6"} Oct 08 15:05:57 crc kubenswrapper[4624]: I1008 15:05:57.873495 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" podStartSLOduration=2.407811318 podStartE2EDuration="2.873477857s" podCreationTimestamp="2025-10-08 15:05:55 +0000 UTC" firstStartedPulling="2025-10-08 15:05:56.858043348 +0000 UTC m=+2582.008978415" lastFinishedPulling="2025-10-08 15:05:57.323709877 +0000 UTC m=+2582.474644954" observedRunningTime="2025-10-08 15:05:57.866759422 +0000 UTC m=+2583.017694519" watchObservedRunningTime="2025-10-08 15:05:57.873477857 +0000 UTC m=+2583.024412934" Oct 08 15:08:00 crc kubenswrapper[4624]: I1008 15:08:00.076624 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:08:00 crc kubenswrapper[4624]: I1008 15:08:00.077353 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:08:29 crc kubenswrapper[4624]: I1008 15:08:29.967521 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fz8br"] Oct 08 15:08:29 crc kubenswrapper[4624]: I1008 15:08:29.970217 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fz8br" Oct 08 15:08:29 crc kubenswrapper[4624]: I1008 15:08:29.985836 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fz8br"] Oct 08 15:08:30 crc kubenswrapper[4624]: I1008 15:08:30.072770 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g26gj\" (UniqueName: \"kubernetes.io/projected/013caaa2-5e72-421e-b1d7-764ca4cbfe93-kube-api-access-g26gj\") pod \"community-operators-fz8br\" (UID: \"013caaa2-5e72-421e-b1d7-764ca4cbfe93\") " pod="openshift-marketplace/community-operators-fz8br" Oct 08 15:08:30 crc kubenswrapper[4624]: I1008 15:08:30.073189 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/013caaa2-5e72-421e-b1d7-764ca4cbfe93-catalog-content\") pod \"community-operators-fz8br\" (UID: \"013caaa2-5e72-421e-b1d7-764ca4cbfe93\") " pod="openshift-marketplace/community-operators-fz8br" Oct 08 15:08:30 crc kubenswrapper[4624]: I1008 15:08:30.073355 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/013caaa2-5e72-421e-b1d7-764ca4cbfe93-utilities\") pod \"community-operators-fz8br\" (UID: \"013caaa2-5e72-421e-b1d7-764ca4cbfe93\") " pod="openshift-marketplace/community-operators-fz8br" Oct 08 15:08:30 crc kubenswrapper[4624]: I1008 15:08:30.076011 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:08:30 crc kubenswrapper[4624]: I1008 15:08:30.076186 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:08:30 crc kubenswrapper[4624]: I1008 15:08:30.174827 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/013caaa2-5e72-421e-b1d7-764ca4cbfe93-catalog-content\") pod \"community-operators-fz8br\" (UID: \"013caaa2-5e72-421e-b1d7-764ca4cbfe93\") " pod="openshift-marketplace/community-operators-fz8br" Oct 08 15:08:30 crc kubenswrapper[4624]: I1008 15:08:30.175478 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/013caaa2-5e72-421e-b1d7-764ca4cbfe93-utilities\") pod \"community-operators-fz8br\" (UID: \"013caaa2-5e72-421e-b1d7-764ca4cbfe93\") " pod="openshift-marketplace/community-operators-fz8br" Oct 08 15:08:30 crc kubenswrapper[4624]: I1008 15:08:30.175650 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/013caaa2-5e72-421e-b1d7-764ca4cbfe93-catalog-content\") pod \"community-operators-fz8br\" (UID: \"013caaa2-5e72-421e-b1d7-764ca4cbfe93\") " pod="openshift-marketplace/community-operators-fz8br" Oct 08 15:08:30 crc kubenswrapper[4624]: I1008 15:08:30.175828 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-g26gj\" (UniqueName: \"kubernetes.io/projected/013caaa2-5e72-421e-b1d7-764ca4cbfe93-kube-api-access-g26gj\") pod \"community-operators-fz8br\" (UID: \"013caaa2-5e72-421e-b1d7-764ca4cbfe93\") " pod="openshift-marketplace/community-operators-fz8br" Oct 08 15:08:30 crc kubenswrapper[4624]: I1008 15:08:30.175957 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/013caaa2-5e72-421e-b1d7-764ca4cbfe93-utilities\") pod \"community-operators-fz8br\" (UID: \"013caaa2-5e72-421e-b1d7-764ca4cbfe93\") " pod="openshift-marketplace/community-operators-fz8br" Oct 08 15:08:30 crc kubenswrapper[4624]: I1008 15:08:30.208322 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g26gj\" (UniqueName: \"kubernetes.io/projected/013caaa2-5e72-421e-b1d7-764ca4cbfe93-kube-api-access-g26gj\") pod \"community-operators-fz8br\" (UID: \"013caaa2-5e72-421e-b1d7-764ca4cbfe93\") " pod="openshift-marketplace/community-operators-fz8br" Oct 08 15:08:30 crc kubenswrapper[4624]: I1008 15:08:30.290546 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fz8br" Oct 08 15:08:30 crc kubenswrapper[4624]: I1008 15:08:30.900693 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fz8br"] Oct 08 15:08:30 crc kubenswrapper[4624]: W1008 15:08:30.907869 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod013caaa2_5e72_421e_b1d7_764ca4cbfe93.slice/crio-fba2e41ff8bae3e22f5f168d6eb6c68f49f197b5a166236feb6d688c8cc78729 WatchSource:0}: Error finding container fba2e41ff8bae3e22f5f168d6eb6c68f49f197b5a166236feb6d688c8cc78729: Status 404 returned error can't find the container with id fba2e41ff8bae3e22f5f168d6eb6c68f49f197b5a166236feb6d688c8cc78729 Oct 08 15:08:31 crc kubenswrapper[4624]: I1008 15:08:31.153756 4624 generic.go:334] "Generic (PLEG): container finished" podID="013caaa2-5e72-421e-b1d7-764ca4cbfe93" containerID="d766b54e4d17ab35211c989d8b13aa74d2efddc969cbcfc503e0f869e6785729" exitCode=0 Oct 08 15:08:31 crc kubenswrapper[4624]: I1008 15:08:31.153824 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fz8br" event={"ID":"013caaa2-5e72-421e-b1d7-764ca4cbfe93","Type":"ContainerDied","Data":"d766b54e4d17ab35211c989d8b13aa74d2efddc969cbcfc503e0f869e6785729"} Oct 08 15:08:31 crc kubenswrapper[4624]: I1008 15:08:31.154089 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fz8br" event={"ID":"013caaa2-5e72-421e-b1d7-764ca4cbfe93","Type":"ContainerStarted","Data":"fba2e41ff8bae3e22f5f168d6eb6c68f49f197b5a166236feb6d688c8cc78729"} Oct 08 15:08:31 crc kubenswrapper[4624]: I1008 15:08:31.155824 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 15:08:32 crc kubenswrapper[4624]: I1008 15:08:32.165132 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fz8br" event={"ID":"013caaa2-5e72-421e-b1d7-764ca4cbfe93","Type":"ContainerStarted","Data":"357113bdc1317aae27963caf792009dcf905eb72cd6497b3d00b1aad09cc875f"} Oct 08 15:08:32 crc kubenswrapper[4624]: I1008 15:08:32.968125 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sl5bj"] Oct 08 15:08:32 crc kubenswrapper[4624]: 
I1008 15:08:32.970416 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sl5bj" Oct 08 15:08:32 crc kubenswrapper[4624]: I1008 15:08:32.991339 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sl5bj"] Oct 08 15:08:33 crc kubenswrapper[4624]: I1008 15:08:33.038428 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f716990a-5ffa-4f5c-89fb-3c7dc6ee554f-catalog-content\") pod \"redhat-operators-sl5bj\" (UID: \"f716990a-5ffa-4f5c-89fb-3c7dc6ee554f\") " pod="openshift-marketplace/redhat-operators-sl5bj" Oct 08 15:08:33 crc kubenswrapper[4624]: I1008 15:08:33.038841 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5hhp\" (UniqueName: \"kubernetes.io/projected/f716990a-5ffa-4f5c-89fb-3c7dc6ee554f-kube-api-access-f5hhp\") pod \"redhat-operators-sl5bj\" (UID: \"f716990a-5ffa-4f5c-89fb-3c7dc6ee554f\") " pod="openshift-marketplace/redhat-operators-sl5bj" Oct 08 15:08:33 crc kubenswrapper[4624]: I1008 15:08:33.038970 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f716990a-5ffa-4f5c-89fb-3c7dc6ee554f-utilities\") pod \"redhat-operators-sl5bj\" (UID: \"f716990a-5ffa-4f5c-89fb-3c7dc6ee554f\") " pod="openshift-marketplace/redhat-operators-sl5bj" Oct 08 15:08:33 crc kubenswrapper[4624]: I1008 15:08:33.141121 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f716990a-5ffa-4f5c-89fb-3c7dc6ee554f-utilities\") pod \"redhat-operators-sl5bj\" (UID: \"f716990a-5ffa-4f5c-89fb-3c7dc6ee554f\") " pod="openshift-marketplace/redhat-operators-sl5bj" Oct 08 15:08:33 crc kubenswrapper[4624]: I1008 15:08:33.141217 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f716990a-5ffa-4f5c-89fb-3c7dc6ee554f-catalog-content\") pod \"redhat-operators-sl5bj\" (UID: \"f716990a-5ffa-4f5c-89fb-3c7dc6ee554f\") " pod="openshift-marketplace/redhat-operators-sl5bj" Oct 08 15:08:33 crc kubenswrapper[4624]: I1008 15:08:33.141274 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5hhp\" (UniqueName: \"kubernetes.io/projected/f716990a-5ffa-4f5c-89fb-3c7dc6ee554f-kube-api-access-f5hhp\") pod \"redhat-operators-sl5bj\" (UID: \"f716990a-5ffa-4f5c-89fb-3c7dc6ee554f\") " pod="openshift-marketplace/redhat-operators-sl5bj" Oct 08 15:08:33 crc kubenswrapper[4624]: I1008 15:08:33.142121 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f716990a-5ffa-4f5c-89fb-3c7dc6ee554f-utilities\") pod \"redhat-operators-sl5bj\" (UID: \"f716990a-5ffa-4f5c-89fb-3c7dc6ee554f\") " pod="openshift-marketplace/redhat-operators-sl5bj" Oct 08 15:08:33 crc kubenswrapper[4624]: I1008 15:08:33.142249 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f716990a-5ffa-4f5c-89fb-3c7dc6ee554f-catalog-content\") pod \"redhat-operators-sl5bj\" (UID: \"f716990a-5ffa-4f5c-89fb-3c7dc6ee554f\") " pod="openshift-marketplace/redhat-operators-sl5bj" Oct 08 15:08:33 crc kubenswrapper[4624]: I1008 15:08:33.186457 4624 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5hhp\" (UniqueName: \"kubernetes.io/projected/f716990a-5ffa-4f5c-89fb-3c7dc6ee554f-kube-api-access-f5hhp\") pod \"redhat-operators-sl5bj\" (UID: \"f716990a-5ffa-4f5c-89fb-3c7dc6ee554f\") " pod="openshift-marketplace/redhat-operators-sl5bj" Oct 08 15:08:33 crc kubenswrapper[4624]: I1008 15:08:33.293857 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sl5bj" Oct 08 15:08:33 crc kubenswrapper[4624]: I1008 15:08:33.822806 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sl5bj"] Oct 08 15:08:33 crc kubenswrapper[4624]: W1008 15:08:33.828327 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf716990a_5ffa_4f5c_89fb_3c7dc6ee554f.slice/crio-b3c770d1bc41b03afc24ed4a4b4d21b4e415d3b29070d1bec7381976fbdbd37a WatchSource:0}: Error finding container b3c770d1bc41b03afc24ed4a4b4d21b4e415d3b29070d1bec7381976fbdbd37a: Status 404 returned error can't find the container with id b3c770d1bc41b03afc24ed4a4b4d21b4e415d3b29070d1bec7381976fbdbd37a Oct 08 15:08:34 crc kubenswrapper[4624]: I1008 15:08:34.222269 4624 generic.go:334] "Generic (PLEG): container finished" podID="013caaa2-5e72-421e-b1d7-764ca4cbfe93" containerID="357113bdc1317aae27963caf792009dcf905eb72cd6497b3d00b1aad09cc875f" exitCode=0 Oct 08 15:08:34 crc kubenswrapper[4624]: I1008 15:08:34.230008 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fz8br" event={"ID":"013caaa2-5e72-421e-b1d7-764ca4cbfe93","Type":"ContainerDied","Data":"357113bdc1317aae27963caf792009dcf905eb72cd6497b3d00b1aad09cc875f"} Oct 08 15:08:34 crc kubenswrapper[4624]: I1008 15:08:34.245013 4624 generic.go:334] "Generic (PLEG): container finished" podID="f716990a-5ffa-4f5c-89fb-3c7dc6ee554f" containerID="78bf52fae81e9868f0e9f3bc4be40105e1886030d243d23cc4c5a0810c164aac" exitCode=0 Oct 08 15:08:34 crc kubenswrapper[4624]: I1008 15:08:34.245057 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sl5bj" event={"ID":"f716990a-5ffa-4f5c-89fb-3c7dc6ee554f","Type":"ContainerDied","Data":"78bf52fae81e9868f0e9f3bc4be40105e1886030d243d23cc4c5a0810c164aac"} Oct 08 15:08:34 crc kubenswrapper[4624]: I1008 15:08:34.245087 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sl5bj" event={"ID":"f716990a-5ffa-4f5c-89fb-3c7dc6ee554f","Type":"ContainerStarted","Data":"b3c770d1bc41b03afc24ed4a4b4d21b4e415d3b29070d1bec7381976fbdbd37a"} Oct 08 15:08:35 crc kubenswrapper[4624]: I1008 15:08:35.255335 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fz8br" event={"ID":"013caaa2-5e72-421e-b1d7-764ca4cbfe93","Type":"ContainerStarted","Data":"de7011f4286e8977722adcee53192ebef772cf09a4e7de50d07712a72128cb61"} Oct 08 15:08:35 crc kubenswrapper[4624]: I1008 15:08:35.277594 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fz8br" podStartSLOduration=2.616628154 podStartE2EDuration="6.277573709s" podCreationTimestamp="2025-10-08 15:08:29 +0000 UTC" firstStartedPulling="2025-10-08 15:08:31.15547353 +0000 UTC m=+2736.306408607" lastFinishedPulling="2025-10-08 15:08:34.816419085 +0000 UTC m=+2739.967354162" observedRunningTime="2025-10-08 15:08:35.270781085 +0000 UTC 
m=+2740.421716162" watchObservedRunningTime="2025-10-08 15:08:35.277573709 +0000 UTC m=+2740.428508786" Oct 08 15:08:36 crc kubenswrapper[4624]: I1008 15:08:36.264812 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sl5bj" event={"ID":"f716990a-5ffa-4f5c-89fb-3c7dc6ee554f","Type":"ContainerStarted","Data":"71302629342f7a3b16ee96706b354deb465706d70546f95bacb69fbff4a0bc3a"} Oct 08 15:08:39 crc kubenswrapper[4624]: I1008 15:08:39.292736 4624 generic.go:334] "Generic (PLEG): container finished" podID="f716990a-5ffa-4f5c-89fb-3c7dc6ee554f" containerID="71302629342f7a3b16ee96706b354deb465706d70546f95bacb69fbff4a0bc3a" exitCode=0 Oct 08 15:08:39 crc kubenswrapper[4624]: I1008 15:08:39.292847 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sl5bj" event={"ID":"f716990a-5ffa-4f5c-89fb-3c7dc6ee554f","Type":"ContainerDied","Data":"71302629342f7a3b16ee96706b354deb465706d70546f95bacb69fbff4a0bc3a"} Oct 08 15:08:40 crc kubenswrapper[4624]: I1008 15:08:40.291976 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fz8br" Oct 08 15:08:40 crc kubenswrapper[4624]: I1008 15:08:40.292410 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fz8br" Oct 08 15:08:40 crc kubenswrapper[4624]: I1008 15:08:40.305777 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sl5bj" event={"ID":"f716990a-5ffa-4f5c-89fb-3c7dc6ee554f","Type":"ContainerStarted","Data":"2c4d8d49fa0a1b3de56dac70fa9d400ca7e483c8aaea8cc611708d5014ebf64a"} Oct 08 15:08:40 crc kubenswrapper[4624]: I1008 15:08:40.327209 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sl5bj" podStartSLOduration=2.756603412 podStartE2EDuration="8.327190121s" podCreationTimestamp="2025-10-08 15:08:32 +0000 UTC" firstStartedPulling="2025-10-08 15:08:34.248817728 +0000 UTC m=+2739.399752805" lastFinishedPulling="2025-10-08 15:08:39.819404447 +0000 UTC m=+2744.970339514" observedRunningTime="2025-10-08 15:08:40.32626427 +0000 UTC m=+2745.477199347" watchObservedRunningTime="2025-10-08 15:08:40.327190121 +0000 UTC m=+2745.478125198" Oct 08 15:08:41 crc kubenswrapper[4624]: I1008 15:08:41.342975 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fz8br" podUID="013caaa2-5e72-421e-b1d7-764ca4cbfe93" containerName="registry-server" probeResult="failure" output=< Oct 08 15:08:41 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 15:08:41 crc kubenswrapper[4624]: > Oct 08 15:08:43 crc kubenswrapper[4624]: I1008 15:08:43.294481 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sl5bj" Oct 08 15:08:43 crc kubenswrapper[4624]: I1008 15:08:43.294889 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sl5bj" Oct 08 15:08:44 crc kubenswrapper[4624]: I1008 15:08:44.340940 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sl5bj" podUID="f716990a-5ffa-4f5c-89fb-3c7dc6ee554f" containerName="registry-server" probeResult="failure" output=< Oct 08 15:08:44 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 15:08:44 crc kubenswrapper[4624]: > Oct 08 15:08:50 
crc kubenswrapper[4624]: I1008 15:08:50.340556 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fz8br" Oct 08 15:08:50 crc kubenswrapper[4624]: I1008 15:08:50.393800 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fz8br" Oct 08 15:08:52 crc kubenswrapper[4624]: I1008 15:08:52.857284 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fz8br"] Oct 08 15:08:52 crc kubenswrapper[4624]: I1008 15:08:52.859113 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fz8br" podUID="013caaa2-5e72-421e-b1d7-764ca4cbfe93" containerName="registry-server" containerID="cri-o://de7011f4286e8977722adcee53192ebef772cf09a4e7de50d07712a72128cb61" gracePeriod=2 Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.306182 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fz8br" Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.434856 4624 generic.go:334] "Generic (PLEG): container finished" podID="013caaa2-5e72-421e-b1d7-764ca4cbfe93" containerID="de7011f4286e8977722adcee53192ebef772cf09a4e7de50d07712a72128cb61" exitCode=0 Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.434897 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fz8br" event={"ID":"013caaa2-5e72-421e-b1d7-764ca4cbfe93","Type":"ContainerDied","Data":"de7011f4286e8977722adcee53192ebef772cf09a4e7de50d07712a72128cb61"} Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.434924 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fz8br" event={"ID":"013caaa2-5e72-421e-b1d7-764ca4cbfe93","Type":"ContainerDied","Data":"fba2e41ff8bae3e22f5f168d6eb6c68f49f197b5a166236feb6d688c8cc78729"} Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.434940 4624 scope.go:117] "RemoveContainer" containerID="de7011f4286e8977722adcee53192ebef772cf09a4e7de50d07712a72128cb61" Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.435074 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fz8br" Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.452772 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/013caaa2-5e72-421e-b1d7-764ca4cbfe93-catalog-content\") pod \"013caaa2-5e72-421e-b1d7-764ca4cbfe93\" (UID: \"013caaa2-5e72-421e-b1d7-764ca4cbfe93\") " Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.452822 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/013caaa2-5e72-421e-b1d7-764ca4cbfe93-utilities\") pod \"013caaa2-5e72-421e-b1d7-764ca4cbfe93\" (UID: \"013caaa2-5e72-421e-b1d7-764ca4cbfe93\") " Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.452989 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g26gj\" (UniqueName: \"kubernetes.io/projected/013caaa2-5e72-421e-b1d7-764ca4cbfe93-kube-api-access-g26gj\") pod \"013caaa2-5e72-421e-b1d7-764ca4cbfe93\" (UID: \"013caaa2-5e72-421e-b1d7-764ca4cbfe93\") " Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.453749 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/013caaa2-5e72-421e-b1d7-764ca4cbfe93-utilities" (OuterVolumeSpecName: "utilities") pod "013caaa2-5e72-421e-b1d7-764ca4cbfe93" (UID: "013caaa2-5e72-421e-b1d7-764ca4cbfe93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.459689 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/013caaa2-5e72-421e-b1d7-764ca4cbfe93-kube-api-access-g26gj" (OuterVolumeSpecName: "kube-api-access-g26gj") pod "013caaa2-5e72-421e-b1d7-764ca4cbfe93" (UID: "013caaa2-5e72-421e-b1d7-764ca4cbfe93"). InnerVolumeSpecName "kube-api-access-g26gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.461821 4624 scope.go:117] "RemoveContainer" containerID="357113bdc1317aae27963caf792009dcf905eb72cd6497b3d00b1aad09cc875f" Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.506463 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/013caaa2-5e72-421e-b1d7-764ca4cbfe93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "013caaa2-5e72-421e-b1d7-764ca4cbfe93" (UID: "013caaa2-5e72-421e-b1d7-764ca4cbfe93"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.518543 4624 scope.go:117] "RemoveContainer" containerID="d766b54e4d17ab35211c989d8b13aa74d2efddc969cbcfc503e0f869e6785729" Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.556102 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g26gj\" (UniqueName: \"kubernetes.io/projected/013caaa2-5e72-421e-b1d7-764ca4cbfe93-kube-api-access-g26gj\") on node \"crc\" DevicePath \"\"" Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.556585 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/013caaa2-5e72-421e-b1d7-764ca4cbfe93-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.556622 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/013caaa2-5e72-421e-b1d7-764ca4cbfe93-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.561299 4624 scope.go:117] "RemoveContainer" containerID="de7011f4286e8977722adcee53192ebef772cf09a4e7de50d07712a72128cb61" Oct 08 15:08:53 crc kubenswrapper[4624]: E1008 15:08:53.561801 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de7011f4286e8977722adcee53192ebef772cf09a4e7de50d07712a72128cb61\": container with ID starting with de7011f4286e8977722adcee53192ebef772cf09a4e7de50d07712a72128cb61 not found: ID does not exist" containerID="de7011f4286e8977722adcee53192ebef772cf09a4e7de50d07712a72128cb61" Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.561834 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de7011f4286e8977722adcee53192ebef772cf09a4e7de50d07712a72128cb61"} err="failed to get container status \"de7011f4286e8977722adcee53192ebef772cf09a4e7de50d07712a72128cb61\": rpc error: code = NotFound desc = could not find container \"de7011f4286e8977722adcee53192ebef772cf09a4e7de50d07712a72128cb61\": container with ID starting with de7011f4286e8977722adcee53192ebef772cf09a4e7de50d07712a72128cb61 not found: ID does not exist" Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.561857 4624 scope.go:117] "RemoveContainer" containerID="357113bdc1317aae27963caf792009dcf905eb72cd6497b3d00b1aad09cc875f" Oct 08 15:08:53 crc kubenswrapper[4624]: E1008 15:08:53.562735 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"357113bdc1317aae27963caf792009dcf905eb72cd6497b3d00b1aad09cc875f\": container with ID starting with 357113bdc1317aae27963caf792009dcf905eb72cd6497b3d00b1aad09cc875f not found: ID does not exist" containerID="357113bdc1317aae27963caf792009dcf905eb72cd6497b3d00b1aad09cc875f" Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.562769 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"357113bdc1317aae27963caf792009dcf905eb72cd6497b3d00b1aad09cc875f"} err="failed to get container status \"357113bdc1317aae27963caf792009dcf905eb72cd6497b3d00b1aad09cc875f\": rpc error: code = NotFound desc = could not find container \"357113bdc1317aae27963caf792009dcf905eb72cd6497b3d00b1aad09cc875f\": container with ID starting with 357113bdc1317aae27963caf792009dcf905eb72cd6497b3d00b1aad09cc875f not found: ID does not exist" Oct 08 15:08:53 crc 
kubenswrapper[4624]: I1008 15:08:53.562798 4624 scope.go:117] "RemoveContainer" containerID="d766b54e4d17ab35211c989d8b13aa74d2efddc969cbcfc503e0f869e6785729" Oct 08 15:08:53 crc kubenswrapper[4624]: E1008 15:08:53.563163 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d766b54e4d17ab35211c989d8b13aa74d2efddc969cbcfc503e0f869e6785729\": container with ID starting with d766b54e4d17ab35211c989d8b13aa74d2efddc969cbcfc503e0f869e6785729 not found: ID does not exist" containerID="d766b54e4d17ab35211c989d8b13aa74d2efddc969cbcfc503e0f869e6785729" Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.563194 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d766b54e4d17ab35211c989d8b13aa74d2efddc969cbcfc503e0f869e6785729"} err="failed to get container status \"d766b54e4d17ab35211c989d8b13aa74d2efddc969cbcfc503e0f869e6785729\": rpc error: code = NotFound desc = could not find container \"d766b54e4d17ab35211c989d8b13aa74d2efddc969cbcfc503e0f869e6785729\": container with ID starting with d766b54e4d17ab35211c989d8b13aa74d2efddc969cbcfc503e0f869e6785729 not found: ID does not exist" Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.769297 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fz8br"] Oct 08 15:08:53 crc kubenswrapper[4624]: I1008 15:08:53.777976 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fz8br"] Oct 08 15:08:54 crc kubenswrapper[4624]: I1008 15:08:54.342490 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sl5bj" podUID="f716990a-5ffa-4f5c-89fb-3c7dc6ee554f" containerName="registry-server" probeResult="failure" output=< Oct 08 15:08:54 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 15:08:54 crc kubenswrapper[4624]: > Oct 08 15:08:55 crc kubenswrapper[4624]: I1008 15:08:55.481901 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="013caaa2-5e72-421e-b1d7-764ca4cbfe93" path="/var/lib/kubelet/pods/013caaa2-5e72-421e-b1d7-764ca4cbfe93/volumes" Oct 08 15:09:00 crc kubenswrapper[4624]: I1008 15:09:00.076282 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:09:00 crc kubenswrapper[4624]: I1008 15:09:00.076761 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:09:00 crc kubenswrapper[4624]: I1008 15:09:00.076802 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 15:09:00 crc kubenswrapper[4624]: I1008 15:09:00.077553 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5d7823785dcb4ecdee0774c306488753aa8247a7bff5383e00bc64045d4d1d2"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Oct 08 15:09:00 crc kubenswrapper[4624]: I1008 15:09:00.077611 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://e5d7823785dcb4ecdee0774c306488753aa8247a7bff5383e00bc64045d4d1d2" gracePeriod=600 Oct 08 15:09:00 crc kubenswrapper[4624]: I1008 15:09:00.521220 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="e5d7823785dcb4ecdee0774c306488753aa8247a7bff5383e00bc64045d4d1d2" exitCode=0 Oct 08 15:09:00 crc kubenswrapper[4624]: I1008 15:09:00.521309 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"e5d7823785dcb4ecdee0774c306488753aa8247a7bff5383e00bc64045d4d1d2"} Oct 08 15:09:00 crc kubenswrapper[4624]: I1008 15:09:00.521580 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189"} Oct 08 15:09:00 crc kubenswrapper[4624]: I1008 15:09:00.521602 4624 scope.go:117] "RemoveContainer" containerID="19a19a076caf855d6e9f38047b1847532855b6cd487e517d0b80c83a4860f927" Oct 08 15:09:03 crc kubenswrapper[4624]: I1008 15:09:03.336551 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sl5bj" Oct 08 15:09:03 crc kubenswrapper[4624]: I1008 15:09:03.390249 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sl5bj" Oct 08 15:09:04 crc kubenswrapper[4624]: I1008 15:09:04.169193 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sl5bj"] Oct 08 15:09:04 crc kubenswrapper[4624]: I1008 15:09:04.561100 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sl5bj" podUID="f716990a-5ffa-4f5c-89fb-3c7dc6ee554f" containerName="registry-server" containerID="cri-o://2c4d8d49fa0a1b3de56dac70fa9d400ca7e483c8aaea8cc611708d5014ebf64a" gracePeriod=2 Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.095471 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sl5bj" Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.219931 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5hhp\" (UniqueName: \"kubernetes.io/projected/f716990a-5ffa-4f5c-89fb-3c7dc6ee554f-kube-api-access-f5hhp\") pod \"f716990a-5ffa-4f5c-89fb-3c7dc6ee554f\" (UID: \"f716990a-5ffa-4f5c-89fb-3c7dc6ee554f\") " Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.220433 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f716990a-5ffa-4f5c-89fb-3c7dc6ee554f-catalog-content\") pod \"f716990a-5ffa-4f5c-89fb-3c7dc6ee554f\" (UID: \"f716990a-5ffa-4f5c-89fb-3c7dc6ee554f\") " Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.220542 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f716990a-5ffa-4f5c-89fb-3c7dc6ee554f-utilities\") pod \"f716990a-5ffa-4f5c-89fb-3c7dc6ee554f\" (UID: \"f716990a-5ffa-4f5c-89fb-3c7dc6ee554f\") " Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.221328 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f716990a-5ffa-4f5c-89fb-3c7dc6ee554f-utilities" (OuterVolumeSpecName: "utilities") pod "f716990a-5ffa-4f5c-89fb-3c7dc6ee554f" (UID: "f716990a-5ffa-4f5c-89fb-3c7dc6ee554f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.225558 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f716990a-5ffa-4f5c-89fb-3c7dc6ee554f-kube-api-access-f5hhp" (OuterVolumeSpecName: "kube-api-access-f5hhp") pod "f716990a-5ffa-4f5c-89fb-3c7dc6ee554f" (UID: "f716990a-5ffa-4f5c-89fb-3c7dc6ee554f"). InnerVolumeSpecName "kube-api-access-f5hhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.304695 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f716990a-5ffa-4f5c-89fb-3c7dc6ee554f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f716990a-5ffa-4f5c-89fb-3c7dc6ee554f" (UID: "f716990a-5ffa-4f5c-89fb-3c7dc6ee554f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.322318 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f716990a-5ffa-4f5c-89fb-3c7dc6ee554f-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.322352 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5hhp\" (UniqueName: \"kubernetes.io/projected/f716990a-5ffa-4f5c-89fb-3c7dc6ee554f-kube-api-access-f5hhp\") on node \"crc\" DevicePath \"\"" Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.322366 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f716990a-5ffa-4f5c-89fb-3c7dc6ee554f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.570557 4624 generic.go:334] "Generic (PLEG): container finished" podID="f716990a-5ffa-4f5c-89fb-3c7dc6ee554f" containerID="2c4d8d49fa0a1b3de56dac70fa9d400ca7e483c8aaea8cc611708d5014ebf64a" exitCode=0 Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.570603 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sl5bj" event={"ID":"f716990a-5ffa-4f5c-89fb-3c7dc6ee554f","Type":"ContainerDied","Data":"2c4d8d49fa0a1b3de56dac70fa9d400ca7e483c8aaea8cc611708d5014ebf64a"} Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.570620 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sl5bj" Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.570704 4624 scope.go:117] "RemoveContainer" containerID="2c4d8d49fa0a1b3de56dac70fa9d400ca7e483c8aaea8cc611708d5014ebf64a" Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.570632 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sl5bj" event={"ID":"f716990a-5ffa-4f5c-89fb-3c7dc6ee554f","Type":"ContainerDied","Data":"b3c770d1bc41b03afc24ed4a4b4d21b4e415d3b29070d1bec7381976fbdbd37a"} Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.595665 4624 scope.go:117] "RemoveContainer" containerID="71302629342f7a3b16ee96706b354deb465706d70546f95bacb69fbff4a0bc3a" Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.596696 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sl5bj"] Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.604226 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sl5bj"] Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.616690 4624 scope.go:117] "RemoveContainer" containerID="78bf52fae81e9868f0e9f3bc4be40105e1886030d243d23cc4c5a0810c164aac" Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.667393 4624 scope.go:117] "RemoveContainer" containerID="2c4d8d49fa0a1b3de56dac70fa9d400ca7e483c8aaea8cc611708d5014ebf64a" Oct 08 15:09:05 crc kubenswrapper[4624]: E1008 15:09:05.670692 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c4d8d49fa0a1b3de56dac70fa9d400ca7e483c8aaea8cc611708d5014ebf64a\": container with ID starting with 2c4d8d49fa0a1b3de56dac70fa9d400ca7e483c8aaea8cc611708d5014ebf64a not found: ID does not exist" containerID="2c4d8d49fa0a1b3de56dac70fa9d400ca7e483c8aaea8cc611708d5014ebf64a" Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.670723 4624 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c4d8d49fa0a1b3de56dac70fa9d400ca7e483c8aaea8cc611708d5014ebf64a"} err="failed to get container status \"2c4d8d49fa0a1b3de56dac70fa9d400ca7e483c8aaea8cc611708d5014ebf64a\": rpc error: code = NotFound desc = could not find container \"2c4d8d49fa0a1b3de56dac70fa9d400ca7e483c8aaea8cc611708d5014ebf64a\": container with ID starting with 2c4d8d49fa0a1b3de56dac70fa9d400ca7e483c8aaea8cc611708d5014ebf64a not found: ID does not exist" Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.670745 4624 scope.go:117] "RemoveContainer" containerID="71302629342f7a3b16ee96706b354deb465706d70546f95bacb69fbff4a0bc3a" Oct 08 15:09:05 crc kubenswrapper[4624]: E1008 15:09:05.671026 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71302629342f7a3b16ee96706b354deb465706d70546f95bacb69fbff4a0bc3a\": container with ID starting with 71302629342f7a3b16ee96706b354deb465706d70546f95bacb69fbff4a0bc3a not found: ID does not exist" containerID="71302629342f7a3b16ee96706b354deb465706d70546f95bacb69fbff4a0bc3a" Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.671043 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71302629342f7a3b16ee96706b354deb465706d70546f95bacb69fbff4a0bc3a"} err="failed to get container status \"71302629342f7a3b16ee96706b354deb465706d70546f95bacb69fbff4a0bc3a\": rpc error: code = NotFound desc = could not find container \"71302629342f7a3b16ee96706b354deb465706d70546f95bacb69fbff4a0bc3a\": container with ID starting with 71302629342f7a3b16ee96706b354deb465706d70546f95bacb69fbff4a0bc3a not found: ID does not exist" Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.671055 4624 scope.go:117] "RemoveContainer" containerID="78bf52fae81e9868f0e9f3bc4be40105e1886030d243d23cc4c5a0810c164aac" Oct 08 15:09:05 crc kubenswrapper[4624]: E1008 15:09:05.672362 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78bf52fae81e9868f0e9f3bc4be40105e1886030d243d23cc4c5a0810c164aac\": container with ID starting with 78bf52fae81e9868f0e9f3bc4be40105e1886030d243d23cc4c5a0810c164aac not found: ID does not exist" containerID="78bf52fae81e9868f0e9f3bc4be40105e1886030d243d23cc4c5a0810c164aac" Oct 08 15:09:05 crc kubenswrapper[4624]: I1008 15:09:05.672412 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78bf52fae81e9868f0e9f3bc4be40105e1886030d243d23cc4c5a0810c164aac"} err="failed to get container status \"78bf52fae81e9868f0e9f3bc4be40105e1886030d243d23cc4c5a0810c164aac\": rpc error: code = NotFound desc = could not find container \"78bf52fae81e9868f0e9f3bc4be40105e1886030d243d23cc4c5a0810c164aac\": container with ID starting with 78bf52fae81e9868f0e9f3bc4be40105e1886030d243d23cc4c5a0810c164aac not found: ID does not exist" Oct 08 15:09:07 crc kubenswrapper[4624]: I1008 15:09:07.478243 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f716990a-5ffa-4f5c-89fb-3c7dc6ee554f" path="/var/lib/kubelet/pods/f716990a-5ffa-4f5c-89fb-3c7dc6ee554f/volumes" Oct 08 15:09:28 crc kubenswrapper[4624]: I1008 15:09:28.786028 4624 generic.go:334] "Generic (PLEG): container finished" podID="ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b" containerID="2d2b619ebbb803dd6c6e617ac78c34bab6935ec7d1dcabb9321f5daf50bdd66a" exitCode=0 Oct 08 15:09:28 crc kubenswrapper[4624]: I1008 
15:09:28.786124 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" event={"ID":"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b","Type":"ContainerDied","Data":"2d2b619ebbb803dd6c6e617ac78c34bab6935ec7d1dcabb9321f5daf50bdd66a"} Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.185736 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.339847 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-cell1-compute-config-0\") pod \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.339935 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-extra-config-0\") pod \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.339978 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-migration-ssh-key-0\") pod \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.340068 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-ssh-key\") pod \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.340164 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-migration-ssh-key-1\") pod \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.340189 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-cell1-compute-config-1\") pod \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.340229 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn7kz\" (UniqueName: \"kubernetes.io/projected/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-kube-api-access-zn7kz\") pod \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.340287 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-combined-ca-bundle\") pod \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.340358 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-inventory\") pod \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\" (UID: \"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b\") " Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.386231 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b" (UID: "ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.386279 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-kube-api-access-zn7kz" (OuterVolumeSpecName: "kube-api-access-zn7kz") pod "ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b" (UID: "ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b"). InnerVolumeSpecName "kube-api-access-zn7kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.391180 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-inventory" (OuterVolumeSpecName: "inventory") pod "ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b" (UID: "ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.397413 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b" (UID: "ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.402348 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b" (UID: "ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.426603 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b" (UID: "ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.429935 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b" (UID: "ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.435873 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b" (UID: "ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.440424 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b" (UID: "ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.442307 4624 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.442336 4624 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.442349 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn7kz\" (UniqueName: \"kubernetes.io/projected/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-kube-api-access-zn7kz\") on node \"crc\" DevicePath \"\"" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.442361 4624 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.442372 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.442382 4624 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.442393 4624 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.442404 4624 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.442414 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.804416 4624 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" event={"ID":"ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b","Type":"ContainerDied","Data":"dee3e046634272ecc957fd9d6c4047d12649a26c7ec376b6d9addc37a50e42d6"} Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.804452 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dee3e046634272ecc957fd9d6c4047d12649a26c7ec376b6d9addc37a50e42d6" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.804513 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58j7v" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.915571 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq"] Oct 08 15:09:30 crc kubenswrapper[4624]: E1008 15:09:30.916113 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013caaa2-5e72-421e-b1d7-764ca4cbfe93" containerName="registry-server" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.916140 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="013caaa2-5e72-421e-b1d7-764ca4cbfe93" containerName="registry-server" Oct 08 15:09:30 crc kubenswrapper[4624]: E1008 15:09:30.916166 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f716990a-5ffa-4f5c-89fb-3c7dc6ee554f" containerName="extract-utilities" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.916175 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f716990a-5ffa-4f5c-89fb-3c7dc6ee554f" containerName="extract-utilities" Oct 08 15:09:30 crc kubenswrapper[4624]: E1008 15:09:30.916194 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f716990a-5ffa-4f5c-89fb-3c7dc6ee554f" containerName="extract-content" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.916203 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f716990a-5ffa-4f5c-89fb-3c7dc6ee554f" containerName="extract-content" Oct 08 15:09:30 crc kubenswrapper[4624]: E1008 15:09:30.916236 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013caaa2-5e72-421e-b1d7-764ca4cbfe93" containerName="extract-utilities" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.916244 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="013caaa2-5e72-421e-b1d7-764ca4cbfe93" containerName="extract-utilities" Oct 08 15:09:30 crc kubenswrapper[4624]: E1008 15:09:30.916256 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013caaa2-5e72-421e-b1d7-764ca4cbfe93" containerName="extract-content" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.916264 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="013caaa2-5e72-421e-b1d7-764ca4cbfe93" containerName="extract-content" Oct 08 15:09:30 crc kubenswrapper[4624]: E1008 15:09:30.916284 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.916292 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 08 15:09:30 crc kubenswrapper[4624]: E1008 15:09:30.916310 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f716990a-5ffa-4f5c-89fb-3c7dc6ee554f" containerName="registry-server" Oct 08 15:09:30 crc 
kubenswrapper[4624]: I1008 15:09:30.916318 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f716990a-5ffa-4f5c-89fb-3c7dc6ee554f" containerName="registry-server" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.916551 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.916574 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f716990a-5ffa-4f5c-89fb-3c7dc6ee554f" containerName="registry-server" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.916597 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="013caaa2-5e72-421e-b1d7-764ca4cbfe93" containerName="registry-server" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.917565 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.925943 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq"] Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.927129 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.927416 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.927543 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.928061 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 08 15:09:30 crc kubenswrapper[4624]: I1008 15:09:30.928168 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-v6sxt" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.054605 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lccvq\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.054702 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z78nf\" (UniqueName: \"kubernetes.io/projected/9974fb02-7840-402c-af16-db4392849c73-kube-api-access-z78nf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lccvq\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.054778 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lccvq\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.054845 4624 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lccvq\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.054900 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lccvq\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.054928 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lccvq\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.055012 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lccvq\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.156783 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lccvq\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.156837 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lccvq\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.156921 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lccvq\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.156972 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lccvq\" (UID: 
\"9974fb02-7840-402c-af16-db4392849c73\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.156999 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z78nf\" (UniqueName: \"kubernetes.io/projected/9974fb02-7840-402c-af16-db4392849c73-kube-api-access-z78nf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lccvq\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.157094 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lccvq\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.157166 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lccvq\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.161827 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lccvq\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.162936 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lccvq\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.163428 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lccvq\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.165058 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lccvq\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.171996 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-lccvq\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.173425 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lccvq\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.176888 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z78nf\" (UniqueName: \"kubernetes.io/projected/9974fb02-7840-402c-af16-db4392849c73-kube-api-access-z78nf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lccvq\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.233981 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.788274 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq"] Oct 08 15:09:31 crc kubenswrapper[4624]: I1008 15:09:31.816357 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" event={"ID":"9974fb02-7840-402c-af16-db4392849c73","Type":"ContainerStarted","Data":"7a41883b336bb69f3d03bb198ed2e85edd3d698a7f7b32e4fea90cd2d593d49e"} Oct 08 15:09:33 crc kubenswrapper[4624]: I1008 15:09:33.836312 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" event={"ID":"9974fb02-7840-402c-af16-db4392849c73","Type":"ContainerStarted","Data":"c86f118e170a3441c84ef5376827aea4b57cde66bdc8701f8379259ecac530cb"} Oct 08 15:09:33 crc kubenswrapper[4624]: I1008 15:09:33.858509 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" podStartSLOduration=2.976032583 podStartE2EDuration="3.858464205s" podCreationTimestamp="2025-10-08 15:09:30 +0000 UTC" firstStartedPulling="2025-10-08 15:09:31.80056449 +0000 UTC m=+2796.951499567" lastFinishedPulling="2025-10-08 15:09:32.682996112 +0000 UTC m=+2797.833931189" observedRunningTime="2025-10-08 15:09:33.855344389 +0000 UTC m=+2799.006279466" watchObservedRunningTime="2025-10-08 15:09:33.858464205 +0000 UTC m=+2799.009399292" Oct 08 15:11:00 crc kubenswrapper[4624]: I1008 15:11:00.077001 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:11:00 crc kubenswrapper[4624]: I1008 15:11:00.077716 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:11:30 crc kubenswrapper[4624]: I1008 15:11:30.076454 
4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:11:30 crc kubenswrapper[4624]: I1008 15:11:30.076993 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:12:00 crc kubenswrapper[4624]: I1008 15:12:00.076346 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:12:00 crc kubenswrapper[4624]: I1008 15:12:00.077282 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:12:00 crc kubenswrapper[4624]: I1008 15:12:00.077356 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 15:12:00 crc kubenswrapper[4624]: I1008 15:12:00.078534 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 15:12:00 crc kubenswrapper[4624]: I1008 15:12:00.078597 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" gracePeriod=600 Oct 08 15:12:00 crc kubenswrapper[4624]: E1008 15:12:00.214108 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:12:01 crc kubenswrapper[4624]: I1008 15:12:01.064218 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" exitCode=0 Oct 08 15:12:01 crc kubenswrapper[4624]: I1008 15:12:01.064395 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189"} Oct 
08 15:12:01 crc kubenswrapper[4624]: I1008 15:12:01.064584 4624 scope.go:117] "RemoveContainer" containerID="e5d7823785dcb4ecdee0774c306488753aa8247a7bff5383e00bc64045d4d1d2" Oct 08 15:12:01 crc kubenswrapper[4624]: I1008 15:12:01.065328 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:12:01 crc kubenswrapper[4624]: E1008 15:12:01.065658 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:12:13 crc kubenswrapper[4624]: I1008 15:12:13.466499 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:12:13 crc kubenswrapper[4624]: E1008 15:12:13.467380 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:12:24 crc kubenswrapper[4624]: I1008 15:12:24.466209 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:12:24 crc kubenswrapper[4624]: E1008 15:12:24.467000 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:12:26 crc kubenswrapper[4624]: I1008 15:12:26.919109 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-25486"] Oct 08 15:12:26 crc kubenswrapper[4624]: I1008 15:12:26.921553 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-25486" Oct 08 15:12:26 crc kubenswrapper[4624]: I1008 15:12:26.936441 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-25486"] Oct 08 15:12:27 crc kubenswrapper[4624]: I1008 15:12:27.094979 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39584fe7-daa4-4393-9380-05c9a00d45d6-utilities\") pod \"certified-operators-25486\" (UID: \"39584fe7-daa4-4393-9380-05c9a00d45d6\") " pod="openshift-marketplace/certified-operators-25486" Oct 08 15:12:27 crc kubenswrapper[4624]: I1008 15:12:27.095718 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbpr4\" (UniqueName: \"kubernetes.io/projected/39584fe7-daa4-4393-9380-05c9a00d45d6-kube-api-access-sbpr4\") pod \"certified-operators-25486\" (UID: \"39584fe7-daa4-4393-9380-05c9a00d45d6\") " pod="openshift-marketplace/certified-operators-25486" Oct 08 15:12:27 crc kubenswrapper[4624]: I1008 15:12:27.096166 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39584fe7-daa4-4393-9380-05c9a00d45d6-catalog-content\") pod \"certified-operators-25486\" (UID: \"39584fe7-daa4-4393-9380-05c9a00d45d6\") " pod="openshift-marketplace/certified-operators-25486" Oct 08 15:12:27 crc kubenswrapper[4624]: I1008 15:12:27.198615 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39584fe7-daa4-4393-9380-05c9a00d45d6-catalog-content\") pod \"certified-operators-25486\" (UID: \"39584fe7-daa4-4393-9380-05c9a00d45d6\") " pod="openshift-marketplace/certified-operators-25486" Oct 08 15:12:27 crc kubenswrapper[4624]: I1008 15:12:27.198757 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39584fe7-daa4-4393-9380-05c9a00d45d6-utilities\") pod \"certified-operators-25486\" (UID: \"39584fe7-daa4-4393-9380-05c9a00d45d6\") " pod="openshift-marketplace/certified-operators-25486" Oct 08 15:12:27 crc kubenswrapper[4624]: I1008 15:12:27.198819 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbpr4\" (UniqueName: \"kubernetes.io/projected/39584fe7-daa4-4393-9380-05c9a00d45d6-kube-api-access-sbpr4\") pod \"certified-operators-25486\" (UID: \"39584fe7-daa4-4393-9380-05c9a00d45d6\") " pod="openshift-marketplace/certified-operators-25486" Oct 08 15:12:27 crc kubenswrapper[4624]: I1008 15:12:27.199090 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39584fe7-daa4-4393-9380-05c9a00d45d6-catalog-content\") pod \"certified-operators-25486\" (UID: \"39584fe7-daa4-4393-9380-05c9a00d45d6\") " pod="openshift-marketplace/certified-operators-25486" Oct 08 15:12:27 crc kubenswrapper[4624]: I1008 15:12:27.199382 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39584fe7-daa4-4393-9380-05c9a00d45d6-utilities\") pod \"certified-operators-25486\" (UID: \"39584fe7-daa4-4393-9380-05c9a00d45d6\") " pod="openshift-marketplace/certified-operators-25486" Oct 08 15:12:27 crc kubenswrapper[4624]: I1008 15:12:27.241607 4624 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sbpr4\" (UniqueName: \"kubernetes.io/projected/39584fe7-daa4-4393-9380-05c9a00d45d6-kube-api-access-sbpr4\") pod \"certified-operators-25486\" (UID: \"39584fe7-daa4-4393-9380-05c9a00d45d6\") " pod="openshift-marketplace/certified-operators-25486" Oct 08 15:12:27 crc kubenswrapper[4624]: I1008 15:12:27.245214 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-25486" Oct 08 15:12:27 crc kubenswrapper[4624]: I1008 15:12:27.799416 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-25486"] Oct 08 15:12:28 crc kubenswrapper[4624]: I1008 15:12:28.303952 4624 generic.go:334] "Generic (PLEG): container finished" podID="39584fe7-daa4-4393-9380-05c9a00d45d6" containerID="f5953297123ccc542150ae67506d579173e5a35b46ccb4d7e772071560b4def9" exitCode=0 Oct 08 15:12:28 crc kubenswrapper[4624]: I1008 15:12:28.304160 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-25486" event={"ID":"39584fe7-daa4-4393-9380-05c9a00d45d6","Type":"ContainerDied","Data":"f5953297123ccc542150ae67506d579173e5a35b46ccb4d7e772071560b4def9"} Oct 08 15:12:28 crc kubenswrapper[4624]: I1008 15:12:28.304299 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-25486" event={"ID":"39584fe7-daa4-4393-9380-05c9a00d45d6","Type":"ContainerStarted","Data":"0a3e6a066a56ce5c1a974446f0111cddf4c136ed9da2b2368c9c284e95a23c34"} Oct 08 15:12:29 crc kubenswrapper[4624]: I1008 15:12:29.314965 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-25486" event={"ID":"39584fe7-daa4-4393-9380-05c9a00d45d6","Type":"ContainerStarted","Data":"4d13a6ce66d05790a42f6ea7b233ac4afe9654ccbb1d2fc96b846ec304cc845a"} Oct 08 15:12:30 crc kubenswrapper[4624]: I1008 15:12:30.325057 4624 generic.go:334] "Generic (PLEG): container finished" podID="39584fe7-daa4-4393-9380-05c9a00d45d6" containerID="4d13a6ce66d05790a42f6ea7b233ac4afe9654ccbb1d2fc96b846ec304cc845a" exitCode=0 Oct 08 15:12:30 crc kubenswrapper[4624]: I1008 15:12:30.325116 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-25486" event={"ID":"39584fe7-daa4-4393-9380-05c9a00d45d6","Type":"ContainerDied","Data":"4d13a6ce66d05790a42f6ea7b233ac4afe9654ccbb1d2fc96b846ec304cc845a"} Oct 08 15:12:31 crc kubenswrapper[4624]: I1008 15:12:31.334707 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-25486" event={"ID":"39584fe7-daa4-4393-9380-05c9a00d45d6","Type":"ContainerStarted","Data":"79f2e5d914a723874c7f5bd6830d6b28271e4e9c2939505f246bc723f22593ad"} Oct 08 15:12:31 crc kubenswrapper[4624]: I1008 15:12:31.373952 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-25486" podStartSLOduration=2.826199885 podStartE2EDuration="5.373925625s" podCreationTimestamp="2025-10-08 15:12:26 +0000 UTC" firstStartedPulling="2025-10-08 15:12:28.305539571 +0000 UTC m=+2973.456474648" lastFinishedPulling="2025-10-08 15:12:30.853265311 +0000 UTC m=+2976.004200388" observedRunningTime="2025-10-08 15:12:31.361783782 +0000 UTC m=+2976.512718879" watchObservedRunningTime="2025-10-08 15:12:31.373925625 +0000 UTC m=+2976.524860702" Oct 08 15:12:32 crc kubenswrapper[4624]: I1008 15:12:32.343998 4624 generic.go:334] "Generic (PLEG): container finished" 
podID="9974fb02-7840-402c-af16-db4392849c73" containerID="c86f118e170a3441c84ef5376827aea4b57cde66bdc8701f8379259ecac530cb" exitCode=0 Oct 08 15:12:32 crc kubenswrapper[4624]: I1008 15:12:32.344076 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" event={"ID":"9974fb02-7840-402c-af16-db4392849c73","Type":"ContainerDied","Data":"c86f118e170a3441c84ef5376827aea4b57cde66bdc8701f8379259ecac530cb"} Oct 08 15:12:33 crc kubenswrapper[4624]: I1008 15:12:33.840909 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.039468 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ceilometer-compute-config-data-1\") pod \"9974fb02-7840-402c-af16-db4392849c73\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.039562 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ceilometer-compute-config-data-0\") pod \"9974fb02-7840-402c-af16-db4392849c73\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.039591 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ceilometer-compute-config-data-2\") pod \"9974fb02-7840-402c-af16-db4392849c73\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.039726 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-inventory\") pod \"9974fb02-7840-402c-af16-db4392849c73\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.039753 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ssh-key\") pod \"9974fb02-7840-402c-af16-db4392849c73\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.039838 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-telemetry-combined-ca-bundle\") pod \"9974fb02-7840-402c-af16-db4392849c73\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.039928 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z78nf\" (UniqueName: \"kubernetes.io/projected/9974fb02-7840-402c-af16-db4392849c73-kube-api-access-z78nf\") pod \"9974fb02-7840-402c-af16-db4392849c73\" (UID: \"9974fb02-7840-402c-af16-db4392849c73\") " Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.062585 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9974fb02-7840-402c-af16-db4392849c73-kube-api-access-z78nf" (OuterVolumeSpecName: "kube-api-access-z78nf") pod 
"9974fb02-7840-402c-af16-db4392849c73" (UID: "9974fb02-7840-402c-af16-db4392849c73"). InnerVolumeSpecName "kube-api-access-z78nf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.062618 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "9974fb02-7840-402c-af16-db4392849c73" (UID: "9974fb02-7840-402c-af16-db4392849c73"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.075022 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-inventory" (OuterVolumeSpecName: "inventory") pod "9974fb02-7840-402c-af16-db4392849c73" (UID: "9974fb02-7840-402c-af16-db4392849c73"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.075408 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "9974fb02-7840-402c-af16-db4392849c73" (UID: "9974fb02-7840-402c-af16-db4392849c73"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.077886 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "9974fb02-7840-402c-af16-db4392849c73" (UID: "9974fb02-7840-402c-af16-db4392849c73"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.079487 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9974fb02-7840-402c-af16-db4392849c73" (UID: "9974fb02-7840-402c-af16-db4392849c73"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.083860 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "9974fb02-7840-402c-af16-db4392849c73" (UID: "9974fb02-7840-402c-af16-db4392849c73"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.142256 4624 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.142319 4624 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.142331 4624 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.142344 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.142361 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.142372 4624 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9974fb02-7840-402c-af16-db4392849c73-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.142385 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z78nf\" (UniqueName: \"kubernetes.io/projected/9974fb02-7840-402c-af16-db4392849c73-kube-api-access-z78nf\") on node \"crc\" DevicePath \"\"" Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.367864 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" event={"ID":"9974fb02-7840-402c-af16-db4392849c73","Type":"ContainerDied","Data":"7a41883b336bb69f3d03bb198ed2e85edd3d698a7f7b32e4fea90cd2d593d49e"} Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.367917 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a41883b336bb69f3d03bb198ed2e85edd3d698a7f7b32e4fea90cd2d593d49e" Oct 08 15:12:34 crc kubenswrapper[4624]: I1008 15:12:34.368438 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lccvq" Oct 08 15:12:37 crc kubenswrapper[4624]: I1008 15:12:37.245832 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-25486" Oct 08 15:12:37 crc kubenswrapper[4624]: I1008 15:12:37.246169 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-25486" Oct 08 15:12:37 crc kubenswrapper[4624]: I1008 15:12:37.295070 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-25486" Oct 08 15:12:37 crc kubenswrapper[4624]: I1008 15:12:37.445771 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-25486" Oct 08 15:12:37 crc kubenswrapper[4624]: I1008 15:12:37.536405 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-25486"] Oct 08 15:12:39 crc kubenswrapper[4624]: I1008 15:12:39.413130 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-25486" podUID="39584fe7-daa4-4393-9380-05c9a00d45d6" containerName="registry-server" containerID="cri-o://79f2e5d914a723874c7f5bd6830d6b28271e4e9c2939505f246bc723f22593ad" gracePeriod=2 Oct 08 15:12:39 crc kubenswrapper[4624]: I1008 15:12:39.467060 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:12:39 crc kubenswrapper[4624]: E1008 15:12:39.467536 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:12:40 crc kubenswrapper[4624]: I1008 15:12:40.424079 4624 generic.go:334] "Generic (PLEG): container finished" podID="39584fe7-daa4-4393-9380-05c9a00d45d6" containerID="79f2e5d914a723874c7f5bd6830d6b28271e4e9c2939505f246bc723f22593ad" exitCode=0 Oct 08 15:12:40 crc kubenswrapper[4624]: I1008 15:12:40.424475 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-25486" event={"ID":"39584fe7-daa4-4393-9380-05c9a00d45d6","Type":"ContainerDied","Data":"79f2e5d914a723874c7f5bd6830d6b28271e4e9c2939505f246bc723f22593ad"} Oct 08 15:12:40 crc kubenswrapper[4624]: I1008 15:12:40.424508 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-25486" event={"ID":"39584fe7-daa4-4393-9380-05c9a00d45d6","Type":"ContainerDied","Data":"0a3e6a066a56ce5c1a974446f0111cddf4c136ed9da2b2368c9c284e95a23c34"} Oct 08 15:12:40 crc kubenswrapper[4624]: I1008 15:12:40.424522 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a3e6a066a56ce5c1a974446f0111cddf4c136ed9da2b2368c9c284e95a23c34" Oct 08 15:12:40 crc kubenswrapper[4624]: I1008 15:12:40.462894 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-25486" Oct 08 15:12:40 crc kubenswrapper[4624]: I1008 15:12:40.467020 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39584fe7-daa4-4393-9380-05c9a00d45d6-catalog-content\") pod \"39584fe7-daa4-4393-9380-05c9a00d45d6\" (UID: \"39584fe7-daa4-4393-9380-05c9a00d45d6\") " Oct 08 15:12:40 crc kubenswrapper[4624]: I1008 15:12:40.467097 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39584fe7-daa4-4393-9380-05c9a00d45d6-utilities\") pod \"39584fe7-daa4-4393-9380-05c9a00d45d6\" (UID: \"39584fe7-daa4-4393-9380-05c9a00d45d6\") " Oct 08 15:12:40 crc kubenswrapper[4624]: I1008 15:12:40.467212 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbpr4\" (UniqueName: \"kubernetes.io/projected/39584fe7-daa4-4393-9380-05c9a00d45d6-kube-api-access-sbpr4\") pod \"39584fe7-daa4-4393-9380-05c9a00d45d6\" (UID: \"39584fe7-daa4-4393-9380-05c9a00d45d6\") " Oct 08 15:12:40 crc kubenswrapper[4624]: I1008 15:12:40.468349 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39584fe7-daa4-4393-9380-05c9a00d45d6-utilities" (OuterVolumeSpecName: "utilities") pod "39584fe7-daa4-4393-9380-05c9a00d45d6" (UID: "39584fe7-daa4-4393-9380-05c9a00d45d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:12:40 crc kubenswrapper[4624]: I1008 15:12:40.476285 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39584fe7-daa4-4393-9380-05c9a00d45d6-kube-api-access-sbpr4" (OuterVolumeSpecName: "kube-api-access-sbpr4") pod "39584fe7-daa4-4393-9380-05c9a00d45d6" (UID: "39584fe7-daa4-4393-9380-05c9a00d45d6"). InnerVolumeSpecName "kube-api-access-sbpr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:12:40 crc kubenswrapper[4624]: I1008 15:12:40.565222 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39584fe7-daa4-4393-9380-05c9a00d45d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39584fe7-daa4-4393-9380-05c9a00d45d6" (UID: "39584fe7-daa4-4393-9380-05c9a00d45d6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:12:40 crc kubenswrapper[4624]: I1008 15:12:40.570360 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbpr4\" (UniqueName: \"kubernetes.io/projected/39584fe7-daa4-4393-9380-05c9a00d45d6-kube-api-access-sbpr4\") on node \"crc\" DevicePath \"\"" Oct 08 15:12:40 crc kubenswrapper[4624]: I1008 15:12:40.570389 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39584fe7-daa4-4393-9380-05c9a00d45d6-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 15:12:40 crc kubenswrapper[4624]: I1008 15:12:40.570399 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39584fe7-daa4-4393-9380-05c9a00d45d6-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 15:12:41 crc kubenswrapper[4624]: I1008 15:12:41.431124 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-25486" Oct 08 15:12:41 crc kubenswrapper[4624]: I1008 15:12:41.490362 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-25486"] Oct 08 15:12:41 crc kubenswrapper[4624]: I1008 15:12:41.490406 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-25486"] Oct 08 15:12:43 crc kubenswrapper[4624]: I1008 15:12:43.476323 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39584fe7-daa4-4393-9380-05c9a00d45d6" path="/var/lib/kubelet/pods/39584fe7-daa4-4393-9380-05c9a00d45d6/volumes" Oct 08 15:12:54 crc kubenswrapper[4624]: I1008 15:12:54.466939 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:12:54 crc kubenswrapper[4624]: E1008 15:12:54.468043 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:13:08 crc kubenswrapper[4624]: I1008 15:13:08.466597 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:13:08 crc kubenswrapper[4624]: E1008 15:13:08.467363 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:13:22 crc kubenswrapper[4624]: I1008 15:13:22.465733 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:13:22 crc kubenswrapper[4624]: E1008 15:13:22.466697 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:13:36 crc kubenswrapper[4624]: I1008 15:13:36.465607 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:13:36 crc kubenswrapper[4624]: E1008 15:13:36.467607 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.281034 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Oct 08 
15:13:43 crc kubenswrapper[4624]: E1008 15:13:43.282003 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39584fe7-daa4-4393-9380-05c9a00d45d6" containerName="registry-server" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.282077 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="39584fe7-daa4-4393-9380-05c9a00d45d6" containerName="registry-server" Oct 08 15:13:43 crc kubenswrapper[4624]: E1008 15:13:43.282094 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39584fe7-daa4-4393-9380-05c9a00d45d6" containerName="extract-content" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.282100 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="39584fe7-daa4-4393-9380-05c9a00d45d6" containerName="extract-content" Oct 08 15:13:43 crc kubenswrapper[4624]: E1008 15:13:43.282132 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39584fe7-daa4-4393-9380-05c9a00d45d6" containerName="extract-utilities" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.282139 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="39584fe7-daa4-4393-9380-05c9a00d45d6" containerName="extract-utilities" Oct 08 15:13:43 crc kubenswrapper[4624]: E1008 15:13:43.282151 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9974fb02-7840-402c-af16-db4392849c73" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.282161 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9974fb02-7840-402c-af16-db4392849c73" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.282342 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9974fb02-7840-402c-af16-db4392849c73" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.282357 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="39584fe7-daa4-4393-9380-05c9a00d45d6" containerName="registry-server" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.283019 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.288336 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.288442 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.288347 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.288782 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-24wbd" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.296595 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.379876 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/391ff9a0-631c-4520-a9f9-80fda37e32a1-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.380007 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/391ff9a0-631c-4520-a9f9-80fda37e32a1-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.380128 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.380156 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/391ff9a0-631c-4520-a9f9-80fda37e32a1-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.380215 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/391ff9a0-631c-4520-a9f9-80fda37e32a1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.380265 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/391ff9a0-631c-4520-a9f9-80fda37e32a1-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: 
\"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.380380 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/391ff9a0-631c-4520-a9f9-80fda37e32a1-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.380417 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xmm6\" (UniqueName: \"kubernetes.io/projected/391ff9a0-631c-4520-a9f9-80fda37e32a1-kube-api-access-7xmm6\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.380481 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/391ff9a0-631c-4520-a9f9-80fda37e32a1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.482291 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/391ff9a0-631c-4520-a9f9-80fda37e32a1-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.482341 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xmm6\" (UniqueName: \"kubernetes.io/projected/391ff9a0-631c-4520-a9f9-80fda37e32a1-kube-api-access-7xmm6\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.482372 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/391ff9a0-631c-4520-a9f9-80fda37e32a1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.482445 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/391ff9a0-631c-4520-a9f9-80fda37e32a1-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.482467 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/391ff9a0-631c-4520-a9f9-80fda37e32a1-openstack-config-secret\") pod 
\"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.482516 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.482533 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/391ff9a0-631c-4520-a9f9-80fda37e32a1-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.482559 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/391ff9a0-631c-4520-a9f9-80fda37e32a1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.482589 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/391ff9a0-631c-4520-a9f9-80fda37e32a1-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.483236 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.483279 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/391ff9a0-631c-4520-a9f9-80fda37e32a1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.483308 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/391ff9a0-631c-4520-a9f9-80fda37e32a1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.483384 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/391ff9a0-631c-4520-a9f9-80fda37e32a1-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " 
pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.484921 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/391ff9a0-631c-4520-a9f9-80fda37e32a1-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.488855 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/391ff9a0-631c-4520-a9f9-80fda37e32a1-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.488970 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/391ff9a0-631c-4520-a9f9-80fda37e32a1-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.489806 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/391ff9a0-631c-4520-a9f9-80fda37e32a1-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.509308 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.511884 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xmm6\" (UniqueName: \"kubernetes.io/projected/391ff9a0-631c-4520-a9f9-80fda37e32a1-kube-api-access-7xmm6\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:43 crc kubenswrapper[4624]: I1008 15:13:43.617777 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 15:13:44 crc kubenswrapper[4624]: I1008 15:13:44.157579 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Oct 08 15:13:44 crc kubenswrapper[4624]: W1008 15:13:44.161028 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod391ff9a0_631c_4520_a9f9_80fda37e32a1.slice/crio-f75e0444f74b3a6065a12886f962d9f85a45cc9e9f858e773a142837b2798721 WatchSource:0}: Error finding container f75e0444f74b3a6065a12886f962d9f85a45cc9e9f858e773a142837b2798721: Status 404 returned error can't find the container with id f75e0444f74b3a6065a12886f962d9f85a45cc9e9f858e773a142837b2798721 Oct 08 15:13:44 crc kubenswrapper[4624]: I1008 15:13:44.165023 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 15:13:45 crc kubenswrapper[4624]: I1008 15:13:45.043919 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"391ff9a0-631c-4520-a9f9-80fda37e32a1","Type":"ContainerStarted","Data":"f75e0444f74b3a6065a12886f962d9f85a45cc9e9f858e773a142837b2798721"} Oct 08 15:13:47 crc kubenswrapper[4624]: I1008 15:13:47.465749 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:13:47 crc kubenswrapper[4624]: E1008 15:13:47.466308 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:14:01 crc kubenswrapper[4624]: I1008 15:14:01.466169 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:14:01 crc kubenswrapper[4624]: E1008 15:14:01.467170 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:14:13 crc kubenswrapper[4624]: I1008 15:14:13.468462 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:14:13 crc kubenswrapper[4624]: E1008 15:14:13.469852 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:14:15 crc kubenswrapper[4624]: E1008 15:14:15.254059 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.103:5001/podified-antelope-centos9/openstack-tempest-all:b78cfc68a577b1553523c8a70a34e297" Oct 08 15:14:15 crc kubenswrapper[4624]: E1008 15:14:15.254399 4624 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/podified-antelope-centos9/openstack-tempest-all:b78cfc68a577b1553523c8a70a34e297" Oct 08 15:14:15 crc kubenswrapper[4624]: E1008 15:14:15.258834 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:38.102.83.103:5001/podified-antelope-centos9/openstack-tempest-all:b78cfc68a577b1553523c8a70a34e297,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xmm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod tempest-tests-tempest-s00-multi-thread-testing_openstack(391ff9a0-631c-4520-a9f9-80fda37e32a1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 15:14:15 crc kubenswrapper[4624]: E1008 15:14:15.260201 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="391ff9a0-631c-4520-a9f9-80fda37e32a1" Oct 08 15:14:15 crc kubenswrapper[4624]: E1008 15:14:15.392464 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.103:5001/podified-antelope-centos9/openstack-tempest-all:b78cfc68a577b1553523c8a70a34e297\\\"\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="391ff9a0-631c-4520-a9f9-80fda37e32a1" Oct 08 15:14:26 crc kubenswrapper[4624]: I1008 15:14:26.465952 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:14:26 crc kubenswrapper[4624]: E1008 15:14:26.466840 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:14:30 crc kubenswrapper[4624]: I1008 15:14:30.547474 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Oct 08 15:14:32 crc kubenswrapper[4624]: I1008 15:14:32.534294 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"391ff9a0-631c-4520-a9f9-80fda37e32a1","Type":"ContainerStarted","Data":"e048aa89321a6e21c35dc2130d6e111e87a36e0b755e45d8501fe740b925ad39"} Oct 08 15:14:32 crc kubenswrapper[4624]: I1008 15:14:32.563593 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podStartSLOduration=4.18519138 podStartE2EDuration="50.563569916s" podCreationTimestamp="2025-10-08 15:13:42 +0000 UTC" firstStartedPulling="2025-10-08 15:13:44.164580575 +0000 UTC m=+3049.315515652" lastFinishedPulling="2025-10-08 15:14:30.542959111 +0000 UTC m=+3095.693894188" observedRunningTime="2025-10-08 15:14:32.554410594 +0000 UTC m=+3097.705345671" watchObservedRunningTime="2025-10-08 15:14:32.563569916 +0000 UTC m=+3097.714504993" Oct 08 15:14:38 crc kubenswrapper[4624]: I1008 15:14:38.417342 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jxzt7"] Oct 08 15:14:38 crc kubenswrapper[4624]: I1008 15:14:38.420616 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxzt7"
Oct 08 15:14:38 crc kubenswrapper[4624]: I1008 15:14:38.434108 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxzt7"]
Oct 08 15:14:38 crc kubenswrapper[4624]: I1008 15:14:38.562772 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5787c9d1-8039-4d6e-9136-77341d8480af-utilities\") pod \"redhat-marketplace-jxzt7\" (UID: \"5787c9d1-8039-4d6e-9136-77341d8480af\") " pod="openshift-marketplace/redhat-marketplace-jxzt7"
Oct 08 15:14:38 crc kubenswrapper[4624]: I1008 15:14:38.562987 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5787c9d1-8039-4d6e-9136-77341d8480af-catalog-content\") pod \"redhat-marketplace-jxzt7\" (UID: \"5787c9d1-8039-4d6e-9136-77341d8480af\") " pod="openshift-marketplace/redhat-marketplace-jxzt7"
Oct 08 15:14:38 crc kubenswrapper[4624]: I1008 15:14:38.563010 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwgvr\" (UniqueName: \"kubernetes.io/projected/5787c9d1-8039-4d6e-9136-77341d8480af-kube-api-access-fwgvr\") pod \"redhat-marketplace-jxzt7\" (UID: \"5787c9d1-8039-4d6e-9136-77341d8480af\") " pod="openshift-marketplace/redhat-marketplace-jxzt7"
Oct 08 15:14:38 crc kubenswrapper[4624]: I1008 15:14:38.667024 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5787c9d1-8039-4d6e-9136-77341d8480af-utilities\") pod \"redhat-marketplace-jxzt7\" (UID: \"5787c9d1-8039-4d6e-9136-77341d8480af\") " pod="openshift-marketplace/redhat-marketplace-jxzt7"
Oct 08 15:14:38 crc kubenswrapper[4624]: I1008 15:14:38.667443 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5787c9d1-8039-4d6e-9136-77341d8480af-catalog-content\") pod \"redhat-marketplace-jxzt7\" (UID: \"5787c9d1-8039-4d6e-9136-77341d8480af\") " pod="openshift-marketplace/redhat-marketplace-jxzt7"
Oct 08 15:14:38 crc kubenswrapper[4624]: I1008 15:14:38.667469 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwgvr\" (UniqueName: \"kubernetes.io/projected/5787c9d1-8039-4d6e-9136-77341d8480af-kube-api-access-fwgvr\") pod \"redhat-marketplace-jxzt7\" (UID: \"5787c9d1-8039-4d6e-9136-77341d8480af\") " pod="openshift-marketplace/redhat-marketplace-jxzt7"
Oct 08 15:14:38 crc kubenswrapper[4624]: I1008 15:14:38.667540 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5787c9d1-8039-4d6e-9136-77341d8480af-utilities\") pod \"redhat-marketplace-jxzt7\" (UID: \"5787c9d1-8039-4d6e-9136-77341d8480af\") " pod="openshift-marketplace/redhat-marketplace-jxzt7"
Oct 08 15:14:38 crc kubenswrapper[4624]: I1008 15:14:38.667806 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5787c9d1-8039-4d6e-9136-77341d8480af-catalog-content\") pod \"redhat-marketplace-jxzt7\" (UID: \"5787c9d1-8039-4d6e-9136-77341d8480af\") " pod="openshift-marketplace/redhat-marketplace-jxzt7"
Oct 08 15:14:38 crc kubenswrapper[4624]: I1008 15:14:38.689771 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwgvr\" (UniqueName: \"kubernetes.io/projected/5787c9d1-8039-4d6e-9136-77341d8480af-kube-api-access-fwgvr\") pod \"redhat-marketplace-jxzt7\" (UID: \"5787c9d1-8039-4d6e-9136-77341d8480af\") " pod="openshift-marketplace/redhat-marketplace-jxzt7"
Oct 08 15:14:38 crc kubenswrapper[4624]: I1008 15:14:38.740322 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxzt7"
Oct 08 15:14:39 crc kubenswrapper[4624]: I1008 15:14:39.272975 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxzt7"]
Oct 08 15:14:39 crc kubenswrapper[4624]: I1008 15:14:39.465737 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189"
Oct 08 15:14:39 crc kubenswrapper[4624]: E1008 15:14:39.466903 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:14:39 crc kubenswrapper[4624]: I1008 15:14:39.596125 4624 generic.go:334] "Generic (PLEG): container finished" podID="5787c9d1-8039-4d6e-9136-77341d8480af" containerID="a4c977c3cc8cdd96abd993da876801a9aaf6d62a292f24d9df812e82d77e13de" exitCode=0
Oct 08 15:14:39 crc kubenswrapper[4624]: I1008 15:14:39.596175 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxzt7" event={"ID":"5787c9d1-8039-4d6e-9136-77341d8480af","Type":"ContainerDied","Data":"a4c977c3cc8cdd96abd993da876801a9aaf6d62a292f24d9df812e82d77e13de"}
Oct 08 15:14:39 crc kubenswrapper[4624]: I1008 15:14:39.596498 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxzt7" event={"ID":"5787c9d1-8039-4d6e-9136-77341d8480af","Type":"ContainerStarted","Data":"4294e7c47d56c254aaa06934cbe77b985e357516a84c6e05989f346709d96888"}
Oct 08 15:14:41 crc kubenswrapper[4624]: I1008 15:14:41.617347 4624 generic.go:334] "Generic (PLEG): container finished" podID="5787c9d1-8039-4d6e-9136-77341d8480af" containerID="bd742f4bc677205dd05ce1241d9751b098b2e4518281d9ca695f953ef32c1ed8" exitCode=0
Oct 08 15:14:41 crc kubenswrapper[4624]: I1008 15:14:41.617475 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxzt7" event={"ID":"5787c9d1-8039-4d6e-9136-77341d8480af","Type":"ContainerDied","Data":"bd742f4bc677205dd05ce1241d9751b098b2e4518281d9ca695f953ef32c1ed8"}
Oct 08 15:14:42 crc kubenswrapper[4624]: I1008 15:14:42.636597 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxzt7" event={"ID":"5787c9d1-8039-4d6e-9136-77341d8480af","Type":"ContainerStarted","Data":"6e19ce62c98ec615a81c9982743e41aeb2b981e73f511d5514b50cdcedb7a7f8"}
Oct 08 15:14:42 crc kubenswrapper[4624]: I1008 15:14:42.657435 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jxzt7" podStartSLOduration=1.850697898 podStartE2EDuration="4.657418062s" podCreationTimestamp="2025-10-08 15:14:38 +0000 UTC" firstStartedPulling="2025-10-08 15:14:39.598249983 +0000 UTC m=+3104.749185060" lastFinishedPulling="2025-10-08 15:14:42.404970147 +0000 UTC m=+3107.555905224" observedRunningTime="2025-10-08 15:14:42.65620607 +0000 UTC m=+3107.807141147" watchObservedRunningTime="2025-10-08 15:14:42.657418062 +0000 UTC m=+3107.808353139"
Oct 08 15:14:48 crc kubenswrapper[4624]: I1008 15:14:48.741448 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jxzt7"
Oct 08 15:14:48 crc kubenswrapper[4624]: I1008 15:14:48.742066 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jxzt7"
Oct 08 15:14:48 crc kubenswrapper[4624]: I1008 15:14:48.789453 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jxzt7"
Oct 08 15:14:49 crc kubenswrapper[4624]: I1008 15:14:49.736722 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jxzt7"
Oct 08 15:14:49 crc kubenswrapper[4624]: I1008 15:14:49.776512 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxzt7"]
Oct 08 15:14:51 crc kubenswrapper[4624]: I1008 15:14:51.710041 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jxzt7" podUID="5787c9d1-8039-4d6e-9136-77341d8480af" containerName="registry-server" containerID="cri-o://6e19ce62c98ec615a81c9982743e41aeb2b981e73f511d5514b50cdcedb7a7f8" gracePeriod=2
Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.166420 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxzt7"
Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.226522 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5787c9d1-8039-4d6e-9136-77341d8480af-utilities\") pod \"5787c9d1-8039-4d6e-9136-77341d8480af\" (UID: \"5787c9d1-8039-4d6e-9136-77341d8480af\") "
Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.226778 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5787c9d1-8039-4d6e-9136-77341d8480af-catalog-content\") pod \"5787c9d1-8039-4d6e-9136-77341d8480af\" (UID: \"5787c9d1-8039-4d6e-9136-77341d8480af\") "
Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.226858 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwgvr\" (UniqueName: \"kubernetes.io/projected/5787c9d1-8039-4d6e-9136-77341d8480af-kube-api-access-fwgvr\") pod \"5787c9d1-8039-4d6e-9136-77341d8480af\" (UID: \"5787c9d1-8039-4d6e-9136-77341d8480af\") "
Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.227417 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5787c9d1-8039-4d6e-9136-77341d8480af-utilities" (OuterVolumeSpecName: "utilities") pod "5787c9d1-8039-4d6e-9136-77341d8480af" (UID: "5787c9d1-8039-4d6e-9136-77341d8480af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.234360 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5787c9d1-8039-4d6e-9136-77341d8480af-kube-api-access-fwgvr" (OuterVolumeSpecName: "kube-api-access-fwgvr") pod "5787c9d1-8039-4d6e-9136-77341d8480af" (UID: "5787c9d1-8039-4d6e-9136-77341d8480af"). InnerVolumeSpecName "kube-api-access-fwgvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.239148 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5787c9d1-8039-4d6e-9136-77341d8480af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5787c9d1-8039-4d6e-9136-77341d8480af" (UID: "5787c9d1-8039-4d6e-9136-77341d8480af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.328798 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwgvr\" (UniqueName: \"kubernetes.io/projected/5787c9d1-8039-4d6e-9136-77341d8480af-kube-api-access-fwgvr\") on node \"crc\" DevicePath \"\"" Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.328836 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5787c9d1-8039-4d6e-9136-77341d8480af-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.328846 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5787c9d1-8039-4d6e-9136-77341d8480af-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.721068 4624 generic.go:334] "Generic (PLEG): container finished" podID="5787c9d1-8039-4d6e-9136-77341d8480af" containerID="6e19ce62c98ec615a81c9982743e41aeb2b981e73f511d5514b50cdcedb7a7f8" exitCode=0 Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.721123 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxzt7" event={"ID":"5787c9d1-8039-4d6e-9136-77341d8480af","Type":"ContainerDied","Data":"6e19ce62c98ec615a81c9982743e41aeb2b981e73f511d5514b50cdcedb7a7f8"} Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.721150 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxzt7" event={"ID":"5787c9d1-8039-4d6e-9136-77341d8480af","Type":"ContainerDied","Data":"4294e7c47d56c254aaa06934cbe77b985e357516a84c6e05989f346709d96888"} Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.721167 4624 scope.go:117] "RemoveContainer" containerID="6e19ce62c98ec615a81c9982743e41aeb2b981e73f511d5514b50cdcedb7a7f8" Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.721190 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxzt7" Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.757174 4624 scope.go:117] "RemoveContainer" containerID="bd742f4bc677205dd05ce1241d9751b098b2e4518281d9ca695f953ef32c1ed8" Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.767849 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxzt7"] Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.776760 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxzt7"] Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.803515 4624 scope.go:117] "RemoveContainer" containerID="a4c977c3cc8cdd96abd993da876801a9aaf6d62a292f24d9df812e82d77e13de" Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.836957 4624 scope.go:117] "RemoveContainer" containerID="6e19ce62c98ec615a81c9982743e41aeb2b981e73f511d5514b50cdcedb7a7f8" Oct 08 15:14:52 crc kubenswrapper[4624]: E1008 15:14:52.837338 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e19ce62c98ec615a81c9982743e41aeb2b981e73f511d5514b50cdcedb7a7f8\": container with ID starting with 6e19ce62c98ec615a81c9982743e41aeb2b981e73f511d5514b50cdcedb7a7f8 not found: ID does not exist" containerID="6e19ce62c98ec615a81c9982743e41aeb2b981e73f511d5514b50cdcedb7a7f8" Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.837377 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e19ce62c98ec615a81c9982743e41aeb2b981e73f511d5514b50cdcedb7a7f8"} err="failed to get container status \"6e19ce62c98ec615a81c9982743e41aeb2b981e73f511d5514b50cdcedb7a7f8\": rpc error: code = NotFound desc = could not find container \"6e19ce62c98ec615a81c9982743e41aeb2b981e73f511d5514b50cdcedb7a7f8\": container with ID starting with 6e19ce62c98ec615a81c9982743e41aeb2b981e73f511d5514b50cdcedb7a7f8 not found: ID does not exist" Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.837402 4624 scope.go:117] "RemoveContainer" containerID="bd742f4bc677205dd05ce1241d9751b098b2e4518281d9ca695f953ef32c1ed8" Oct 08 15:14:52 crc kubenswrapper[4624]: E1008 15:14:52.838852 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd742f4bc677205dd05ce1241d9751b098b2e4518281d9ca695f953ef32c1ed8\": container with ID starting with bd742f4bc677205dd05ce1241d9751b098b2e4518281d9ca695f953ef32c1ed8 not found: ID does not exist" containerID="bd742f4bc677205dd05ce1241d9751b098b2e4518281d9ca695f953ef32c1ed8" Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.838882 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd742f4bc677205dd05ce1241d9751b098b2e4518281d9ca695f953ef32c1ed8"} err="failed to get container status \"bd742f4bc677205dd05ce1241d9751b098b2e4518281d9ca695f953ef32c1ed8\": rpc error: code = NotFound desc = could not find container \"bd742f4bc677205dd05ce1241d9751b098b2e4518281d9ca695f953ef32c1ed8\": container with ID starting with bd742f4bc677205dd05ce1241d9751b098b2e4518281d9ca695f953ef32c1ed8 not found: ID does not exist" Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.838901 4624 scope.go:117] "RemoveContainer" containerID="a4c977c3cc8cdd96abd993da876801a9aaf6d62a292f24d9df812e82d77e13de" Oct 08 15:14:52 crc kubenswrapper[4624]: E1008 15:14:52.839155 4624 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a4c977c3cc8cdd96abd993da876801a9aaf6d62a292f24d9df812e82d77e13de\": container with ID starting with a4c977c3cc8cdd96abd993da876801a9aaf6d62a292f24d9df812e82d77e13de not found: ID does not exist" containerID="a4c977c3cc8cdd96abd993da876801a9aaf6d62a292f24d9df812e82d77e13de" Oct 08 15:14:52 crc kubenswrapper[4624]: I1008 15:14:52.839189 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4c977c3cc8cdd96abd993da876801a9aaf6d62a292f24d9df812e82d77e13de"} err="failed to get container status \"a4c977c3cc8cdd96abd993da876801a9aaf6d62a292f24d9df812e82d77e13de\": rpc error: code = NotFound desc = could not find container \"a4c977c3cc8cdd96abd993da876801a9aaf6d62a292f24d9df812e82d77e13de\": container with ID starting with a4c977c3cc8cdd96abd993da876801a9aaf6d62a292f24d9df812e82d77e13de not found: ID does not exist" Oct 08 15:14:53 crc kubenswrapper[4624]: I1008 15:14:53.477463 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5787c9d1-8039-4d6e-9136-77341d8480af" path="/var/lib/kubelet/pods/5787c9d1-8039-4d6e-9136-77341d8480af/volumes" Oct 08 15:14:54 crc kubenswrapper[4624]: I1008 15:14:54.465608 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:14:54 crc kubenswrapper[4624]: E1008 15:14:54.465880 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:15:00 crc kubenswrapper[4624]: I1008 15:15:00.164282 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332275-ckwsc"] Oct 08 15:15:00 crc kubenswrapper[4624]: E1008 15:15:00.165335 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5787c9d1-8039-4d6e-9136-77341d8480af" containerName="extract-content" Oct 08 15:15:00 crc kubenswrapper[4624]: I1008 15:15:00.165351 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="5787c9d1-8039-4d6e-9136-77341d8480af" containerName="extract-content" Oct 08 15:15:00 crc kubenswrapper[4624]: E1008 15:15:00.165379 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5787c9d1-8039-4d6e-9136-77341d8480af" containerName="extract-utilities" Oct 08 15:15:00 crc kubenswrapper[4624]: I1008 15:15:00.165385 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="5787c9d1-8039-4d6e-9136-77341d8480af" containerName="extract-utilities" Oct 08 15:15:00 crc kubenswrapper[4624]: E1008 15:15:00.165409 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5787c9d1-8039-4d6e-9136-77341d8480af" containerName="registry-server" Oct 08 15:15:00 crc kubenswrapper[4624]: I1008 15:15:00.165444 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="5787c9d1-8039-4d6e-9136-77341d8480af" containerName="registry-server" Oct 08 15:15:00 crc kubenswrapper[4624]: I1008 15:15:00.165647 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="5787c9d1-8039-4d6e-9136-77341d8480af" containerName="registry-server" Oct 08 15:15:00 crc kubenswrapper[4624]: I1008 15:15:00.166301 4624 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332275-ckwsc" Oct 08 15:15:00 crc kubenswrapper[4624]: I1008 15:15:00.170989 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 15:15:00 crc kubenswrapper[4624]: I1008 15:15:00.181351 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332275-ckwsc"] Oct 08 15:15:00 crc kubenswrapper[4624]: I1008 15:15:00.214606 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/413d973e-42e8-4d9f-b5ca-1d093047abfa-secret-volume\") pod \"collect-profiles-29332275-ckwsc\" (UID: \"413d973e-42e8-4d9f-b5ca-1d093047abfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332275-ckwsc" Oct 08 15:15:00 crc kubenswrapper[4624]: I1008 15:15:00.214754 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/413d973e-42e8-4d9f-b5ca-1d093047abfa-config-volume\") pod \"collect-profiles-29332275-ckwsc\" (UID: \"413d973e-42e8-4d9f-b5ca-1d093047abfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332275-ckwsc" Oct 08 15:15:00 crc kubenswrapper[4624]: I1008 15:15:00.214969 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj688\" (UniqueName: \"kubernetes.io/projected/413d973e-42e8-4d9f-b5ca-1d093047abfa-kube-api-access-hj688\") pod \"collect-profiles-29332275-ckwsc\" (UID: \"413d973e-42e8-4d9f-b5ca-1d093047abfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332275-ckwsc" Oct 08 15:15:00 crc kubenswrapper[4624]: I1008 15:15:00.220033 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 15:15:00 crc kubenswrapper[4624]: I1008 15:15:00.317377 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj688\" (UniqueName: \"kubernetes.io/projected/413d973e-42e8-4d9f-b5ca-1d093047abfa-kube-api-access-hj688\") pod \"collect-profiles-29332275-ckwsc\" (UID: \"413d973e-42e8-4d9f-b5ca-1d093047abfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332275-ckwsc" Oct 08 15:15:00 crc kubenswrapper[4624]: I1008 15:15:00.317527 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/413d973e-42e8-4d9f-b5ca-1d093047abfa-secret-volume\") pod \"collect-profiles-29332275-ckwsc\" (UID: \"413d973e-42e8-4d9f-b5ca-1d093047abfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332275-ckwsc" Oct 08 15:15:00 crc kubenswrapper[4624]: I1008 15:15:00.317625 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/413d973e-42e8-4d9f-b5ca-1d093047abfa-config-volume\") pod \"collect-profiles-29332275-ckwsc\" (UID: \"413d973e-42e8-4d9f-b5ca-1d093047abfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332275-ckwsc" Oct 08 15:15:00 crc kubenswrapper[4624]: I1008 15:15:00.319202 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/413d973e-42e8-4d9f-b5ca-1d093047abfa-config-volume\") pod \"collect-profiles-29332275-ckwsc\" (UID: \"413d973e-42e8-4d9f-b5ca-1d093047abfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332275-ckwsc" Oct 08 15:15:00 crc kubenswrapper[4624]: I1008 15:15:00.327888 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/413d973e-42e8-4d9f-b5ca-1d093047abfa-secret-volume\") pod \"collect-profiles-29332275-ckwsc\" (UID: \"413d973e-42e8-4d9f-b5ca-1d093047abfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332275-ckwsc" Oct 08 15:15:00 crc kubenswrapper[4624]: I1008 15:15:00.341063 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj688\" (UniqueName: \"kubernetes.io/projected/413d973e-42e8-4d9f-b5ca-1d093047abfa-kube-api-access-hj688\") pod \"collect-profiles-29332275-ckwsc\" (UID: \"413d973e-42e8-4d9f-b5ca-1d093047abfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332275-ckwsc" Oct 08 15:15:00 crc kubenswrapper[4624]: I1008 15:15:00.543340 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332275-ckwsc" Oct 08 15:15:01 crc kubenswrapper[4624]: I1008 15:15:01.030124 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332275-ckwsc"] Oct 08 15:15:01 crc kubenswrapper[4624]: I1008 15:15:01.810262 4624 generic.go:334] "Generic (PLEG): container finished" podID="413d973e-42e8-4d9f-b5ca-1d093047abfa" containerID="8f80139a941747d8b436ea10cc9892d5165eeae29cfed193df1e8d7686699546" exitCode=0 Oct 08 15:15:01 crc kubenswrapper[4624]: I1008 15:15:01.810352 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332275-ckwsc" event={"ID":"413d973e-42e8-4d9f-b5ca-1d093047abfa","Type":"ContainerDied","Data":"8f80139a941747d8b436ea10cc9892d5165eeae29cfed193df1e8d7686699546"} Oct 08 15:15:01 crc kubenswrapper[4624]: I1008 15:15:01.810676 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332275-ckwsc" event={"ID":"413d973e-42e8-4d9f-b5ca-1d093047abfa","Type":"ContainerStarted","Data":"f9b0ab0701cbaf5123113090ae722d3683814f22e985492b94090cc2d0c1c750"} Oct 08 15:15:03 crc kubenswrapper[4624]: I1008 15:15:03.226670 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332275-ckwsc" Oct 08 15:15:03 crc kubenswrapper[4624]: I1008 15:15:03.282909 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj688\" (UniqueName: \"kubernetes.io/projected/413d973e-42e8-4d9f-b5ca-1d093047abfa-kube-api-access-hj688\") pod \"413d973e-42e8-4d9f-b5ca-1d093047abfa\" (UID: \"413d973e-42e8-4d9f-b5ca-1d093047abfa\") " Oct 08 15:15:03 crc kubenswrapper[4624]: I1008 15:15:03.283071 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/413d973e-42e8-4d9f-b5ca-1d093047abfa-config-volume\") pod \"413d973e-42e8-4d9f-b5ca-1d093047abfa\" (UID: \"413d973e-42e8-4d9f-b5ca-1d093047abfa\") " Oct 08 15:15:03 crc kubenswrapper[4624]: I1008 15:15:03.283454 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/413d973e-42e8-4d9f-b5ca-1d093047abfa-secret-volume\") pod \"413d973e-42e8-4d9f-b5ca-1d093047abfa\" (UID: \"413d973e-42e8-4d9f-b5ca-1d093047abfa\") " Oct 08 15:15:03 crc kubenswrapper[4624]: I1008 15:15:03.284353 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/413d973e-42e8-4d9f-b5ca-1d093047abfa-config-volume" (OuterVolumeSpecName: "config-volume") pod "413d973e-42e8-4d9f-b5ca-1d093047abfa" (UID: "413d973e-42e8-4d9f-b5ca-1d093047abfa"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 15:15:03 crc kubenswrapper[4624]: I1008 15:15:03.298253 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/413d973e-42e8-4d9f-b5ca-1d093047abfa-kube-api-access-hj688" (OuterVolumeSpecName: "kube-api-access-hj688") pod "413d973e-42e8-4d9f-b5ca-1d093047abfa" (UID: "413d973e-42e8-4d9f-b5ca-1d093047abfa"). InnerVolumeSpecName "kube-api-access-hj688". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:15:03 crc kubenswrapper[4624]: I1008 15:15:03.309673 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/413d973e-42e8-4d9f-b5ca-1d093047abfa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "413d973e-42e8-4d9f-b5ca-1d093047abfa" (UID: "413d973e-42e8-4d9f-b5ca-1d093047abfa"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:15:03 crc kubenswrapper[4624]: I1008 15:15:03.386931 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj688\" (UniqueName: \"kubernetes.io/projected/413d973e-42e8-4d9f-b5ca-1d093047abfa-kube-api-access-hj688\") on node \"crc\" DevicePath \"\"" Oct 08 15:15:03 crc kubenswrapper[4624]: I1008 15:15:03.386985 4624 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/413d973e-42e8-4d9f-b5ca-1d093047abfa-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 15:15:03 crc kubenswrapper[4624]: I1008 15:15:03.386998 4624 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/413d973e-42e8-4d9f-b5ca-1d093047abfa-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 15:15:03 crc kubenswrapper[4624]: I1008 15:15:03.849001 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332275-ckwsc" event={"ID":"413d973e-42e8-4d9f-b5ca-1d093047abfa","Type":"ContainerDied","Data":"f9b0ab0701cbaf5123113090ae722d3683814f22e985492b94090cc2d0c1c750"} Oct 08 15:15:03 crc kubenswrapper[4624]: I1008 15:15:03.849046 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9b0ab0701cbaf5123113090ae722d3683814f22e985492b94090cc2d0c1c750" Oct 08 15:15:03 crc kubenswrapper[4624]: I1008 15:15:03.849167 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332275-ckwsc" Oct 08 15:15:04 crc kubenswrapper[4624]: I1008 15:15:04.310546 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332230-thl4d"] Oct 08 15:15:04 crc kubenswrapper[4624]: I1008 15:15:04.318458 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332230-thl4d"] Oct 08 15:15:05 crc kubenswrapper[4624]: I1008 15:15:05.478733 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="281517cb-af20-4881-9124-2a72b4a2a8e6" path="/var/lib/kubelet/pods/281517cb-af20-4881-9124-2a72b4a2a8e6/volumes" Oct 08 15:15:08 crc kubenswrapper[4624]: I1008 15:15:08.466743 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:15:08 crc kubenswrapper[4624]: E1008 15:15:08.467406 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:15:19 crc kubenswrapper[4624]: I1008 15:15:19.465941 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:15:19 crc kubenswrapper[4624]: E1008 15:15:19.466809 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:15:30 crc kubenswrapper[4624]: I1008 15:15:30.466431 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:15:30 crc kubenswrapper[4624]: E1008 15:15:30.467480 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:15:42 crc kubenswrapper[4624]: I1008 15:15:42.281247 4624 scope.go:117] "RemoveContainer" containerID="ae5d9a05e4fd350d22c9c20e003f8dbf27080801e5e12e6321396608c3296ffe" Oct 08 15:15:44 crc kubenswrapper[4624]: I1008 15:15:44.467060 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:15:44 crc kubenswrapper[4624]: E1008 15:15:44.468239 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:15:55 crc kubenswrapper[4624]: I1008 15:15:55.474288 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:15:55 crc kubenswrapper[4624]: E1008 15:15:55.475137 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:16:07 crc kubenswrapper[4624]: I1008 15:16:07.466096 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:16:07 crc kubenswrapper[4624]: E1008 15:16:07.468507 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:16:22 crc kubenswrapper[4624]: I1008 15:16:22.465914 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:16:22 crc kubenswrapper[4624]: E1008 15:16:22.466668 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:16:36 crc kubenswrapper[4624]: I1008 15:16:36.467131 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:16:36 crc kubenswrapper[4624]: E1008 15:16:36.470646 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:16:51 crc kubenswrapper[4624]: I1008 15:16:51.465730 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:16:51 crc kubenswrapper[4624]: E1008 15:16:51.467543 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:17:02 crc kubenswrapper[4624]: I1008 15:17:02.465732 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189" Oct 08 15:17:02 crc kubenswrapper[4624]: I1008 15:17:02.967770 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"b2a3b8e8854c1eef3db1ce6eb26bff455ba6daf2a25aafe06bd729d74f88ee2a"} Oct 08 15:18:42 crc kubenswrapper[4624]: I1008 15:18:42.383712 4624 scope.go:117] "RemoveContainer" containerID="79f2e5d914a723874c7f5bd6830d6b28271e4e9c2939505f246bc723f22593ad" Oct 08 15:18:42 crc kubenswrapper[4624]: I1008 15:18:42.447475 4624 scope.go:117] "RemoveContainer" containerID="4d13a6ce66d05790a42f6ea7b233ac4afe9654ccbb1d2fc96b846ec304cc845a" Oct 08 15:18:42 crc kubenswrapper[4624]: I1008 15:18:42.483705 4624 scope.go:117] "RemoveContainer" containerID="f5953297123ccc542150ae67506d579173e5a35b46ccb4d7e772071560b4def9" Oct 08 15:18:54 crc kubenswrapper[4624]: I1008 15:18:54.695607 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gdp9b"] Oct 08 15:18:54 crc kubenswrapper[4624]: E1008 15:18:54.697237 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="413d973e-42e8-4d9f-b5ca-1d093047abfa" containerName="collect-profiles" Oct 08 15:18:54 crc kubenswrapper[4624]: I1008 15:18:54.697257 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="413d973e-42e8-4d9f-b5ca-1d093047abfa" containerName="collect-profiles" Oct 08 15:18:54 crc kubenswrapper[4624]: I1008 15:18:54.697473 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="413d973e-42e8-4d9f-b5ca-1d093047abfa" containerName="collect-profiles" Oct 08 15:18:54 crc kubenswrapper[4624]: I1008 15:18:54.700438 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gdp9b" Oct 08 15:18:54 crc kubenswrapper[4624]: I1008 15:18:54.791987 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gdp9b"] Oct 08 15:18:54 crc kubenswrapper[4624]: I1008 15:18:54.807761 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d746ff3a-6adf-490a-a8fc-a2cbf477ff25-catalog-content\") pod \"community-operators-gdp9b\" (UID: \"d746ff3a-6adf-490a-a8fc-a2cbf477ff25\") " pod="openshift-marketplace/community-operators-gdp9b" Oct 08 15:18:54 crc kubenswrapper[4624]: I1008 15:18:54.808144 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-629b6\" (UniqueName: \"kubernetes.io/projected/d746ff3a-6adf-490a-a8fc-a2cbf477ff25-kube-api-access-629b6\") pod \"community-operators-gdp9b\" (UID: \"d746ff3a-6adf-490a-a8fc-a2cbf477ff25\") " pod="openshift-marketplace/community-operators-gdp9b" Oct 08 15:18:54 crc kubenswrapper[4624]: I1008 15:18:54.808258 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d746ff3a-6adf-490a-a8fc-a2cbf477ff25-utilities\") pod \"community-operators-gdp9b\" (UID: \"d746ff3a-6adf-490a-a8fc-a2cbf477ff25\") " pod="openshift-marketplace/community-operators-gdp9b" Oct 08 15:18:54 crc kubenswrapper[4624]: I1008 15:18:54.910861 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d746ff3a-6adf-490a-a8fc-a2cbf477ff25-catalog-content\") pod \"community-operators-gdp9b\" (UID: \"d746ff3a-6adf-490a-a8fc-a2cbf477ff25\") " pod="openshift-marketplace/community-operators-gdp9b" Oct 08 15:18:54 crc kubenswrapper[4624]: I1008 15:18:54.911049 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-629b6\" (UniqueName: \"kubernetes.io/projected/d746ff3a-6adf-490a-a8fc-a2cbf477ff25-kube-api-access-629b6\") pod \"community-operators-gdp9b\" (UID: \"d746ff3a-6adf-490a-a8fc-a2cbf477ff25\") " pod="openshift-marketplace/community-operators-gdp9b" Oct 08 15:18:54 crc kubenswrapper[4624]: I1008 15:18:54.911087 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d746ff3a-6adf-490a-a8fc-a2cbf477ff25-utilities\") pod \"community-operators-gdp9b\" (UID: \"d746ff3a-6adf-490a-a8fc-a2cbf477ff25\") " pod="openshift-marketplace/community-operators-gdp9b" Oct 08 15:18:54 crc kubenswrapper[4624]: I1008 15:18:54.913700 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d746ff3a-6adf-490a-a8fc-a2cbf477ff25-utilities\") pod \"community-operators-gdp9b\" (UID: \"d746ff3a-6adf-490a-a8fc-a2cbf477ff25\") " pod="openshift-marketplace/community-operators-gdp9b" Oct 08 15:18:54 crc kubenswrapper[4624]: I1008 15:18:54.914906 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d746ff3a-6adf-490a-a8fc-a2cbf477ff25-catalog-content\") pod \"community-operators-gdp9b\" (UID: \"d746ff3a-6adf-490a-a8fc-a2cbf477ff25\") " pod="openshift-marketplace/community-operators-gdp9b" Oct 08 15:18:54 crc kubenswrapper[4624]: I1008 15:18:54.940164 4624 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-629b6\" (UniqueName: \"kubernetes.io/projected/d746ff3a-6adf-490a-a8fc-a2cbf477ff25-kube-api-access-629b6\") pod \"community-operators-gdp9b\" (UID: \"d746ff3a-6adf-490a-a8fc-a2cbf477ff25\") " pod="openshift-marketplace/community-operators-gdp9b" Oct 08 15:18:55 crc kubenswrapper[4624]: I1008 15:18:55.027832 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gdp9b" Oct 08 15:18:56 crc kubenswrapper[4624]: I1008 15:18:56.598425 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gdp9b"] Oct 08 15:18:56 crc kubenswrapper[4624]: I1008 15:18:56.950735 4624 generic.go:334] "Generic (PLEG): container finished" podID="d746ff3a-6adf-490a-a8fc-a2cbf477ff25" containerID="78d0220e84abde6ecc3a8ef49c4f43587ea2d15330a8ff8f99dc95fc5009fbfc" exitCode=0 Oct 08 15:18:56 crc kubenswrapper[4624]: I1008 15:18:56.950992 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdp9b" event={"ID":"d746ff3a-6adf-490a-a8fc-a2cbf477ff25","Type":"ContainerDied","Data":"78d0220e84abde6ecc3a8ef49c4f43587ea2d15330a8ff8f99dc95fc5009fbfc"} Oct 08 15:18:56 crc kubenswrapper[4624]: I1008 15:18:56.951127 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdp9b" event={"ID":"d746ff3a-6adf-490a-a8fc-a2cbf477ff25","Type":"ContainerStarted","Data":"19fe26be6c4d9af504c3db6e2a8ef0564dc55e901652c4c9004047c63e8a8d4b"} Oct 08 15:18:56 crc kubenswrapper[4624]: I1008 15:18:56.954093 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 15:19:08 crc kubenswrapper[4624]: I1008 15:19:08.060648 4624 generic.go:334] "Generic (PLEG): container finished" podID="d746ff3a-6adf-490a-a8fc-a2cbf477ff25" containerID="46394782b4084245845d379f7661af4c5cd63aade7dd60615a9383045f2282bd" exitCode=0 Oct 08 15:19:08 crc kubenswrapper[4624]: I1008 15:19:08.060729 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdp9b" event={"ID":"d746ff3a-6adf-490a-a8fc-a2cbf477ff25","Type":"ContainerDied","Data":"46394782b4084245845d379f7661af4c5cd63aade7dd60615a9383045f2282bd"} Oct 08 15:19:10 crc kubenswrapper[4624]: I1008 15:19:10.081161 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdp9b" event={"ID":"d746ff3a-6adf-490a-a8fc-a2cbf477ff25","Type":"ContainerStarted","Data":"37d9823b95f6a946fbabbd26443be94eb9d956628b67ead8eaca7ab8936715b0"} Oct 08 15:19:10 crc kubenswrapper[4624]: I1008 15:19:10.104911 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gdp9b" podStartSLOduration=4.006206788 podStartE2EDuration="16.104889933s" podCreationTimestamp="2025-10-08 15:18:54 +0000 UTC" firstStartedPulling="2025-10-08 15:18:56.952920486 +0000 UTC m=+3362.103855563" lastFinishedPulling="2025-10-08 15:19:09.051603631 +0000 UTC m=+3374.202538708" observedRunningTime="2025-10-08 15:19:10.101496412 +0000 UTC m=+3375.252431499" watchObservedRunningTime="2025-10-08 15:19:10.104889933 +0000 UTC m=+3375.255825010" Oct 08 15:19:15 crc kubenswrapper[4624]: I1008 15:19:15.028685 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gdp9b" Oct 08 15:19:15 crc kubenswrapper[4624]: I1008 15:19:15.029252 4624 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gdp9b" Oct 08 15:19:15 crc kubenswrapper[4624]: I1008 15:19:15.080308 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gdp9b" Oct 08 15:19:15 crc kubenswrapper[4624]: I1008 15:19:15.169468 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gdp9b" Oct 08 15:19:15 crc kubenswrapper[4624]: I1008 15:19:15.299696 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gdp9b"] Oct 08 15:19:15 crc kubenswrapper[4624]: I1008 15:19:15.368232 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s82n5"] Oct 08 15:19:15 crc kubenswrapper[4624]: I1008 15:19:15.376346 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s82n5" podUID="6784a613-240f-4d11-9b3c-bf9f99a646e6" containerName="registry-server" containerID="cri-o://983d57ab971666d2743830b02cf624efe3c0c1663e23be03bcb8e3168b02a868" gracePeriod=2 Oct 08 15:19:15 crc kubenswrapper[4624]: E1008 15:19:15.790919 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 983d57ab971666d2743830b02cf624efe3c0c1663e23be03bcb8e3168b02a868 is running failed: container process not found" containerID="983d57ab971666d2743830b02cf624efe3c0c1663e23be03bcb8e3168b02a868" cmd=["grpc_health_probe","-addr=:50051"] Oct 08 15:19:15 crc kubenswrapper[4624]: E1008 15:19:15.791268 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 983d57ab971666d2743830b02cf624efe3c0c1663e23be03bcb8e3168b02a868 is running failed: container process not found" containerID="983d57ab971666d2743830b02cf624efe3c0c1663e23be03bcb8e3168b02a868" cmd=["grpc_health_probe","-addr=:50051"] Oct 08 15:19:15 crc kubenswrapper[4624]: E1008 15:19:15.791815 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 983d57ab971666d2743830b02cf624efe3c0c1663e23be03bcb8e3168b02a868 is running failed: container process not found" containerID="983d57ab971666d2743830b02cf624efe3c0c1663e23be03bcb8e3168b02a868" cmd=["grpc_health_probe","-addr=:50051"] Oct 08 15:19:15 crc kubenswrapper[4624]: E1008 15:19:15.791851 4624 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 983d57ab971666d2743830b02cf624efe3c0c1663e23be03bcb8e3168b02a868 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-s82n5" podUID="6784a613-240f-4d11-9b3c-bf9f99a646e6" containerName="registry-server" Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.133846 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s82n5" Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.151473 4624 generic.go:334] "Generic (PLEG): container finished" podID="6784a613-240f-4d11-9b3c-bf9f99a646e6" containerID="983d57ab971666d2743830b02cf624efe3c0c1663e23be03bcb8e3168b02a868" exitCode=0 Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.152787 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s82n5" Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.152958 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s82n5" event={"ID":"6784a613-240f-4d11-9b3c-bf9f99a646e6","Type":"ContainerDied","Data":"983d57ab971666d2743830b02cf624efe3c0c1663e23be03bcb8e3168b02a868"} Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.153059 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s82n5" event={"ID":"6784a613-240f-4d11-9b3c-bf9f99a646e6","Type":"ContainerDied","Data":"7daa6273bd66a7655d09139bda245858b9a62b47db01ad78903c273f88104ba5"} Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.153150 4624 scope.go:117] "RemoveContainer" containerID="983d57ab971666d2743830b02cf624efe3c0c1663e23be03bcb8e3168b02a868" Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.159205 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpn8l\" (UniqueName: \"kubernetes.io/projected/6784a613-240f-4d11-9b3c-bf9f99a646e6-kube-api-access-hpn8l\") pod \"6784a613-240f-4d11-9b3c-bf9f99a646e6\" (UID: \"6784a613-240f-4d11-9b3c-bf9f99a646e6\") " Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.159523 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6784a613-240f-4d11-9b3c-bf9f99a646e6-utilities\") pod \"6784a613-240f-4d11-9b3c-bf9f99a646e6\" (UID: \"6784a613-240f-4d11-9b3c-bf9f99a646e6\") " Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.159736 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6784a613-240f-4d11-9b3c-bf9f99a646e6-catalog-content\") pod \"6784a613-240f-4d11-9b3c-bf9f99a646e6\" (UID: \"6784a613-240f-4d11-9b3c-bf9f99a646e6\") " Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.162559 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6784a613-240f-4d11-9b3c-bf9f99a646e6-utilities" (OuterVolumeSpecName: "utilities") pod "6784a613-240f-4d11-9b3c-bf9f99a646e6" (UID: "6784a613-240f-4d11-9b3c-bf9f99a646e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.216799 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6784a613-240f-4d11-9b3c-bf9f99a646e6-kube-api-access-hpn8l" (OuterVolumeSpecName: "kube-api-access-hpn8l") pod "6784a613-240f-4d11-9b3c-bf9f99a646e6" (UID: "6784a613-240f-4d11-9b3c-bf9f99a646e6"). InnerVolumeSpecName "kube-api-access-hpn8l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.266591 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6784a613-240f-4d11-9b3c-bf9f99a646e6-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.266815 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpn8l\" (UniqueName: \"kubernetes.io/projected/6784a613-240f-4d11-9b3c-bf9f99a646e6-kube-api-access-hpn8l\") on node \"crc\" DevicePath \"\"" Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.279810 4624 scope.go:117] "RemoveContainer" containerID="72f6c43a487ba415858cbfb52ed91f9d4d168dd4c56a4419d00e348cab16469b" Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.284276 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6784a613-240f-4d11-9b3c-bf9f99a646e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6784a613-240f-4d11-9b3c-bf9f99a646e6" (UID: "6784a613-240f-4d11-9b3c-bf9f99a646e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.330722 4624 scope.go:117] "RemoveContainer" containerID="d8c7297ac3c3a4066f3b42cf445079321f3fcc158b69bf1c14c214653d9f2fb5" Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.357718 4624 scope.go:117] "RemoveContainer" containerID="983d57ab971666d2743830b02cf624efe3c0c1663e23be03bcb8e3168b02a868" Oct 08 15:19:16 crc kubenswrapper[4624]: E1008 15:19:16.358864 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"983d57ab971666d2743830b02cf624efe3c0c1663e23be03bcb8e3168b02a868\": container with ID starting with 983d57ab971666d2743830b02cf624efe3c0c1663e23be03bcb8e3168b02a868 not found: ID does not exist" containerID="983d57ab971666d2743830b02cf624efe3c0c1663e23be03bcb8e3168b02a868" Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.359734 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"983d57ab971666d2743830b02cf624efe3c0c1663e23be03bcb8e3168b02a868"} err="failed to get container status \"983d57ab971666d2743830b02cf624efe3c0c1663e23be03bcb8e3168b02a868\": rpc error: code = NotFound desc = could not find container \"983d57ab971666d2743830b02cf624efe3c0c1663e23be03bcb8e3168b02a868\": container with ID starting with 983d57ab971666d2743830b02cf624efe3c0c1663e23be03bcb8e3168b02a868 not found: ID does not exist" Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.359790 4624 scope.go:117] "RemoveContainer" containerID="72f6c43a487ba415858cbfb52ed91f9d4d168dd4c56a4419d00e348cab16469b" Oct 08 15:19:16 crc kubenswrapper[4624]: E1008 15:19:16.360599 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72f6c43a487ba415858cbfb52ed91f9d4d168dd4c56a4419d00e348cab16469b\": container with ID starting with 72f6c43a487ba415858cbfb52ed91f9d4d168dd4c56a4419d00e348cab16469b not found: ID does not exist" containerID="72f6c43a487ba415858cbfb52ed91f9d4d168dd4c56a4419d00e348cab16469b" Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.360662 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72f6c43a487ba415858cbfb52ed91f9d4d168dd4c56a4419d00e348cab16469b"} err="failed to get container status 
\"72f6c43a487ba415858cbfb52ed91f9d4d168dd4c56a4419d00e348cab16469b\": rpc error: code = NotFound desc = could not find container \"72f6c43a487ba415858cbfb52ed91f9d4d168dd4c56a4419d00e348cab16469b\": container with ID starting with 72f6c43a487ba415858cbfb52ed91f9d4d168dd4c56a4419d00e348cab16469b not found: ID does not exist" Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.360697 4624 scope.go:117] "RemoveContainer" containerID="d8c7297ac3c3a4066f3b42cf445079321f3fcc158b69bf1c14c214653d9f2fb5" Oct 08 15:19:16 crc kubenswrapper[4624]: E1008 15:19:16.361067 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8c7297ac3c3a4066f3b42cf445079321f3fcc158b69bf1c14c214653d9f2fb5\": container with ID starting with d8c7297ac3c3a4066f3b42cf445079321f3fcc158b69bf1c14c214653d9f2fb5 not found: ID does not exist" containerID="d8c7297ac3c3a4066f3b42cf445079321f3fcc158b69bf1c14c214653d9f2fb5" Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.361104 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8c7297ac3c3a4066f3b42cf445079321f3fcc158b69bf1c14c214653d9f2fb5"} err="failed to get container status \"d8c7297ac3c3a4066f3b42cf445079321f3fcc158b69bf1c14c214653d9f2fb5\": rpc error: code = NotFound desc = could not find container \"d8c7297ac3c3a4066f3b42cf445079321f3fcc158b69bf1c14c214653d9f2fb5\": container with ID starting with d8c7297ac3c3a4066f3b42cf445079321f3fcc158b69bf1c14c214653d9f2fb5 not found: ID does not exist" Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.369015 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6784a613-240f-4d11-9b3c-bf9f99a646e6-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.490574 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s82n5"] Oct 08 15:19:16 crc kubenswrapper[4624]: I1008 15:19:16.508034 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s82n5"] Oct 08 15:19:17 crc kubenswrapper[4624]: I1008 15:19:17.478929 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6784a613-240f-4d11-9b3c-bf9f99a646e6" path="/var/lib/kubelet/pods/6784a613-240f-4d11-9b3c-bf9f99a646e6/volumes" Oct 08 15:19:30 crc kubenswrapper[4624]: I1008 15:19:30.076130 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:19:30 crc kubenswrapper[4624]: I1008 15:19:30.081966 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:19:51 crc kubenswrapper[4624]: I1008 15:19:51.650115 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cmh7x"] Oct 08 15:19:51 crc kubenswrapper[4624]: E1008 15:19:51.651060 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6784a613-240f-4d11-9b3c-bf9f99a646e6" containerName="extract-content" Oct 08 
Oct 08 15:19:51 crc kubenswrapper[4624]: I1008 15:19:51.651074 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6784a613-240f-4d11-9b3c-bf9f99a646e6" containerName="extract-content"
Oct 08 15:19:51 crc kubenswrapper[4624]: E1008 15:19:51.651093 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6784a613-240f-4d11-9b3c-bf9f99a646e6" containerName="extract-utilities"
Oct 08 15:19:51 crc kubenswrapper[4624]: I1008 15:19:51.651100 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6784a613-240f-4d11-9b3c-bf9f99a646e6" containerName="extract-utilities"
Oct 08 15:19:51 crc kubenswrapper[4624]: E1008 15:19:51.651123 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6784a613-240f-4d11-9b3c-bf9f99a646e6" containerName="registry-server"
Oct 08 15:19:51 crc kubenswrapper[4624]: I1008 15:19:51.651134 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6784a613-240f-4d11-9b3c-bf9f99a646e6" containerName="registry-server"
Oct 08 15:19:51 crc kubenswrapper[4624]: I1008 15:19:51.651374 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="6784a613-240f-4d11-9b3c-bf9f99a646e6" containerName="registry-server"
Oct 08 15:19:51 crc kubenswrapper[4624]: I1008 15:19:51.682484 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cmh7x"
Oct 08 15:19:51 crc kubenswrapper[4624]: I1008 15:19:51.725396 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cmh7x"]
Oct 08 15:19:51 crc kubenswrapper[4624]: I1008 15:19:51.859463 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f077e02b-b409-4563-886d-7418dadabbae-catalog-content\") pod \"redhat-operators-cmh7x\" (UID: \"f077e02b-b409-4563-886d-7418dadabbae\") " pod="openshift-marketplace/redhat-operators-cmh7x"
Oct 08 15:19:51 crc kubenswrapper[4624]: I1008 15:19:51.859813 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwwsh\" (UniqueName: \"kubernetes.io/projected/f077e02b-b409-4563-886d-7418dadabbae-kube-api-access-hwwsh\") pod \"redhat-operators-cmh7x\" (UID: \"f077e02b-b409-4563-886d-7418dadabbae\") " pod="openshift-marketplace/redhat-operators-cmh7x"
Oct 08 15:19:51 crc kubenswrapper[4624]: I1008 15:19:51.859905 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f077e02b-b409-4563-886d-7418dadabbae-utilities\") pod \"redhat-operators-cmh7x\" (UID: \"f077e02b-b409-4563-886d-7418dadabbae\") " pod="openshift-marketplace/redhat-operators-cmh7x"
Oct 08 15:19:51 crc kubenswrapper[4624]: I1008 15:19:51.961177 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f077e02b-b409-4563-886d-7418dadabbae-catalog-content\") pod \"redhat-operators-cmh7x\" (UID: \"f077e02b-b409-4563-886d-7418dadabbae\") " pod="openshift-marketplace/redhat-operators-cmh7x"
Oct 08 15:19:51 crc kubenswrapper[4624]: I1008 15:19:51.961288 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwwsh\" (UniqueName: \"kubernetes.io/projected/f077e02b-b409-4563-886d-7418dadabbae-kube-api-access-hwwsh\") pod \"redhat-operators-cmh7x\" (UID: \"f077e02b-b409-4563-886d-7418dadabbae\") " pod="openshift-marketplace/redhat-operators-cmh7x"
Oct 08 15:19:51 crc kubenswrapper[4624]: I1008 15:19:51.961318 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f077e02b-b409-4563-886d-7418dadabbae-utilities\") pod \"redhat-operators-cmh7x\" (UID: \"f077e02b-b409-4563-886d-7418dadabbae\") " pod="openshift-marketplace/redhat-operators-cmh7x"
Oct 08 15:19:51 crc kubenswrapper[4624]: I1008 15:19:51.961712 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f077e02b-b409-4563-886d-7418dadabbae-catalog-content\") pod \"redhat-operators-cmh7x\" (UID: \"f077e02b-b409-4563-886d-7418dadabbae\") " pod="openshift-marketplace/redhat-operators-cmh7x"
Oct 08 15:19:51 crc kubenswrapper[4624]: I1008 15:19:51.961726 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f077e02b-b409-4563-886d-7418dadabbae-utilities\") pod \"redhat-operators-cmh7x\" (UID: \"f077e02b-b409-4563-886d-7418dadabbae\") " pod="openshift-marketplace/redhat-operators-cmh7x"
Oct 08 15:19:51 crc kubenswrapper[4624]: I1008 15:19:51.980332 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwwsh\" (UniqueName: \"kubernetes.io/projected/f077e02b-b409-4563-886d-7418dadabbae-kube-api-access-hwwsh\") pod \"redhat-operators-cmh7x\" (UID: \"f077e02b-b409-4563-886d-7418dadabbae\") " pod="openshift-marketplace/redhat-operators-cmh7x"
Oct 08 15:19:52 crc kubenswrapper[4624]: I1008 15:19:52.002311 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cmh7x"
Oct 08 15:19:52 crc kubenswrapper[4624]: I1008 15:19:52.592246 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cmh7x"]
Oct 08 15:19:53 crc kubenswrapper[4624]: I1008 15:19:53.485766 4624 generic.go:334] "Generic (PLEG): container finished" podID="f077e02b-b409-4563-886d-7418dadabbae" containerID="10527b8d7a38387c84cb9b0f2b8d373b6a2ac3ec8533d22e2dd7277c58fb3fa8" exitCode=0
Oct 08 15:19:53 crc kubenswrapper[4624]: I1008 15:19:53.485872 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmh7x" event={"ID":"f077e02b-b409-4563-886d-7418dadabbae","Type":"ContainerDied","Data":"10527b8d7a38387c84cb9b0f2b8d373b6a2ac3ec8533d22e2dd7277c58fb3fa8"}
Oct 08 15:19:53 crc kubenswrapper[4624]: I1008 15:19:53.486532 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmh7x" event={"ID":"f077e02b-b409-4563-886d-7418dadabbae","Type":"ContainerStarted","Data":"2c5759a968cb3021df1293c39207e78aa0a2903201ae576a2ea76bca14f3519b"}
Oct 08 15:19:55 crc kubenswrapper[4624]: I1008 15:19:55.515693 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmh7x" event={"ID":"f077e02b-b409-4563-886d-7418dadabbae","Type":"ContainerStarted","Data":"9cac07fb474537860a787f80077b154c00172dac1f1a3b9087f3ed69bbb155cf"}
Oct 08 15:20:00 crc kubenswrapper[4624]: I1008 15:20:00.076927 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 15:20:00 crc kubenswrapper[4624]: I1008 15:20:00.077497 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 15:20:00 crc kubenswrapper[4624]: I1008 15:20:00.557342 4624 generic.go:334] "Generic (PLEG): container finished" podID="f077e02b-b409-4563-886d-7418dadabbae" containerID="9cac07fb474537860a787f80077b154c00172dac1f1a3b9087f3ed69bbb155cf" exitCode=0
Oct 08 15:20:00 crc kubenswrapper[4624]: I1008 15:20:00.557406 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmh7x" event={"ID":"f077e02b-b409-4563-886d-7418dadabbae","Type":"ContainerDied","Data":"9cac07fb474537860a787f80077b154c00172dac1f1a3b9087f3ed69bbb155cf"}
Oct 08 15:20:01 crc kubenswrapper[4624]: I1008 15:20:01.583090 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmh7x" event={"ID":"f077e02b-b409-4563-886d-7418dadabbae","Type":"ContainerStarted","Data":"94888792ef0b1d1b3d52a607c37e9ff0777ef6dc51dbfaf702ecf7abbb700f34"}
Oct 08 15:20:02 crc kubenswrapper[4624]: I1008 15:20:02.003223 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cmh7x"
Oct 08 15:20:02 crc kubenswrapper[4624]: I1008 15:20:02.003618 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cmh7x"
Oct 08 15:20:03 crc kubenswrapper[4624]: I1008 15:20:03.061065 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cmh7x" podUID="f077e02b-b409-4563-886d-7418dadabbae" containerName="registry-server" probeResult="failure" output=<
Oct 08 15:20:03 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 15:20:03 crc kubenswrapper[4624]: >
Oct 08 15:20:13 crc kubenswrapper[4624]: I1008 15:20:13.057537 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cmh7x" podUID="f077e02b-b409-4563-886d-7418dadabbae" containerName="registry-server" probeResult="failure" output=<
Oct 08 15:20:13 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 15:20:13 crc kubenswrapper[4624]: >
Oct 08 15:20:23 crc kubenswrapper[4624]: I1008 15:20:23.049755 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cmh7x" podUID="f077e02b-b409-4563-886d-7418dadabbae" containerName="registry-server" probeResult="failure" output=<
Oct 08 15:20:23 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 15:20:23 crc kubenswrapper[4624]: >
Oct 08 15:20:30 crc kubenswrapper[4624]: I1008 15:20:30.076366 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 15:20:30 crc kubenswrapper[4624]: I1008 15:20:30.076997 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 15:20:30 crc kubenswrapper[4624]: I1008 15:20:30.077050 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8"
Oct 08 15:20:30 crc kubenswrapper[4624]: I1008 15:20:30.077944 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b2a3b8e8854c1eef3db1ce6eb26bff455ba6daf2a25aafe06bd729d74f88ee2a"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 08 15:20:30 crc kubenswrapper[4624]: I1008 15:20:30.078116 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://b2a3b8e8854c1eef3db1ce6eb26bff455ba6daf2a25aafe06bd729d74f88ee2a" gracePeriod=600
Oct 08 15:20:30 crc kubenswrapper[4624]: I1008 15:20:30.867660 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="b2a3b8e8854c1eef3db1ce6eb26bff455ba6daf2a25aafe06bd729d74f88ee2a" exitCode=0
Oct 08 15:20:30 crc kubenswrapper[4624]: I1008 15:20:30.867755 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"b2a3b8e8854c1eef3db1ce6eb26bff455ba6daf2a25aafe06bd729d74f88ee2a"}
Oct 08 15:20:30 crc kubenswrapper[4624]: I1008 15:20:30.867966 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df"}
Oct 08 15:20:30 crc kubenswrapper[4624]: I1008 15:20:30.867993 4624 scope.go:117] "RemoveContainer" containerID="c8426a5d4418008188635c191813d8b22289a4ef92b1d9184c4bccfec0f1c189"
Oct 08 15:20:30 crc kubenswrapper[4624]: I1008 15:20:30.900066 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cmh7x" podStartSLOduration=32.11304378 podStartE2EDuration="39.900042849s" podCreationTimestamp="2025-10-08 15:19:51 +0000 UTC" firstStartedPulling="2025-10-08 15:19:53.489605673 +0000 UTC m=+3418.640540750" lastFinishedPulling="2025-10-08 15:20:01.276604742 +0000 UTC m=+3426.427539819" observedRunningTime="2025-10-08 15:20:01.615937399 +0000 UTC m=+3426.766872476" watchObservedRunningTime="2025-10-08 15:20:30.900042849 +0000 UTC m=+3456.050977926"
Oct 08 15:20:33 crc kubenswrapper[4624]: I1008 15:20:33.070438 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cmh7x" podUID="f077e02b-b409-4563-886d-7418dadabbae" containerName="registry-server" probeResult="failure" output=<
Oct 08 15:20:33 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 15:20:33 crc kubenswrapper[4624]: >
Oct 08 15:20:42 crc kubenswrapper[4624]: I1008 15:20:42.048539 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cmh7x"
Oct 08 15:20:42 crc kubenswrapper[4624]: I1008 15:20:42.102457 4624 kubelet.go:2542]
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cmh7x" Oct 08 15:20:42 crc kubenswrapper[4624]: I1008 15:20:42.291334 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cmh7x"] Oct 08 15:20:44 crc kubenswrapper[4624]: I1008 15:20:44.005832 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cmh7x" podUID="f077e02b-b409-4563-886d-7418dadabbae" containerName="registry-server" containerID="cri-o://94888792ef0b1d1b3d52a607c37e9ff0777ef6dc51dbfaf702ecf7abbb700f34" gracePeriod=2 Oct 08 15:20:44 crc kubenswrapper[4624]: E1008 15:20:44.337332 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf077e02b_b409_4563_886d_7418dadabbae.slice/crio-conmon-94888792ef0b1d1b3d52a607c37e9ff0777ef6dc51dbfaf702ecf7abbb700f34.scope\": RecentStats: unable to find data in memory cache]" Oct 08 15:20:44 crc kubenswrapper[4624]: I1008 15:20:44.615106 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cmh7x" Oct 08 15:20:44 crc kubenswrapper[4624]: I1008 15:20:44.629479 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f077e02b-b409-4563-886d-7418dadabbae-catalog-content\") pod \"f077e02b-b409-4563-886d-7418dadabbae\" (UID: \"f077e02b-b409-4563-886d-7418dadabbae\") " Oct 08 15:20:44 crc kubenswrapper[4624]: I1008 15:20:44.629533 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwwsh\" (UniqueName: \"kubernetes.io/projected/f077e02b-b409-4563-886d-7418dadabbae-kube-api-access-hwwsh\") pod \"f077e02b-b409-4563-886d-7418dadabbae\" (UID: \"f077e02b-b409-4563-886d-7418dadabbae\") " Oct 08 15:20:44 crc kubenswrapper[4624]: I1008 15:20:44.629580 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f077e02b-b409-4563-886d-7418dadabbae-utilities\") pod \"f077e02b-b409-4563-886d-7418dadabbae\" (UID: \"f077e02b-b409-4563-886d-7418dadabbae\") " Oct 08 15:20:44 crc kubenswrapper[4624]: I1008 15:20:44.630475 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f077e02b-b409-4563-886d-7418dadabbae-utilities" (OuterVolumeSpecName: "utilities") pod "f077e02b-b409-4563-886d-7418dadabbae" (UID: "f077e02b-b409-4563-886d-7418dadabbae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:20:44 crc kubenswrapper[4624]: I1008 15:20:44.648356 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f077e02b-b409-4563-886d-7418dadabbae-kube-api-access-hwwsh" (OuterVolumeSpecName: "kube-api-access-hwwsh") pod "f077e02b-b409-4563-886d-7418dadabbae" (UID: "f077e02b-b409-4563-886d-7418dadabbae"). InnerVolumeSpecName "kube-api-access-hwwsh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:20:44 crc kubenswrapper[4624]: I1008 15:20:44.731495 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f077e02b-b409-4563-886d-7418dadabbae-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 15:20:44 crc kubenswrapper[4624]: I1008 15:20:44.731748 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwwsh\" (UniqueName: \"kubernetes.io/projected/f077e02b-b409-4563-886d-7418dadabbae-kube-api-access-hwwsh\") on node \"crc\" DevicePath \"\"" Oct 08 15:20:44 crc kubenswrapper[4624]: I1008 15:20:44.731791 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f077e02b-b409-4563-886d-7418dadabbae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f077e02b-b409-4563-886d-7418dadabbae" (UID: "f077e02b-b409-4563-886d-7418dadabbae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:20:44 crc kubenswrapper[4624]: I1008 15:20:44.833019 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f077e02b-b409-4563-886d-7418dadabbae-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 15:20:45 crc kubenswrapper[4624]: I1008 15:20:45.022717 4624 generic.go:334] "Generic (PLEG): container finished" podID="f077e02b-b409-4563-886d-7418dadabbae" containerID="94888792ef0b1d1b3d52a607c37e9ff0777ef6dc51dbfaf702ecf7abbb700f34" exitCode=0 Oct 08 15:20:45 crc kubenswrapper[4624]: I1008 15:20:45.022785 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmh7x" event={"ID":"f077e02b-b409-4563-886d-7418dadabbae","Type":"ContainerDied","Data":"94888792ef0b1d1b3d52a607c37e9ff0777ef6dc51dbfaf702ecf7abbb700f34"} Oct 08 15:20:45 crc kubenswrapper[4624]: I1008 15:20:45.022824 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmh7x" event={"ID":"f077e02b-b409-4563-886d-7418dadabbae","Type":"ContainerDied","Data":"2c5759a968cb3021df1293c39207e78aa0a2903201ae576a2ea76bca14f3519b"} Oct 08 15:20:45 crc kubenswrapper[4624]: I1008 15:20:45.022846 4624 scope.go:117] "RemoveContainer" containerID="94888792ef0b1d1b3d52a607c37e9ff0777ef6dc51dbfaf702ecf7abbb700f34" Oct 08 15:20:45 crc kubenswrapper[4624]: I1008 15:20:45.022866 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cmh7x" Oct 08 15:20:45 crc kubenswrapper[4624]: I1008 15:20:45.072833 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cmh7x"] Oct 08 15:20:45 crc kubenswrapper[4624]: I1008 15:20:45.081822 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cmh7x"] Oct 08 15:20:45 crc kubenswrapper[4624]: I1008 15:20:45.086047 4624 scope.go:117] "RemoveContainer" containerID="9cac07fb474537860a787f80077b154c00172dac1f1a3b9087f3ed69bbb155cf" Oct 08 15:20:45 crc kubenswrapper[4624]: I1008 15:20:45.109529 4624 scope.go:117] "RemoveContainer" containerID="10527b8d7a38387c84cb9b0f2b8d373b6a2ac3ec8533d22e2dd7277c58fb3fa8" Oct 08 15:20:45 crc kubenswrapper[4624]: I1008 15:20:45.160479 4624 scope.go:117] "RemoveContainer" containerID="94888792ef0b1d1b3d52a607c37e9ff0777ef6dc51dbfaf702ecf7abbb700f34" Oct 08 15:20:45 crc kubenswrapper[4624]: E1008 15:20:45.161381 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94888792ef0b1d1b3d52a607c37e9ff0777ef6dc51dbfaf702ecf7abbb700f34\": container with ID starting with 94888792ef0b1d1b3d52a607c37e9ff0777ef6dc51dbfaf702ecf7abbb700f34 not found: ID does not exist" containerID="94888792ef0b1d1b3d52a607c37e9ff0777ef6dc51dbfaf702ecf7abbb700f34" Oct 08 15:20:45 crc kubenswrapper[4624]: I1008 15:20:45.161446 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94888792ef0b1d1b3d52a607c37e9ff0777ef6dc51dbfaf702ecf7abbb700f34"} err="failed to get container status \"94888792ef0b1d1b3d52a607c37e9ff0777ef6dc51dbfaf702ecf7abbb700f34\": rpc error: code = NotFound desc = could not find container \"94888792ef0b1d1b3d52a607c37e9ff0777ef6dc51dbfaf702ecf7abbb700f34\": container with ID starting with 94888792ef0b1d1b3d52a607c37e9ff0777ef6dc51dbfaf702ecf7abbb700f34 not found: ID does not exist" Oct 08 15:20:45 crc kubenswrapper[4624]: I1008 15:20:45.161501 4624 scope.go:117] "RemoveContainer" containerID="9cac07fb474537860a787f80077b154c00172dac1f1a3b9087f3ed69bbb155cf" Oct 08 15:20:45 crc kubenswrapper[4624]: E1008 15:20:45.163105 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cac07fb474537860a787f80077b154c00172dac1f1a3b9087f3ed69bbb155cf\": container with ID starting with 9cac07fb474537860a787f80077b154c00172dac1f1a3b9087f3ed69bbb155cf not found: ID does not exist" containerID="9cac07fb474537860a787f80077b154c00172dac1f1a3b9087f3ed69bbb155cf" Oct 08 15:20:45 crc kubenswrapper[4624]: I1008 15:20:45.163208 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cac07fb474537860a787f80077b154c00172dac1f1a3b9087f3ed69bbb155cf"} err="failed to get container status \"9cac07fb474537860a787f80077b154c00172dac1f1a3b9087f3ed69bbb155cf\": rpc error: code = NotFound desc = could not find container \"9cac07fb474537860a787f80077b154c00172dac1f1a3b9087f3ed69bbb155cf\": container with ID starting with 9cac07fb474537860a787f80077b154c00172dac1f1a3b9087f3ed69bbb155cf not found: ID does not exist" Oct 08 15:20:45 crc kubenswrapper[4624]: I1008 15:20:45.163296 4624 scope.go:117] "RemoveContainer" containerID="10527b8d7a38387c84cb9b0f2b8d373b6a2ac3ec8533d22e2dd7277c58fb3fa8" Oct 08 15:20:45 crc kubenswrapper[4624]: E1008 15:20:45.163753 4624 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"10527b8d7a38387c84cb9b0f2b8d373b6a2ac3ec8533d22e2dd7277c58fb3fa8\": container with ID starting with 10527b8d7a38387c84cb9b0f2b8d373b6a2ac3ec8533d22e2dd7277c58fb3fa8 not found: ID does not exist" containerID="10527b8d7a38387c84cb9b0f2b8d373b6a2ac3ec8533d22e2dd7277c58fb3fa8" Oct 08 15:20:45 crc kubenswrapper[4624]: I1008 15:20:45.163848 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10527b8d7a38387c84cb9b0f2b8d373b6a2ac3ec8533d22e2dd7277c58fb3fa8"} err="failed to get container status \"10527b8d7a38387c84cb9b0f2b8d373b6a2ac3ec8533d22e2dd7277c58fb3fa8\": rpc error: code = NotFound desc = could not find container \"10527b8d7a38387c84cb9b0f2b8d373b6a2ac3ec8533d22e2dd7277c58fb3fa8\": container with ID starting with 10527b8d7a38387c84cb9b0f2b8d373b6a2ac3ec8533d22e2dd7277c58fb3fa8 not found: ID does not exist" Oct 08 15:20:45 crc kubenswrapper[4624]: I1008 15:20:45.479115 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f077e02b-b409-4563-886d-7418dadabbae" path="/var/lib/kubelet/pods/f077e02b-b409-4563-886d-7418dadabbae/volumes" Oct 08 15:22:30 crc kubenswrapper[4624]: I1008 15:22:30.075933 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:22:30 crc kubenswrapper[4624]: I1008 15:22:30.076531 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:22:54 crc kubenswrapper[4624]: I1008 15:22:54.789982 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-blsq6"] Oct 08 15:22:54 crc kubenswrapper[4624]: E1008 15:22:54.791566 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f077e02b-b409-4563-886d-7418dadabbae" containerName="extract-content" Oct 08 15:22:54 crc kubenswrapper[4624]: I1008 15:22:54.791588 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f077e02b-b409-4563-886d-7418dadabbae" containerName="extract-content" Oct 08 15:22:54 crc kubenswrapper[4624]: E1008 15:22:54.791599 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f077e02b-b409-4563-886d-7418dadabbae" containerName="extract-utilities" Oct 08 15:22:54 crc kubenswrapper[4624]: I1008 15:22:54.791606 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f077e02b-b409-4563-886d-7418dadabbae" containerName="extract-utilities" Oct 08 15:22:54 crc kubenswrapper[4624]: E1008 15:22:54.791709 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f077e02b-b409-4563-886d-7418dadabbae" containerName="registry-server" Oct 08 15:22:54 crc kubenswrapper[4624]: I1008 15:22:54.791718 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f077e02b-b409-4563-886d-7418dadabbae" containerName="registry-server" Oct 08 15:22:54 crc kubenswrapper[4624]: I1008 15:22:54.791915 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f077e02b-b409-4563-886d-7418dadabbae" containerName="registry-server" Oct 08 15:22:54 crc kubenswrapper[4624]: I1008 
15:22:54.793848 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-blsq6" Oct 08 15:22:54 crc kubenswrapper[4624]: I1008 15:22:54.806285 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-blsq6"] Oct 08 15:22:54 crc kubenswrapper[4624]: I1008 15:22:54.895127 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbbff273-31b6-4f25-bf0e-e90773982c9b-utilities\") pod \"certified-operators-blsq6\" (UID: \"bbbff273-31b6-4f25-bf0e-e90773982c9b\") " pod="openshift-marketplace/certified-operators-blsq6" Oct 08 15:22:54 crc kubenswrapper[4624]: I1008 15:22:54.895510 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svchx\" (UniqueName: \"kubernetes.io/projected/bbbff273-31b6-4f25-bf0e-e90773982c9b-kube-api-access-svchx\") pod \"certified-operators-blsq6\" (UID: \"bbbff273-31b6-4f25-bf0e-e90773982c9b\") " pod="openshift-marketplace/certified-operators-blsq6" Oct 08 15:22:54 crc kubenswrapper[4624]: I1008 15:22:54.895571 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbbff273-31b6-4f25-bf0e-e90773982c9b-catalog-content\") pod \"certified-operators-blsq6\" (UID: \"bbbff273-31b6-4f25-bf0e-e90773982c9b\") " pod="openshift-marketplace/certified-operators-blsq6" Oct 08 15:22:54 crc kubenswrapper[4624]: I1008 15:22:54.997960 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbbff273-31b6-4f25-bf0e-e90773982c9b-utilities\") pod \"certified-operators-blsq6\" (UID: \"bbbff273-31b6-4f25-bf0e-e90773982c9b\") " pod="openshift-marketplace/certified-operators-blsq6" Oct 08 15:22:54 crc kubenswrapper[4624]: I1008 15:22:54.998048 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svchx\" (UniqueName: \"kubernetes.io/projected/bbbff273-31b6-4f25-bf0e-e90773982c9b-kube-api-access-svchx\") pod \"certified-operators-blsq6\" (UID: \"bbbff273-31b6-4f25-bf0e-e90773982c9b\") " pod="openshift-marketplace/certified-operators-blsq6" Oct 08 15:22:54 crc kubenswrapper[4624]: I1008 15:22:54.998104 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbbff273-31b6-4f25-bf0e-e90773982c9b-catalog-content\") pod \"certified-operators-blsq6\" (UID: \"bbbff273-31b6-4f25-bf0e-e90773982c9b\") " pod="openshift-marketplace/certified-operators-blsq6" Oct 08 15:22:54 crc kubenswrapper[4624]: I1008 15:22:54.998610 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbbff273-31b6-4f25-bf0e-e90773982c9b-utilities\") pod \"certified-operators-blsq6\" (UID: \"bbbff273-31b6-4f25-bf0e-e90773982c9b\") " pod="openshift-marketplace/certified-operators-blsq6" Oct 08 15:22:54 crc kubenswrapper[4624]: I1008 15:22:54.998997 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbbff273-31b6-4f25-bf0e-e90773982c9b-catalog-content\") pod \"certified-operators-blsq6\" (UID: \"bbbff273-31b6-4f25-bf0e-e90773982c9b\") " pod="openshift-marketplace/certified-operators-blsq6" Oct 08 15:22:55 crc 
kubenswrapper[4624]: I1008 15:22:55.027928 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svchx\" (UniqueName: \"kubernetes.io/projected/bbbff273-31b6-4f25-bf0e-e90773982c9b-kube-api-access-svchx\") pod \"certified-operators-blsq6\" (UID: \"bbbff273-31b6-4f25-bf0e-e90773982c9b\") " pod="openshift-marketplace/certified-operators-blsq6" Oct 08 15:22:55 crc kubenswrapper[4624]: I1008 15:22:55.142018 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-blsq6" Oct 08 15:22:55 crc kubenswrapper[4624]: I1008 15:22:55.988409 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-blsq6"] Oct 08 15:22:56 crc kubenswrapper[4624]: I1008 15:22:56.249823 4624 generic.go:334] "Generic (PLEG): container finished" podID="bbbff273-31b6-4f25-bf0e-e90773982c9b" containerID="55ebc53d4afbc7beed39d68a1943304d4db53be63563f3e0d9162e732f3ff3ed" exitCode=0 Oct 08 15:22:56 crc kubenswrapper[4624]: I1008 15:22:56.249911 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blsq6" event={"ID":"bbbff273-31b6-4f25-bf0e-e90773982c9b","Type":"ContainerDied","Data":"55ebc53d4afbc7beed39d68a1943304d4db53be63563f3e0d9162e732f3ff3ed"} Oct 08 15:22:56 crc kubenswrapper[4624]: I1008 15:22:56.250130 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blsq6" event={"ID":"bbbff273-31b6-4f25-bf0e-e90773982c9b","Type":"ContainerStarted","Data":"bf6889a6a735ddac761acee298c935d3d0fb97fae6be488fc814cb0c19a2a0d2"} Oct 08 15:23:00 crc kubenswrapper[4624]: I1008 15:23:00.076285 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:23:00 crc kubenswrapper[4624]: I1008 15:23:00.076887 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:23:03 crc kubenswrapper[4624]: I1008 15:23:03.336031 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blsq6" event={"ID":"bbbff273-31b6-4f25-bf0e-e90773982c9b","Type":"ContainerStarted","Data":"57c91cf055eb993e2d8ebeec76b45e5ee21c90bc20422b44fe8cebb2d18bbc9c"} Oct 08 15:23:05 crc kubenswrapper[4624]: I1008 15:23:05.355468 4624 generic.go:334] "Generic (PLEG): container finished" podID="bbbff273-31b6-4f25-bf0e-e90773982c9b" containerID="57c91cf055eb993e2d8ebeec76b45e5ee21c90bc20422b44fe8cebb2d18bbc9c" exitCode=0 Oct 08 15:23:05 crc kubenswrapper[4624]: I1008 15:23:05.355588 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blsq6" event={"ID":"bbbff273-31b6-4f25-bf0e-e90773982c9b","Type":"ContainerDied","Data":"57c91cf055eb993e2d8ebeec76b45e5ee21c90bc20422b44fe8cebb2d18bbc9c"} Oct 08 15:23:06 crc kubenswrapper[4624]: I1008 15:23:06.367049 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blsq6" 
event={"ID":"bbbff273-31b6-4f25-bf0e-e90773982c9b","Type":"ContainerStarted","Data":"7455260eece3325676a2850e1e2bded4353dbc45efb74b05658adcf7deb7290a"} Oct 08 15:23:06 crc kubenswrapper[4624]: I1008 15:23:06.390016 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-blsq6" podStartSLOduration=2.712610071 podStartE2EDuration="12.389993493s" podCreationTimestamp="2025-10-08 15:22:54 +0000 UTC" firstStartedPulling="2025-10-08 15:22:56.251538922 +0000 UTC m=+3601.402473999" lastFinishedPulling="2025-10-08 15:23:05.928922344 +0000 UTC m=+3611.079857421" observedRunningTime="2025-10-08 15:23:06.383956018 +0000 UTC m=+3611.534891115" watchObservedRunningTime="2025-10-08 15:23:06.389993493 +0000 UTC m=+3611.540928570" Oct 08 15:23:15 crc kubenswrapper[4624]: I1008 15:23:15.143946 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-blsq6" Oct 08 15:23:15 crc kubenswrapper[4624]: I1008 15:23:15.144547 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-blsq6" Oct 08 15:23:16 crc kubenswrapper[4624]: I1008 15:23:16.190838 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-blsq6" podUID="bbbff273-31b6-4f25-bf0e-e90773982c9b" containerName="registry-server" probeResult="failure" output=< Oct 08 15:23:16 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 15:23:16 crc kubenswrapper[4624]: > Oct 08 15:23:25 crc kubenswrapper[4624]: I1008 15:23:25.209360 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-blsq6" Oct 08 15:23:25 crc kubenswrapper[4624]: I1008 15:23:25.266741 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-blsq6" Oct 08 15:23:25 crc kubenswrapper[4624]: I1008 15:23:25.385664 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-blsq6"] Oct 08 15:23:25 crc kubenswrapper[4624]: I1008 15:23:25.435171 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dk74n"] Oct 08 15:23:25 crc kubenswrapper[4624]: I1008 15:23:25.442761 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dk74n" podUID="72543833-c00b-453e-af36-5b6f32dd3d71" containerName="registry-server" containerID="cri-o://d6484a26988c13526057ca408a2c1131b7093175f88a851f4840528cd28e59a8" gracePeriod=2 Oct 08 15:23:26 crc kubenswrapper[4624]: I1008 15:23:26.577180 4624 generic.go:334] "Generic (PLEG): container finished" podID="72543833-c00b-453e-af36-5b6f32dd3d71" containerID="d6484a26988c13526057ca408a2c1131b7093175f88a851f4840528cd28e59a8" exitCode=0 Oct 08 15:23:26 crc kubenswrapper[4624]: I1008 15:23:26.577255 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk74n" event={"ID":"72543833-c00b-453e-af36-5b6f32dd3d71","Type":"ContainerDied","Data":"d6484a26988c13526057ca408a2c1131b7093175f88a851f4840528cd28e59a8"} Oct 08 15:23:27 crc kubenswrapper[4624]: I1008 15:23:27.116038 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dk74n" Oct 08 15:23:27 crc kubenswrapper[4624]: I1008 15:23:27.196730 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72543833-c00b-453e-af36-5b6f32dd3d71-utilities\") pod \"72543833-c00b-453e-af36-5b6f32dd3d71\" (UID: \"72543833-c00b-453e-af36-5b6f32dd3d71\") " Oct 08 15:23:27 crc kubenswrapper[4624]: I1008 15:23:27.196845 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72543833-c00b-453e-af36-5b6f32dd3d71-catalog-content\") pod \"72543833-c00b-453e-af36-5b6f32dd3d71\" (UID: \"72543833-c00b-453e-af36-5b6f32dd3d71\") " Oct 08 15:23:27 crc kubenswrapper[4624]: I1008 15:23:27.196916 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h472k\" (UniqueName: \"kubernetes.io/projected/72543833-c00b-453e-af36-5b6f32dd3d71-kube-api-access-h472k\") pod \"72543833-c00b-453e-af36-5b6f32dd3d71\" (UID: \"72543833-c00b-453e-af36-5b6f32dd3d71\") " Oct 08 15:23:27 crc kubenswrapper[4624]: I1008 15:23:27.202327 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72543833-c00b-453e-af36-5b6f32dd3d71-utilities" (OuterVolumeSpecName: "utilities") pod "72543833-c00b-453e-af36-5b6f32dd3d71" (UID: "72543833-c00b-453e-af36-5b6f32dd3d71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:23:27 crc kubenswrapper[4624]: I1008 15:23:27.258015 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72543833-c00b-453e-af36-5b6f32dd3d71-kube-api-access-h472k" (OuterVolumeSpecName: "kube-api-access-h472k") pod "72543833-c00b-453e-af36-5b6f32dd3d71" (UID: "72543833-c00b-453e-af36-5b6f32dd3d71"). InnerVolumeSpecName "kube-api-access-h472k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:23:27 crc kubenswrapper[4624]: I1008 15:23:27.277994 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72543833-c00b-453e-af36-5b6f32dd3d71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72543833-c00b-453e-af36-5b6f32dd3d71" (UID: "72543833-c00b-453e-af36-5b6f32dd3d71"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:23:27 crc kubenswrapper[4624]: I1008 15:23:27.299495 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72543833-c00b-453e-af36-5b6f32dd3d71-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 15:23:27 crc kubenswrapper[4624]: I1008 15:23:27.299523 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72543833-c00b-453e-af36-5b6f32dd3d71-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 15:23:27 crc kubenswrapper[4624]: I1008 15:23:27.299545 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h472k\" (UniqueName: \"kubernetes.io/projected/72543833-c00b-453e-af36-5b6f32dd3d71-kube-api-access-h472k\") on node \"crc\" DevicePath \"\"" Oct 08 15:23:27 crc kubenswrapper[4624]: I1008 15:23:27.589683 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk74n" event={"ID":"72543833-c00b-453e-af36-5b6f32dd3d71","Type":"ContainerDied","Data":"0d82c634e5e9c8f711f4ab7b337a8aa018dd9e0d006dcbed9af3ff08e6e184a2"} Oct 08 15:23:27 crc kubenswrapper[4624]: I1008 15:23:27.589768 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dk74n" Oct 08 15:23:27 crc kubenswrapper[4624]: I1008 15:23:27.592703 4624 scope.go:117] "RemoveContainer" containerID="d6484a26988c13526057ca408a2c1131b7093175f88a851f4840528cd28e59a8" Oct 08 15:23:27 crc kubenswrapper[4624]: I1008 15:23:27.620814 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dk74n"] Oct 08 15:23:27 crc kubenswrapper[4624]: I1008 15:23:27.632940 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dk74n"] Oct 08 15:23:27 crc kubenswrapper[4624]: I1008 15:23:27.645477 4624 scope.go:117] "RemoveContainer" containerID="5d1a4fa65a19032840f8e66c07dfaeaa835a030e38cff2590af6c2f9a97bd79e" Oct 08 15:23:27 crc kubenswrapper[4624]: I1008 15:23:27.673100 4624 scope.go:117] "RemoveContainer" containerID="2d2aa07baeae2c8471911d86cd6502b78448a52143437a1958e9d93b210b8b16" Oct 08 15:23:29 crc kubenswrapper[4624]: I1008 15:23:29.487120 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72543833-c00b-453e-af36-5b6f32dd3d71" path="/var/lib/kubelet/pods/72543833-c00b-453e-af36-5b6f32dd3d71/volumes" Oct 08 15:23:30 crc kubenswrapper[4624]: I1008 15:23:30.076972 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:23:30 crc kubenswrapper[4624]: I1008 15:23:30.077840 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:23:30 crc kubenswrapper[4624]: I1008 15:23:30.077888 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 15:23:30 crc kubenswrapper[4624]: I1008 15:23:30.078664 4624 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 15:23:30 crc kubenswrapper[4624]: I1008 15:23:30.078729 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df" gracePeriod=600 Oct 08 15:23:30 crc kubenswrapper[4624]: E1008 15:23:30.212522 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:23:30 crc kubenswrapper[4624]: I1008 15:23:30.620962 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df" exitCode=0 Oct 08 15:23:30 crc kubenswrapper[4624]: I1008 15:23:30.621028 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df"} Oct 08 15:23:30 crc kubenswrapper[4624]: I1008 15:23:30.621245 4624 scope.go:117] "RemoveContainer" containerID="b2a3b8e8854c1eef3db1ce6eb26bff455ba6daf2a25aafe06bd729d74f88ee2a" Oct 08 15:23:30 crc kubenswrapper[4624]: I1008 15:23:30.621753 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df" Oct 08 15:23:30 crc kubenswrapper[4624]: E1008 15:23:30.621984 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:23:44 crc kubenswrapper[4624]: I1008 15:23:44.466469 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df" Oct 08 15:23:44 crc kubenswrapper[4624]: E1008 15:23:44.468220 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:23:53 crc kubenswrapper[4624]: E1008 15:23:53.638862 4624 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.154:40084->38.102.83.154:39627: 
write tcp 38.102.83.154:40084->38.102.83.154:39627: write: connection reset by peer Oct 08 15:23:54 crc kubenswrapper[4624]: E1008 15:23:54.098980 4624 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.154:40108->38.102.83.154:39627: write tcp 38.102.83.154:40108->38.102.83.154:39627: write: connection reset by peer Oct 08 15:23:55 crc kubenswrapper[4624]: I1008 15:23:55.471924 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df" Oct 08 15:23:55 crc kubenswrapper[4624]: E1008 15:23:55.475738 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:24:07 crc kubenswrapper[4624]: I1008 15:24:07.466091 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df" Oct 08 15:24:07 crc kubenswrapper[4624]: E1008 15:24:07.466926 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:24:21 crc kubenswrapper[4624]: I1008 15:24:21.468803 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df" Oct 08 15:24:21 crc kubenswrapper[4624]: E1008 15:24:21.470043 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:24:35 crc kubenswrapper[4624]: I1008 15:24:35.474224 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df" Oct 08 15:24:35 crc kubenswrapper[4624]: E1008 15:24:35.481396 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:24:49 crc kubenswrapper[4624]: I1008 15:24:49.466040 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df" Oct 08 15:24:49 crc kubenswrapper[4624]: E1008 15:24:49.468118 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:25:03 crc kubenswrapper[4624]: I1008 15:25:03.469482 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df" Oct 08 15:25:03 crc kubenswrapper[4624]: E1008 15:25:03.470438 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:25:16 crc kubenswrapper[4624]: I1008 15:25:16.466564 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df" Oct 08 15:25:16 crc kubenswrapper[4624]: E1008 15:25:16.467342 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:25:27 crc kubenswrapper[4624]: I1008 15:25:27.466089 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df" Oct 08 15:25:27 crc kubenswrapper[4624]: E1008 15:25:27.466906 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:25:34 crc kubenswrapper[4624]: E1008 15:25:34.741714 4624 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.154:54876->38.102.83.154:39627: write tcp 38.102.83.154:54876->38.102.83.154:39627: write: broken pipe Oct 08 15:25:39 crc kubenswrapper[4624]: I1008 15:25:39.466945 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df" Oct 08 15:25:39 crc kubenswrapper[4624]: E1008 15:25:39.468796 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:25:50 crc kubenswrapper[4624]: I1008 15:25:50.465866 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df" Oct 08 15:25:50 crc kubenswrapper[4624]: E1008 15:25:50.466762 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:25:52 crc kubenswrapper[4624]: I1008 15:25:52.635062 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lqssd"] Oct 08 15:25:52 crc kubenswrapper[4624]: E1008 15:25:52.646155 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72543833-c00b-453e-af36-5b6f32dd3d71" containerName="extract-utilities" Oct 08 15:25:52 crc kubenswrapper[4624]: I1008 15:25:52.646217 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="72543833-c00b-453e-af36-5b6f32dd3d71" containerName="extract-utilities" Oct 08 15:25:52 crc kubenswrapper[4624]: E1008 15:25:52.646886 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72543833-c00b-453e-af36-5b6f32dd3d71" containerName="extract-content" Oct 08 15:25:52 crc kubenswrapper[4624]: I1008 15:25:52.646902 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="72543833-c00b-453e-af36-5b6f32dd3d71" containerName="extract-content" Oct 08 15:25:52 crc kubenswrapper[4624]: E1008 15:25:52.646931 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72543833-c00b-453e-af36-5b6f32dd3d71" containerName="registry-server" Oct 08 15:25:52 crc kubenswrapper[4624]: I1008 15:25:52.646939 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="72543833-c00b-453e-af36-5b6f32dd3d71" containerName="registry-server" Oct 08 15:25:52 crc kubenswrapper[4624]: I1008 15:25:52.649749 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="72543833-c00b-453e-af36-5b6f32dd3d71" containerName="registry-server" Oct 08 15:25:52 crc kubenswrapper[4624]: I1008 15:25:52.689000 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lqssd" Oct 08 15:25:52 crc kubenswrapper[4624]: I1008 15:25:52.754610 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb947b58-77fc-4cbe-aa29-06c1f276e442-utilities\") pod \"redhat-marketplace-lqssd\" (UID: \"eb947b58-77fc-4cbe-aa29-06c1f276e442\") " pod="openshift-marketplace/redhat-marketplace-lqssd" Oct 08 15:25:52 crc kubenswrapper[4624]: I1008 15:25:52.755148 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb947b58-77fc-4cbe-aa29-06c1f276e442-catalog-content\") pod \"redhat-marketplace-lqssd\" (UID: \"eb947b58-77fc-4cbe-aa29-06c1f276e442\") " pod="openshift-marketplace/redhat-marketplace-lqssd" Oct 08 15:25:52 crc kubenswrapper[4624]: I1008 15:25:52.755383 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7857g\" (UniqueName: \"kubernetes.io/projected/eb947b58-77fc-4cbe-aa29-06c1f276e442-kube-api-access-7857g\") pod \"redhat-marketplace-lqssd\" (UID: \"eb947b58-77fc-4cbe-aa29-06c1f276e442\") " pod="openshift-marketplace/redhat-marketplace-lqssd" Oct 08 15:25:52 crc kubenswrapper[4624]: I1008 15:25:52.857599 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb947b58-77fc-4cbe-aa29-06c1f276e442-catalog-content\") pod \"redhat-marketplace-lqssd\" (UID: \"eb947b58-77fc-4cbe-aa29-06c1f276e442\") " pod="openshift-marketplace/redhat-marketplace-lqssd" Oct 08 15:25:52 crc kubenswrapper[4624]: I1008 15:25:52.858092 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7857g\" (UniqueName: \"kubernetes.io/projected/eb947b58-77fc-4cbe-aa29-06c1f276e442-kube-api-access-7857g\") pod \"redhat-marketplace-lqssd\" (UID: \"eb947b58-77fc-4cbe-aa29-06c1f276e442\") " pod="openshift-marketplace/redhat-marketplace-lqssd" Oct 08 15:25:52 crc kubenswrapper[4624]: I1008 15:25:52.858167 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb947b58-77fc-4cbe-aa29-06c1f276e442-utilities\") pod \"redhat-marketplace-lqssd\" (UID: \"eb947b58-77fc-4cbe-aa29-06c1f276e442\") " pod="openshift-marketplace/redhat-marketplace-lqssd" Oct 08 15:25:53 crc kubenswrapper[4624]: I1008 15:25:53.038615 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqssd"] Oct 08 15:25:53 crc kubenswrapper[4624]: I1008 15:25:53.104771 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb947b58-77fc-4cbe-aa29-06c1f276e442-catalog-content\") pod \"redhat-marketplace-lqssd\" (UID: \"eb947b58-77fc-4cbe-aa29-06c1f276e442\") " pod="openshift-marketplace/redhat-marketplace-lqssd" Oct 08 15:25:53 crc kubenswrapper[4624]: I1008 15:25:53.105945 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb947b58-77fc-4cbe-aa29-06c1f276e442-utilities\") pod \"redhat-marketplace-lqssd\" (UID: \"eb947b58-77fc-4cbe-aa29-06c1f276e442\") " pod="openshift-marketplace/redhat-marketplace-lqssd" Oct 08 15:25:53 crc kubenswrapper[4624]: I1008 15:25:53.210382 4624 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7857g\" (UniqueName: \"kubernetes.io/projected/eb947b58-77fc-4cbe-aa29-06c1f276e442-kube-api-access-7857g\") pod \"redhat-marketplace-lqssd\" (UID: \"eb947b58-77fc-4cbe-aa29-06c1f276e442\") " pod="openshift-marketplace/redhat-marketplace-lqssd"
Oct 08 15:25:53 crc kubenswrapper[4624]: I1008 15:25:53.451010 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lqssd"
Oct 08 15:25:55 crc kubenswrapper[4624]: I1008 15:25:55.337086 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqssd"]
Oct 08 15:25:55 crc kubenswrapper[4624]: I1008 15:25:55.967938 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqssd" event={"ID":"eb947b58-77fc-4cbe-aa29-06c1f276e442","Type":"ContainerDied","Data":"446a85e47784722d0062286856a42a6390526f5aae7311c45c4d9af147145076"}
Oct 08 15:25:55 crc kubenswrapper[4624]: I1008 15:25:55.967813 4624 generic.go:334] "Generic (PLEG): container finished" podID="eb947b58-77fc-4cbe-aa29-06c1f276e442" containerID="446a85e47784722d0062286856a42a6390526f5aae7311c45c4d9af147145076" exitCode=0
Oct 08 15:25:55 crc kubenswrapper[4624]: I1008 15:25:55.968490 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqssd" event={"ID":"eb947b58-77fc-4cbe-aa29-06c1f276e442","Type":"ContainerStarted","Data":"8e40db499281702745b62e34e69ca26ce6aa20926c801950518f14179017f914"}
Oct 08 15:25:55 crc kubenswrapper[4624]: I1008 15:25:55.976737 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 08 15:25:57 crc kubenswrapper[4624]: I1008 15:25:57.989873 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqssd" event={"ID":"eb947b58-77fc-4cbe-aa29-06c1f276e442","Type":"ContainerStarted","Data":"998b23c1980adc531d8458bc9d6bc0ca197810a3c36e0ba7a979d9fe0b7484e4"}
Oct 08 15:26:00 crc kubenswrapper[4624]: I1008 15:26:00.009134 4624 generic.go:334] "Generic (PLEG): container finished" podID="eb947b58-77fc-4cbe-aa29-06c1f276e442" containerID="998b23c1980adc531d8458bc9d6bc0ca197810a3c36e0ba7a979d9fe0b7484e4" exitCode=0
Oct 08 15:26:00 crc kubenswrapper[4624]: I1008 15:26:00.009333 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqssd" event={"ID":"eb947b58-77fc-4cbe-aa29-06c1f276e442","Type":"ContainerDied","Data":"998b23c1980adc531d8458bc9d6bc0ca197810a3c36e0ba7a979d9fe0b7484e4"}
Oct 08 15:26:01 crc kubenswrapper[4624]: I1008 15:26:01.021744 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqssd" event={"ID":"eb947b58-77fc-4cbe-aa29-06c1f276e442","Type":"ContainerStarted","Data":"3b524ebcf06474c5fee087e79bc375366029bcf1543859b5d78ce34c6255a1a8"}
Oct 08 15:26:01 crc kubenswrapper[4624]: I1008 15:26:01.050684 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lqssd" podStartSLOduration=4.41777998 podStartE2EDuration="9.050666277s" podCreationTimestamp="2025-10-08 15:25:52 +0000 UTC" firstStartedPulling="2025-10-08 15:25:55.970506998 +0000 UTC m=+3781.121442075" lastFinishedPulling="2025-10-08 15:26:00.603393295 +0000 UTC m=+3785.754328372" observedRunningTime="2025-10-08 15:26:01.046478833 +0000 UTC m=+3786.197413920" watchObservedRunningTime="2025-10-08 15:26:01.050666277 +0000 UTC m=+3786.201601354"
Oct 08 15:26:02 crc kubenswrapper[4624]: I1008 15:26:02.465777 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df"
Oct 08 15:26:02 crc kubenswrapper[4624]: E1008 15:26:02.466696 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:26:03 crc kubenswrapper[4624]: I1008 15:26:03.452067 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lqssd"
Oct 08 15:26:03 crc kubenswrapper[4624]: I1008 15:26:03.452145 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lqssd"
Oct 08 15:26:04 crc kubenswrapper[4624]: I1008 15:26:04.514696 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-lqssd" podUID="eb947b58-77fc-4cbe-aa29-06c1f276e442" containerName="registry-server" probeResult="failure" output=<
Oct 08 15:26:04 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 15:26:04 crc kubenswrapper[4624]: >
Oct 08 15:26:13 crc kubenswrapper[4624]: I1008 15:26:13.466815 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df"
Oct 08 15:26:13 crc kubenswrapper[4624]: E1008 15:26:13.467672 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:26:13 crc kubenswrapper[4624]: I1008 15:26:13.498524 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lqssd"
Oct 08 15:26:13 crc kubenswrapper[4624]: I1008 15:26:13.553849 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lqssd"
Oct 08 15:26:13 crc kubenswrapper[4624]: I1008 15:26:13.737353 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqssd"]
Oct 08 15:26:15 crc kubenswrapper[4624]: I1008 15:26:15.142803 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lqssd" podUID="eb947b58-77fc-4cbe-aa29-06c1f276e442" containerName="registry-server" containerID="cri-o://3b524ebcf06474c5fee087e79bc375366029bcf1543859b5d78ce34c6255a1a8" gracePeriod=2
Oct 08 15:26:15 crc kubenswrapper[4624]: I1008 15:26:15.858380 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lqssd"
Oct 08 15:26:15 crc kubenswrapper[4624]: I1008 15:26:15.961114 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7857g\" (UniqueName: \"kubernetes.io/projected/eb947b58-77fc-4cbe-aa29-06c1f276e442-kube-api-access-7857g\") pod \"eb947b58-77fc-4cbe-aa29-06c1f276e442\" (UID: \"eb947b58-77fc-4cbe-aa29-06c1f276e442\") "
Oct 08 15:26:15 crc kubenswrapper[4624]: I1008 15:26:15.961437 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb947b58-77fc-4cbe-aa29-06c1f276e442-utilities\") pod \"eb947b58-77fc-4cbe-aa29-06c1f276e442\" (UID: \"eb947b58-77fc-4cbe-aa29-06c1f276e442\") "
Oct 08 15:26:15 crc kubenswrapper[4624]: I1008 15:26:15.961589 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb947b58-77fc-4cbe-aa29-06c1f276e442-catalog-content\") pod \"eb947b58-77fc-4cbe-aa29-06c1f276e442\" (UID: \"eb947b58-77fc-4cbe-aa29-06c1f276e442\") "
Oct 08 15:26:16 crc kubenswrapper[4624]: I1008 15:26:15.994717 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb947b58-77fc-4cbe-aa29-06c1f276e442-utilities" (OuterVolumeSpecName: "utilities") pod "eb947b58-77fc-4cbe-aa29-06c1f276e442" (UID: "eb947b58-77fc-4cbe-aa29-06c1f276e442"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 15:26:16 crc kubenswrapper[4624]: I1008 15:26:16.058042 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb947b58-77fc-4cbe-aa29-06c1f276e442-kube-api-access-7857g" (OuterVolumeSpecName: "kube-api-access-7857g") pod "eb947b58-77fc-4cbe-aa29-06c1f276e442" (UID: "eb947b58-77fc-4cbe-aa29-06c1f276e442"). InnerVolumeSpecName "kube-api-access-7857g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 15:26:16 crc kubenswrapper[4624]: I1008 15:26:16.065059 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb947b58-77fc-4cbe-aa29-06c1f276e442-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 15:26:16 crc kubenswrapper[4624]: I1008 15:26:16.065094 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7857g\" (UniqueName: \"kubernetes.io/projected/eb947b58-77fc-4cbe-aa29-06c1f276e442-kube-api-access-7857g\") on node \"crc\" DevicePath \"\""
Oct 08 15:26:16 crc kubenswrapper[4624]: I1008 15:26:16.069861 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb947b58-77fc-4cbe-aa29-06c1f276e442-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb947b58-77fc-4cbe-aa29-06c1f276e442" (UID: "eb947b58-77fc-4cbe-aa29-06c1f276e442"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 15:26:16 crc kubenswrapper[4624]: I1008 15:26:16.167219 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb947b58-77fc-4cbe-aa29-06c1f276e442-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 15:26:16 crc kubenswrapper[4624]: I1008 15:26:16.172208 4624 generic.go:334] "Generic (PLEG): container finished" podID="eb947b58-77fc-4cbe-aa29-06c1f276e442" containerID="3b524ebcf06474c5fee087e79bc375366029bcf1543859b5d78ce34c6255a1a8" exitCode=0
Oct 08 15:26:16 crc kubenswrapper[4624]: I1008 15:26:16.172258 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqssd" event={"ID":"eb947b58-77fc-4cbe-aa29-06c1f276e442","Type":"ContainerDied","Data":"3b524ebcf06474c5fee087e79bc375366029bcf1543859b5d78ce34c6255a1a8"}
Oct 08 15:26:16 crc kubenswrapper[4624]: I1008 15:26:16.172282 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lqssd"
Oct 08 15:26:16 crc kubenswrapper[4624]: I1008 15:26:16.172308 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqssd" event={"ID":"eb947b58-77fc-4cbe-aa29-06c1f276e442","Type":"ContainerDied","Data":"8e40db499281702745b62e34e69ca26ce6aa20926c801950518f14179017f914"}
Oct 08 15:26:16 crc kubenswrapper[4624]: I1008 15:26:16.172332 4624 scope.go:117] "RemoveContainer" containerID="3b524ebcf06474c5fee087e79bc375366029bcf1543859b5d78ce34c6255a1a8"
Oct 08 15:26:16 crc kubenswrapper[4624]: I1008 15:26:16.230124 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqssd"]
Oct 08 15:26:16 crc kubenswrapper[4624]: I1008 15:26:16.242110 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqssd"]
Oct 08 15:26:16 crc kubenswrapper[4624]: I1008 15:26:16.257128 4624 scope.go:117] "RemoveContainer" containerID="998b23c1980adc531d8458bc9d6bc0ca197810a3c36e0ba7a979d9fe0b7484e4"
Oct 08 15:26:16 crc kubenswrapper[4624]: I1008 15:26:16.323133 4624 scope.go:117] "RemoveContainer" containerID="446a85e47784722d0062286856a42a6390526f5aae7311c45c4d9af147145076"
Oct 08 15:26:16 crc kubenswrapper[4624]: I1008 15:26:16.362113 4624 scope.go:117] "RemoveContainer" containerID="3b524ebcf06474c5fee087e79bc375366029bcf1543859b5d78ce34c6255a1a8"
Oct 08 15:26:16 crc kubenswrapper[4624]: E1008 15:26:16.366958 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b524ebcf06474c5fee087e79bc375366029bcf1543859b5d78ce34c6255a1a8\": container with ID starting with 3b524ebcf06474c5fee087e79bc375366029bcf1543859b5d78ce34c6255a1a8 not found: ID does not exist" containerID="3b524ebcf06474c5fee087e79bc375366029bcf1543859b5d78ce34c6255a1a8"
Oct 08 15:26:16 crc kubenswrapper[4624]: I1008 15:26:16.369468 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b524ebcf06474c5fee087e79bc375366029bcf1543859b5d78ce34c6255a1a8"} err="failed to get container status \"3b524ebcf06474c5fee087e79bc375366029bcf1543859b5d78ce34c6255a1a8\": rpc error: code = NotFound desc = could not find container \"3b524ebcf06474c5fee087e79bc375366029bcf1543859b5d78ce34c6255a1a8\": container with ID starting with 3b524ebcf06474c5fee087e79bc375366029bcf1543859b5d78ce34c6255a1a8 not found: ID does not exist"
Oct 08 15:26:16 crc kubenswrapper[4624]: I1008 15:26:16.369540 4624 scope.go:117] "RemoveContainer" containerID="998b23c1980adc531d8458bc9d6bc0ca197810a3c36e0ba7a979d9fe0b7484e4"
Oct 08 15:26:16 crc kubenswrapper[4624]: E1008 15:26:16.370305 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"998b23c1980adc531d8458bc9d6bc0ca197810a3c36e0ba7a979d9fe0b7484e4\": container with ID starting with 998b23c1980adc531d8458bc9d6bc0ca197810a3c36e0ba7a979d9fe0b7484e4 not found: ID does not exist" containerID="998b23c1980adc531d8458bc9d6bc0ca197810a3c36e0ba7a979d9fe0b7484e4"
Oct 08 15:26:16 crc kubenswrapper[4624]: I1008 15:26:16.370334 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"998b23c1980adc531d8458bc9d6bc0ca197810a3c36e0ba7a979d9fe0b7484e4"} err="failed to get container status \"998b23c1980adc531d8458bc9d6bc0ca197810a3c36e0ba7a979d9fe0b7484e4\": rpc error: code = NotFound desc = could not find container \"998b23c1980adc531d8458bc9d6bc0ca197810a3c36e0ba7a979d9fe0b7484e4\": container with ID starting with 998b23c1980adc531d8458bc9d6bc0ca197810a3c36e0ba7a979d9fe0b7484e4 not found: ID does not exist"
Oct 08 15:26:16 crc kubenswrapper[4624]: I1008 15:26:16.370351 4624 scope.go:117] "RemoveContainer" containerID="446a85e47784722d0062286856a42a6390526f5aae7311c45c4d9af147145076"
Oct 08 15:26:16 crc kubenswrapper[4624]: E1008 15:26:16.370710 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"446a85e47784722d0062286856a42a6390526f5aae7311c45c4d9af147145076\": container with ID starting with 446a85e47784722d0062286856a42a6390526f5aae7311c45c4d9af147145076 not found: ID does not exist" containerID="446a85e47784722d0062286856a42a6390526f5aae7311c45c4d9af147145076"
Oct 08 15:26:16 crc kubenswrapper[4624]: I1008 15:26:16.370732 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"446a85e47784722d0062286856a42a6390526f5aae7311c45c4d9af147145076"} err="failed to get container status \"446a85e47784722d0062286856a42a6390526f5aae7311c45c4d9af147145076\": rpc error: code = NotFound desc = could not find container \"446a85e47784722d0062286856a42a6390526f5aae7311c45c4d9af147145076\": container with ID starting with 446a85e47784722d0062286856a42a6390526f5aae7311c45c4d9af147145076 not found: ID does not exist"
Oct 08 15:26:17 crc kubenswrapper[4624]: I1008 15:26:17.478384 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb947b58-77fc-4cbe-aa29-06c1f276e442" path="/var/lib/kubelet/pods/eb947b58-77fc-4cbe-aa29-06c1f276e442/volumes"
Oct 08 15:26:26 crc kubenswrapper[4624]: I1008 15:26:26.466195 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df"
Oct 08 15:26:26 crc kubenswrapper[4624]: E1008 15:26:26.467041 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:26:40 crc kubenswrapper[4624]: I1008 15:26:40.466372 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df"
Oct 08 15:26:40 crc kubenswrapper[4624]: E1008 15:26:40.468849 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:26:54 crc kubenswrapper[4624]: I1008 15:26:54.466598 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df"
Oct 08 15:26:54 crc kubenswrapper[4624]: E1008 15:26:54.467479 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:27:07 crc kubenswrapper[4624]: I1008 15:27:07.465982 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df"
Oct 08 15:27:07 crc kubenswrapper[4624]: E1008 15:27:07.468817 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:27:19 crc kubenswrapper[4624]: I1008 15:27:19.465732 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df"
Oct 08 15:27:19 crc kubenswrapper[4624]: E1008 15:27:19.467491 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:27:34 crc kubenswrapper[4624]: I1008 15:27:34.466210 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df"
Oct 08 15:27:34 crc kubenswrapper[4624]: E1008 15:27:34.467072 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:27:45 crc kubenswrapper[4624]: I1008 15:27:45.473209 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df"
Oct 08 15:27:45 crc kubenswrapper[4624]: E1008 15:27:45.473907 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:27:59 crc kubenswrapper[4624]: I1008 15:27:59.466897 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df"
Oct 08 15:27:59 crc kubenswrapper[4624]: E1008 15:27:59.467741 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:28:12 crc kubenswrapper[4624]: I1008 15:28:12.466257 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df"
Oct 08 15:28:12 crc kubenswrapper[4624]: E1008 15:28:12.467010 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:28:25 crc kubenswrapper[4624]: I1008 15:28:25.466630 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df"
Oct 08 15:28:25 crc kubenswrapper[4624]: E1008 15:28:25.467562 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:28:37 crc kubenswrapper[4624]: I1008 15:28:37.465674 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df"
Oct 08 15:28:38 crc kubenswrapper[4624]: I1008 15:28:38.584853 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"1f436fa71e649ae33432d5cc5ffd7f1d451c5e440a8bbbd964c9f92129f31ff9"}
Oct 08 15:28:59 crc kubenswrapper[4624]: I1008 15:28:59.981173 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fsq7r"]
Oct 08 15:28:59 crc kubenswrapper[4624]: E1008 15:28:59.984470 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb947b58-77fc-4cbe-aa29-06c1f276e442" containerName="extract-content"
Oct 08 15:28:59 crc kubenswrapper[4624]: I1008 15:28:59.984510 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb947b58-77fc-4cbe-aa29-06c1f276e442" containerName="extract-content"
Oct 08 15:28:59 crc kubenswrapper[4624]: E1008 15:28:59.985061 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb947b58-77fc-4cbe-aa29-06c1f276e442" containerName="registry-server"
Oct 08 15:28:59 crc kubenswrapper[4624]: I1008 15:28:59.985075 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb947b58-77fc-4cbe-aa29-06c1f276e442" containerName="registry-server"
Oct 08 15:28:59 crc kubenswrapper[4624]: E1008 15:28:59.985105 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb947b58-77fc-4cbe-aa29-06c1f276e442" containerName="extract-utilities"
Oct 08 15:28:59 crc kubenswrapper[4624]: I1008 15:28:59.985114 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb947b58-77fc-4cbe-aa29-06c1f276e442" containerName="extract-utilities"
Oct 08 15:28:59 crc kubenswrapper[4624]: I1008 15:28:59.986073 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb947b58-77fc-4cbe-aa29-06c1f276e442" containerName="registry-server"
Oct 08 15:28:59 crc kubenswrapper[4624]: I1008 15:28:59.995655 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fsq7r"
Oct 08 15:29:00 crc kubenswrapper[4624]: I1008 15:29:00.024152 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fsq7r"]
Oct 08 15:29:00 crc kubenswrapper[4624]: I1008 15:29:00.098272 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63ce4953-c0bd-4344-ad9a-5f9510705792-utilities\") pod \"community-operators-fsq7r\" (UID: \"63ce4953-c0bd-4344-ad9a-5f9510705792\") " pod="openshift-marketplace/community-operators-fsq7r"
Oct 08 15:29:00 crc kubenswrapper[4624]: I1008 15:29:00.098356 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ldjt\" (UniqueName: \"kubernetes.io/projected/63ce4953-c0bd-4344-ad9a-5f9510705792-kube-api-access-8ldjt\") pod \"community-operators-fsq7r\" (UID: \"63ce4953-c0bd-4344-ad9a-5f9510705792\") " pod="openshift-marketplace/community-operators-fsq7r"
Oct 08 15:29:00 crc kubenswrapper[4624]: I1008 15:29:00.098455 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63ce4953-c0bd-4344-ad9a-5f9510705792-catalog-content\") pod \"community-operators-fsq7r\" (UID: \"63ce4953-c0bd-4344-ad9a-5f9510705792\") " pod="openshift-marketplace/community-operators-fsq7r"
Oct 08 15:29:00 crc kubenswrapper[4624]: I1008 15:29:00.201039 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63ce4953-c0bd-4344-ad9a-5f9510705792-catalog-content\") pod \"community-operators-fsq7r\" (UID: \"63ce4953-c0bd-4344-ad9a-5f9510705792\") " pod="openshift-marketplace/community-operators-fsq7r"
Oct 08 15:29:00 crc kubenswrapper[4624]: I1008 15:29:00.201213 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63ce4953-c0bd-4344-ad9a-5f9510705792-utilities\") pod \"community-operators-fsq7r\" (UID: \"63ce4953-c0bd-4344-ad9a-5f9510705792\") " pod="openshift-marketplace/community-operators-fsq7r"
Oct 08 15:29:00 crc kubenswrapper[4624]: I1008 15:29:00.201254 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ldjt\" (UniqueName: \"kubernetes.io/projected/63ce4953-c0bd-4344-ad9a-5f9510705792-kube-api-access-8ldjt\") pod \"community-operators-fsq7r\" (UID: \"63ce4953-c0bd-4344-ad9a-5f9510705792\") " pod="openshift-marketplace/community-operators-fsq7r"
Oct 08 15:29:00 crc kubenswrapper[4624]: I1008 15:29:00.205707 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63ce4953-c0bd-4344-ad9a-5f9510705792-catalog-content\") pod \"community-operators-fsq7r\" (UID: \"63ce4953-c0bd-4344-ad9a-5f9510705792\") " pod="openshift-marketplace/community-operators-fsq7r"
Oct 08 15:29:00 crc kubenswrapper[4624]: I1008 15:29:00.206805 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63ce4953-c0bd-4344-ad9a-5f9510705792-utilities\") pod \"community-operators-fsq7r\" (UID: \"63ce4953-c0bd-4344-ad9a-5f9510705792\") " pod="openshift-marketplace/community-operators-fsq7r"
Oct 08 15:29:00 crc kubenswrapper[4624]: I1008 15:29:00.227213 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ldjt\" (UniqueName: \"kubernetes.io/projected/63ce4953-c0bd-4344-ad9a-5f9510705792-kube-api-access-8ldjt\") pod \"community-operators-fsq7r\" (UID: \"63ce4953-c0bd-4344-ad9a-5f9510705792\") " pod="openshift-marketplace/community-operators-fsq7r"
Oct 08 15:29:00 crc kubenswrapper[4624]: I1008 15:29:00.320878 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fsq7r"
Oct 08 15:29:01 crc kubenswrapper[4624]: I1008 15:29:01.662349 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fsq7r"]
Oct 08 15:29:01 crc kubenswrapper[4624]: I1008 15:29:01.851182 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsq7r" event={"ID":"63ce4953-c0bd-4344-ad9a-5f9510705792","Type":"ContainerStarted","Data":"f85bc7b10eab27002e9d3ff8651eaa40a64be2a2a87329e2c90ccc70979bc9ae"}
Oct 08 15:29:02 crc kubenswrapper[4624]: I1008 15:29:02.862108 4624 generic.go:334] "Generic (PLEG): container finished" podID="63ce4953-c0bd-4344-ad9a-5f9510705792" containerID="6f5342f957954d7a228d193c67fab9791f20648d84626c188208335a7009818f" exitCode=0
Oct 08 15:29:02 crc kubenswrapper[4624]: I1008 15:29:02.862262 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsq7r" event={"ID":"63ce4953-c0bd-4344-ad9a-5f9510705792","Type":"ContainerDied","Data":"6f5342f957954d7a228d193c67fab9791f20648d84626c188208335a7009818f"}
Oct 08 15:29:04 crc kubenswrapper[4624]: I1008 15:29:04.885861 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsq7r" event={"ID":"63ce4953-c0bd-4344-ad9a-5f9510705792","Type":"ContainerStarted","Data":"8017e2a137a5283a56f84cef2037910674b6ce2658c92fa18aad5892c5efc07a"}
Oct 08 15:29:06 crc kubenswrapper[4624]: I1008 15:29:06.906480 4624 generic.go:334] "Generic (PLEG): container finished" podID="63ce4953-c0bd-4344-ad9a-5f9510705792" containerID="8017e2a137a5283a56f84cef2037910674b6ce2658c92fa18aad5892c5efc07a" exitCode=0
Oct 08 15:29:06 crc kubenswrapper[4624]: I1008 15:29:06.906553 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsq7r" event={"ID":"63ce4953-c0bd-4344-ad9a-5f9510705792","Type":"ContainerDied","Data":"8017e2a137a5283a56f84cef2037910674b6ce2658c92fa18aad5892c5efc07a"}
Oct 08 15:29:08 crc kubenswrapper[4624]: I1008 15:29:08.925872 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsq7r" event={"ID":"63ce4953-c0bd-4344-ad9a-5f9510705792","Type":"ContainerStarted","Data":"4da1aff0973cbddea90dee60d04c6157438197cf00e356e9d93691ff285a762a"}
Oct 08 15:29:08 crc kubenswrapper[4624]: I1008 15:29:08.957542 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fsq7r" podStartSLOduration=5.382364297 podStartE2EDuration="9.949548472s" podCreationTimestamp="2025-10-08 15:28:59 +0000 UTC" firstStartedPulling="2025-10-08 15:29:02.864131793 +0000 UTC m=+3968.015066860" lastFinishedPulling="2025-10-08 15:29:07.431315958 +0000 UTC m=+3972.582251035" observedRunningTime="2025-10-08 15:29:08.944278946 +0000 UTC m=+3974.095214023" watchObservedRunningTime="2025-10-08 15:29:08.949548472 +0000 UTC m=+3974.100483549"
Oct 08 15:29:10 crc kubenswrapper[4624]: I1008 15:29:10.321506 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fsq7r"
Oct 08 15:29:10 crc kubenswrapper[4624]: I1008 15:29:10.321844 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fsq7r"
Oct 08 15:29:11 crc kubenswrapper[4624]: I1008 15:29:11.370576 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fsq7r" podUID="63ce4953-c0bd-4344-ad9a-5f9510705792" containerName="registry-server" probeResult="failure" output=<
Oct 08 15:29:11 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 15:29:11 crc kubenswrapper[4624]: >
Oct 08 15:29:20 crc kubenswrapper[4624]: I1008 15:29:20.474474 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fsq7r"
Oct 08 15:29:20 crc kubenswrapper[4624]: I1008 15:29:20.534382 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fsq7r"
Oct 08 15:29:20 crc kubenswrapper[4624]: I1008 15:29:20.774886 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fsq7r"]
Oct 08 15:29:22 crc kubenswrapper[4624]: I1008 15:29:22.047243 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fsq7r" podUID="63ce4953-c0bd-4344-ad9a-5f9510705792" containerName="registry-server" containerID="cri-o://4da1aff0973cbddea90dee60d04c6157438197cf00e356e9d93691ff285a762a" gracePeriod=2
Oct 08 15:29:23 crc kubenswrapper[4624]: I1008 15:29:23.095840 4624 generic.go:334] "Generic (PLEG): container finished" podID="63ce4953-c0bd-4344-ad9a-5f9510705792" containerID="4da1aff0973cbddea90dee60d04c6157438197cf00e356e9d93691ff285a762a" exitCode=0
Oct 08 15:29:23 crc kubenswrapper[4624]: I1008 15:29:23.096209 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsq7r" event={"ID":"63ce4953-c0bd-4344-ad9a-5f9510705792","Type":"ContainerDied","Data":"4da1aff0973cbddea90dee60d04c6157438197cf00e356e9d93691ff285a762a"}
Oct 08 15:29:23 crc kubenswrapper[4624]: I1008 15:29:23.343574 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fsq7r"
Oct 08 15:29:23 crc kubenswrapper[4624]: I1008 15:29:23.497224 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ldjt\" (UniqueName: \"kubernetes.io/projected/63ce4953-c0bd-4344-ad9a-5f9510705792-kube-api-access-8ldjt\") pod \"63ce4953-c0bd-4344-ad9a-5f9510705792\" (UID: \"63ce4953-c0bd-4344-ad9a-5f9510705792\") "
Oct 08 15:29:23 crc kubenswrapper[4624]: I1008 15:29:23.497338 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63ce4953-c0bd-4344-ad9a-5f9510705792-utilities\") pod \"63ce4953-c0bd-4344-ad9a-5f9510705792\" (UID: \"63ce4953-c0bd-4344-ad9a-5f9510705792\") "
Oct 08 15:29:23 crc kubenswrapper[4624]: I1008 15:29:23.497658 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63ce4953-c0bd-4344-ad9a-5f9510705792-catalog-content\") pod \"63ce4953-c0bd-4344-ad9a-5f9510705792\" (UID: \"63ce4953-c0bd-4344-ad9a-5f9510705792\") "
Oct 08 15:29:23 crc kubenswrapper[4624]: I1008 15:29:23.502112 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63ce4953-c0bd-4344-ad9a-5f9510705792-utilities" (OuterVolumeSpecName: "utilities") pod "63ce4953-c0bd-4344-ad9a-5f9510705792" (UID: "63ce4953-c0bd-4344-ad9a-5f9510705792"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 15:29:23 crc kubenswrapper[4624]: I1008 15:29:23.530717 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63ce4953-c0bd-4344-ad9a-5f9510705792-kube-api-access-8ldjt" (OuterVolumeSpecName: "kube-api-access-8ldjt") pod "63ce4953-c0bd-4344-ad9a-5f9510705792" (UID: "63ce4953-c0bd-4344-ad9a-5f9510705792"). InnerVolumeSpecName "kube-api-access-8ldjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 15:29:23 crc kubenswrapper[4624]: I1008 15:29:23.590402 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63ce4953-c0bd-4344-ad9a-5f9510705792-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63ce4953-c0bd-4344-ad9a-5f9510705792" (UID: "63ce4953-c0bd-4344-ad9a-5f9510705792"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 15:29:23 crc kubenswrapper[4624]: I1008 15:29:23.600523 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63ce4953-c0bd-4344-ad9a-5f9510705792-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 15:29:23 crc kubenswrapper[4624]: I1008 15:29:23.600681 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ldjt\" (UniqueName: \"kubernetes.io/projected/63ce4953-c0bd-4344-ad9a-5f9510705792-kube-api-access-8ldjt\") on node \"crc\" DevicePath \"\""
Oct 08 15:29:23 crc kubenswrapper[4624]: I1008 15:29:23.600765 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63ce4953-c0bd-4344-ad9a-5f9510705792-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 15:29:24 crc kubenswrapper[4624]: I1008 15:29:24.109348 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsq7r" event={"ID":"63ce4953-c0bd-4344-ad9a-5f9510705792","Type":"ContainerDied","Data":"f85bc7b10eab27002e9d3ff8651eaa40a64be2a2a87329e2c90ccc70979bc9ae"}
Oct 08 15:29:24 crc kubenswrapper[4624]: I1008 15:29:24.109563 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fsq7r"
Oct 08 15:29:24 crc kubenswrapper[4624]: I1008 15:29:24.110269 4624 scope.go:117] "RemoveContainer" containerID="4da1aff0973cbddea90dee60d04c6157438197cf00e356e9d93691ff285a762a"
Oct 08 15:29:24 crc kubenswrapper[4624]: I1008 15:29:24.166599 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fsq7r"]
Oct 08 15:29:24 crc kubenswrapper[4624]: I1008 15:29:24.173456 4624 scope.go:117] "RemoveContainer" containerID="8017e2a137a5283a56f84cef2037910674b6ce2658c92fa18aad5892c5efc07a"
Oct 08 15:29:24 crc kubenswrapper[4624]: I1008 15:29:24.177338 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fsq7r"]
Oct 08 15:29:24 crc kubenswrapper[4624]: I1008 15:29:24.206806 4624 scope.go:117] "RemoveContainer" containerID="6f5342f957954d7a228d193c67fab9791f20648d84626c188208335a7009818f"
Oct 08 15:29:25 crc kubenswrapper[4624]: I1008 15:29:25.477964 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63ce4953-c0bd-4344-ad9a-5f9510705792" path="/var/lib/kubelet/pods/63ce4953-c0bd-4344-ad9a-5f9510705792/volumes"
Oct 08 15:30:00 crc kubenswrapper[4624]: I1008 15:30:00.889938 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7"]
Oct 08 15:30:00 crc kubenswrapper[4624]: E1008 15:30:00.901834 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ce4953-c0bd-4344-ad9a-5f9510705792" containerName="extract-content"
Oct 08 15:30:00 crc kubenswrapper[4624]: I1008 15:30:00.901873 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ce4953-c0bd-4344-ad9a-5f9510705792" containerName="extract-content"
Oct 08 15:30:00 crc kubenswrapper[4624]: E1008 15:30:00.901903 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ce4953-c0bd-4344-ad9a-5f9510705792" containerName="registry-server"
Oct 08 15:30:00 crc kubenswrapper[4624]: I1008 15:30:00.901909 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ce4953-c0bd-4344-ad9a-5f9510705792" containerName="registry-server"
Oct 08 15:30:00 crc kubenswrapper[4624]: E1008 15:30:00.901920 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ce4953-c0bd-4344-ad9a-5f9510705792" containerName="extract-utilities"
Oct 08 15:30:00 crc kubenswrapper[4624]: I1008 15:30:00.901927 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ce4953-c0bd-4344-ad9a-5f9510705792" containerName="extract-utilities"
Oct 08 15:30:00 crc kubenswrapper[4624]: I1008 15:30:00.903807 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="63ce4953-c0bd-4344-ad9a-5f9510705792" containerName="registry-server"
Oct 08 15:30:00 crc kubenswrapper[4624]: I1008 15:30:00.924121 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7"
Oct 08 15:30:00 crc kubenswrapper[4624]: I1008 15:30:00.935827 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt25p\" (UniqueName: \"kubernetes.io/projected/90596db7-9869-43ee-bebd-750ae0727cc8-kube-api-access-kt25p\") pod \"collect-profiles-29332290-dr8n7\" (UID: \"90596db7-9869-43ee-bebd-750ae0727cc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7"
Oct 08 15:30:00 crc kubenswrapper[4624]: I1008 15:30:00.936040 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90596db7-9869-43ee-bebd-750ae0727cc8-secret-volume\") pod \"collect-profiles-29332290-dr8n7\" (UID: \"90596db7-9869-43ee-bebd-750ae0727cc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7"
Oct 08 15:30:00 crc kubenswrapper[4624]: I1008 15:30:00.936433 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90596db7-9869-43ee-bebd-750ae0727cc8-config-volume\") pod \"collect-profiles-29332290-dr8n7\" (UID: \"90596db7-9869-43ee-bebd-750ae0727cc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7"
Oct 08 15:30:00 crc kubenswrapper[4624]: I1008 15:30:00.956111 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Oct 08 15:30:00 crc kubenswrapper[4624]: I1008 15:30:00.956112 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 08 15:30:01 crc kubenswrapper[4624]: I1008 15:30:01.039186 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt25p\" (UniqueName: \"kubernetes.io/projected/90596db7-9869-43ee-bebd-750ae0727cc8-kube-api-access-kt25p\") pod \"collect-profiles-29332290-dr8n7\" (UID: \"90596db7-9869-43ee-bebd-750ae0727cc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7"
Oct 08 15:30:01 crc kubenswrapper[4624]: I1008 15:30:01.039852 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90596db7-9869-43ee-bebd-750ae0727cc8-secret-volume\") pod \"collect-profiles-29332290-dr8n7\" (UID: \"90596db7-9869-43ee-bebd-750ae0727cc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7"
Oct 08 15:30:01 crc kubenswrapper[4624]: I1008 15:30:01.040065 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90596db7-9869-43ee-bebd-750ae0727cc8-config-volume\") pod \"collect-profiles-29332290-dr8n7\" (UID: \"90596db7-9869-43ee-bebd-750ae0727cc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7"
Oct 08 15:30:01 crc kubenswrapper[4624]: I1008 15:30:01.044876 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7"]
Oct 08 15:30:01 crc kubenswrapper[4624]: I1008 15:30:01.050686 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90596db7-9869-43ee-bebd-750ae0727cc8-config-volume\") pod \"collect-profiles-29332290-dr8n7\" (UID: \"90596db7-9869-43ee-bebd-750ae0727cc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7"
Oct 08 15:30:01 crc kubenswrapper[4624]: I1008 15:30:01.067990 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90596db7-9869-43ee-bebd-750ae0727cc8-secret-volume\") pod \"collect-profiles-29332290-dr8n7\" (UID: \"90596db7-9869-43ee-bebd-750ae0727cc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7"
Oct 08 15:30:01 crc kubenswrapper[4624]: I1008 15:30:01.069853 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt25p\" (UniqueName: \"kubernetes.io/projected/90596db7-9869-43ee-bebd-750ae0727cc8-kube-api-access-kt25p\") pod \"collect-profiles-29332290-dr8n7\" (UID: \"90596db7-9869-43ee-bebd-750ae0727cc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7"
Oct 08 15:30:01 crc kubenswrapper[4624]: I1008 15:30:01.281236 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7"
Oct 08 15:30:02 crc kubenswrapper[4624]: I1008 15:30:02.982427 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7"]
Oct 08 15:30:03 crc kubenswrapper[4624]: I1008 15:30:03.533963 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7" event={"ID":"90596db7-9869-43ee-bebd-750ae0727cc8","Type":"ContainerStarted","Data":"ed1fd668b44334640a6f5a3f0f901fdb4031fcb6efa44287095edf5cd64d8237"}
Oct 08 15:30:03 crc kubenswrapper[4624]: I1008 15:30:03.535111 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7" event={"ID":"90596db7-9869-43ee-bebd-750ae0727cc8","Type":"ContainerStarted","Data":"b5fb4fcf9ca07b9518f743ed4c65fb9eed3bf83b4130e7d9802b557f08ba857e"}
Oct 08 15:30:03 crc kubenswrapper[4624]: I1008 15:30:03.561105 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7" podStartSLOduration=3.559120412 podStartE2EDuration="3.559120412s" podCreationTimestamp="2025-10-08 15:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 15:30:03.552816836 +0000 UTC m=+4028.703751913" watchObservedRunningTime="2025-10-08 15:30:03.559120412 +0000 UTC m=+4028.710055489"
Oct 08 15:30:04 crc kubenswrapper[4624]: I1008 15:30:04.544824 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7" event={"ID":"90596db7-9869-43ee-bebd-750ae0727cc8","Type":"ContainerDied","Data":"ed1fd668b44334640a6f5a3f0f901fdb4031fcb6efa44287095edf5cd64d8237"}
Oct 08 15:30:04 crc kubenswrapper[4624]: I1008 15:30:04.546334 4624 generic.go:334] "Generic (PLEG): container finished" podID="90596db7-9869-43ee-bebd-750ae0727cc8" containerID="ed1fd668b44334640a6f5a3f0f901fdb4031fcb6efa44287095edf5cd64d8237" exitCode=0
Oct 08 15:30:06 crc kubenswrapper[4624]: I1008 15:30:06.123336 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7"
Oct 08 15:30:06 crc kubenswrapper[4624]: I1008 15:30:06.261697 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90596db7-9869-43ee-bebd-750ae0727cc8-secret-volume\") pod \"90596db7-9869-43ee-bebd-750ae0727cc8\" (UID: \"90596db7-9869-43ee-bebd-750ae0727cc8\") "
Oct 08 15:30:06 crc kubenswrapper[4624]: I1008 15:30:06.261979 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90596db7-9869-43ee-bebd-750ae0727cc8-config-volume\") pod \"90596db7-9869-43ee-bebd-750ae0727cc8\" (UID: \"90596db7-9869-43ee-bebd-750ae0727cc8\") "
Oct 08 15:30:06 crc kubenswrapper[4624]: I1008 15:30:06.262004 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt25p\" (UniqueName: \"kubernetes.io/projected/90596db7-9869-43ee-bebd-750ae0727cc8-kube-api-access-kt25p\") pod \"90596db7-9869-43ee-bebd-750ae0727cc8\" (UID: \"90596db7-9869-43ee-bebd-750ae0727cc8\") "
Oct 08 15:30:06 crc kubenswrapper[4624]: I1008 15:30:06.268815 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90596db7-9869-43ee-bebd-750ae0727cc8-config-volume" (OuterVolumeSpecName: "config-volume") pod "90596db7-9869-43ee-bebd-750ae0727cc8" (UID: "90596db7-9869-43ee-bebd-750ae0727cc8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 15:30:06 crc kubenswrapper[4624]: I1008 15:30:06.286982 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90596db7-9869-43ee-bebd-750ae0727cc8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "90596db7-9869-43ee-bebd-750ae0727cc8" (UID: "90596db7-9869-43ee-bebd-750ae0727cc8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 15:30:06 crc kubenswrapper[4624]: I1008 15:30:06.287036 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90596db7-9869-43ee-bebd-750ae0727cc8-kube-api-access-kt25p" (OuterVolumeSpecName: "kube-api-access-kt25p") pod "90596db7-9869-43ee-bebd-750ae0727cc8" (UID: "90596db7-9869-43ee-bebd-750ae0727cc8"). InnerVolumeSpecName "kube-api-access-kt25p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 15:30:06 crc kubenswrapper[4624]: I1008 15:30:06.364208 4624 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90596db7-9869-43ee-bebd-750ae0727cc8-config-volume\") on node \"crc\" DevicePath \"\""
Oct 08 15:30:06 crc kubenswrapper[4624]: I1008 15:30:06.364252 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt25p\" (UniqueName: \"kubernetes.io/projected/90596db7-9869-43ee-bebd-750ae0727cc8-kube-api-access-kt25p\") on node \"crc\" DevicePath \"\""
Oct 08 15:30:06 crc kubenswrapper[4624]: I1008 15:30:06.364263 4624 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90596db7-9869-43ee-bebd-750ae0727cc8-secret-volume\") on node \"crc\" DevicePath \"\""
Oct 08 15:30:06 crc kubenswrapper[4624]: I1008 15:30:06.581794 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7" event={"ID":"90596db7-9869-43ee-bebd-750ae0727cc8","Type":"ContainerDied","Data":"b5fb4fcf9ca07b9518f743ed4c65fb9eed3bf83b4130e7d9802b557f08ba857e"}
Oct 08 15:30:06 crc kubenswrapper[4624]: I1008 15:30:06.581854 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7"
Oct 08 15:30:06 crc kubenswrapper[4624]: I1008 15:30:06.581872 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5fb4fcf9ca07b9518f743ed4c65fb9eed3bf83b4130e7d9802b557f08ba857e"
Oct 08 15:30:07 crc kubenswrapper[4624]: I1008 15:30:07.239493 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw"]
Oct 08 15:30:07 crc kubenswrapper[4624]: I1008 15:30:07.248865 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332245-prrxw"]
Oct 08 15:30:07 crc kubenswrapper[4624]: I1008 15:30:07.485797 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f42483b-c604-404c-8921-4157c19584bb" path="/var/lib/kubelet/pods/8f42483b-c604-404c-8921-4157c19584bb/volumes"
Oct 08 15:30:43 crc kubenswrapper[4624]: I1008 15:30:43.970317 4624 scope.go:117] "RemoveContainer" containerID="891f491334ccca991cd1a055ee26dd97857de25b8f6d243758140e97dd0f5825"
Oct 08 15:31:00 crc kubenswrapper[4624]: I1008 15:31:00.079305 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 15:31:00 crc kubenswrapper[4624]: I1008 15:31:00.082105 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 15:31:22 crc kubenswrapper[4624]: I1008 15:31:22.915392 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bdqrt"]
Oct 08 15:31:22 crc kubenswrapper[4624]: E1008 15:31:22.920407 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90596db7-9869-43ee-bebd-750ae0727cc8" containerName="collect-profiles"
Oct 08 15:31:22 crc kubenswrapper[4624]: I1008 15:31:22.920471 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="90596db7-9869-43ee-bebd-750ae0727cc8" containerName="collect-profiles"
Oct 08 15:31:22 crc kubenswrapper[4624]: I1008 15:31:22.923086 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="90596db7-9869-43ee-bebd-750ae0727cc8" containerName="collect-profiles"
Oct 08 15:31:22 crc kubenswrapper[4624]: I1008 15:31:22.930897 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bdqrt"
Oct 08 15:31:22 crc kubenswrapper[4624]: I1008 15:31:22.992284 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b8af7b7-4e36-492f-8569-28e2ffe4c2ff-catalog-content\") pod \"redhat-operators-bdqrt\" (UID: \"4b8af7b7-4e36-492f-8569-28e2ffe4c2ff\") " pod="openshift-marketplace/redhat-operators-bdqrt"
Oct 08 15:31:22 crc kubenswrapper[4624]: I1008 15:31:22.992374 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pdzj\" (UniqueName: \"kubernetes.io/projected/4b8af7b7-4e36-492f-8569-28e2ffe4c2ff-kube-api-access-9pdzj\") pod \"redhat-operators-bdqrt\" (UID: \"4b8af7b7-4e36-492f-8569-28e2ffe4c2ff\") " pod="openshift-marketplace/redhat-operators-bdqrt"
Oct 08 15:31:22 crc kubenswrapper[4624]: I1008 15:31:22.992588 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b8af7b7-4e36-492f-8569-28e2ffe4c2ff-utilities\") pod \"redhat-operators-bdqrt\" (UID: \"4b8af7b7-4e36-492f-8569-28e2ffe4c2ff\") " pod="openshift-marketplace/redhat-operators-bdqrt"
Oct 08 15:31:23 crc kubenswrapper[4624]: I1008 15:31:23.094856 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b8af7b7-4e36-492f-8569-28e2ffe4c2ff-catalog-content\") pod \"redhat-operators-bdqrt\" (UID: \"4b8af7b7-4e36-492f-8569-28e2ffe4c2ff\") " pod="openshift-marketplace/redhat-operators-bdqrt"
Oct 08 15:31:23 crc kubenswrapper[4624]: I1008 15:31:23.095157 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pdzj\" (UniqueName: \"kubernetes.io/projected/4b8af7b7-4e36-492f-8569-28e2ffe4c2ff-kube-api-access-9pdzj\") pod \"redhat-operators-bdqrt\" (UID: \"4b8af7b7-4e36-492f-8569-28e2ffe4c2ff\") " pod="openshift-marketplace/redhat-operators-bdqrt"
Oct 08 15:31:23 crc kubenswrapper[4624]: I1008 15:31:23.095255 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b8af7b7-4e36-492f-8569-28e2ffe4c2ff-utilities\") pod \"redhat-operators-bdqrt\" (UID: \"4b8af7b7-4e36-492f-8569-28e2ffe4c2ff\") " pod="openshift-marketplace/redhat-operators-bdqrt"
Oct 08 15:31:23 crc kubenswrapper[4624]: I1008 15:31:23.099944 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b8af7b7-4e36-492f-8569-28e2ffe4c2ff-catalog-content\") pod \"redhat-operators-bdqrt\" (UID: \"4b8af7b7-4e36-492f-8569-28e2ffe4c2ff\") " pod="openshift-marketplace/redhat-operators-bdqrt"
Oct 08 15:31:23 crc kubenswrapper[4624]: I1008 15:31:23.100483 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b8af7b7-4e36-492f-8569-28e2ffe4c2ff-utilities\") pod \"redhat-operators-bdqrt\" (UID: \"4b8af7b7-4e36-492f-8569-28e2ffe4c2ff\") " pod="openshift-marketplace/redhat-operators-bdqrt"
Oct 08 15:31:23 crc kubenswrapper[4624]: I1008 15:31:23.101526 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bdqrt"]
Oct 08 15:31:23 crc kubenswrapper[4624]: I1008 15:31:23.141042 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pdzj\" (UniqueName: \"kubernetes.io/projected/4b8af7b7-4e36-492f-8569-28e2ffe4c2ff-kube-api-access-9pdzj\") pod \"redhat-operators-bdqrt\" (UID: \"4b8af7b7-4e36-492f-8569-28e2ffe4c2ff\") " pod="openshift-marketplace/redhat-operators-bdqrt"
Oct 08 15:31:23 crc kubenswrapper[4624]: I1008 15:31:23.272603 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bdqrt"
Oct 08 15:31:24 crc kubenswrapper[4624]: I1008 15:31:24.703494 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bdqrt"]
Oct 08 15:31:25 crc kubenswrapper[4624]: I1008 15:31:25.354436 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdqrt" event={"ID":"4b8af7b7-4e36-492f-8569-28e2ffe4c2ff","Type":"ContainerDied","Data":"f2eedcc5c191a0205b1fe622a99357c2646a2a09fa58e0b2faecfb23b9dc22ba"}
Oct 08 15:31:25 crc kubenswrapper[4624]: I1008 15:31:25.355208 4624 generic.go:334] "Generic (PLEG): container finished" podID="4b8af7b7-4e36-492f-8569-28e2ffe4c2ff" containerID="f2eedcc5c191a0205b1fe622a99357c2646a2a09fa58e0b2faecfb23b9dc22ba" exitCode=0
Oct 08 15:31:25 crc kubenswrapper[4624]: I1008 15:31:25.355306 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdqrt" event={"ID":"4b8af7b7-4e36-492f-8569-28e2ffe4c2ff","Type":"ContainerStarted","Data":"7bf657230e22281bcf949bfa6f7231bd8c7dd0abd9a080a1818cc985d2bb2ea3"}
Oct 08 15:31:25 crc kubenswrapper[4624]: I1008 15:31:25.361400 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 08 15:31:27 crc kubenswrapper[4624]: I1008 15:31:27.374623 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdqrt" event={"ID":"4b8af7b7-4e36-492f-8569-28e2ffe4c2ff","Type":"ContainerStarted","Data":"e74bd164058bd7b1fb0817d7b728f2706be801bc4207b12ead62af143cf97062"}
Oct 08 15:31:30 crc kubenswrapper[4624]: I1008 15:31:30.105738 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 15:31:30 crc kubenswrapper[4624]: I1008 15:31:30.106653 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 15:31:34 crc kubenswrapper[4624]: E1008 15:31:34.651378 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b8af7b7_4e36_492f_8569_28e2ffe4c2ff.slice/crio-e74bd164058bd7b1fb0817d7b728f2706be801bc4207b12ead62af143cf97062.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b8af7b7_4e36_492f_8569_28e2ffe4c2ff.slice/crio-conmon-e74bd164058bd7b1fb0817d7b728f2706be801bc4207b12ead62af143cf97062.scope\": RecentStats: unable to find data in memory cache]"
Oct 08 15:31:35 crc kubenswrapper[4624]: I1008 15:31:35.451504 4624 generic.go:334] "Generic (PLEG): container finished" podID="4b8af7b7-4e36-492f-8569-28e2ffe4c2ff" containerID="e74bd164058bd7b1fb0817d7b728f2706be801bc4207b12ead62af143cf97062" exitCode=0
Oct 08 15:31:35 crc kubenswrapper[4624]: I1008 15:31:35.451881 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdqrt" event={"ID":"4b8af7b7-4e36-492f-8569-28e2ffe4c2ff","Type":"ContainerDied","Data":"e74bd164058bd7b1fb0817d7b728f2706be801bc4207b12ead62af143cf97062"}
Oct 08 15:31:36 crc kubenswrapper[4624]: I1008 15:31:36.464390 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdqrt" event={"ID":"4b8af7b7-4e36-492f-8569-28e2ffe4c2ff","Type":"ContainerStarted","Data":"28f92cc3cefe43ed42b102631bafb0ba4762abda144cea448d910173fe7006ff"}
Oct 08 15:31:36 crc kubenswrapper[4624]: I1008 15:31:36.490021 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bdqrt" podStartSLOduration=3.653575373 podStartE2EDuration="14.48855058s" podCreationTimestamp="2025-10-08 15:31:22 +0000 UTC" firstStartedPulling="2025-10-08 15:31:25.355897574 +0000 UTC m=+4110.506832651" lastFinishedPulling="2025-10-08 15:31:36.190872781 +0000 UTC m=+4121.341807858" observedRunningTime="2025-10-08 15:31:36.487671877 +0000 UTC m=+4121.638606954" watchObservedRunningTime="2025-10-08 15:31:36.48855058 +0000 UTC m=+4121.639485657"
Oct 08 15:31:43 crc kubenswrapper[4624]: I1008 15:31:43.273213 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bdqrt"
Oct 08 15:31:43 crc kubenswrapper[4624]: I1008 15:31:43.274815 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bdqrt"
Oct 08 15:31:44 crc kubenswrapper[4624]: I1008 15:31:44.323792 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bdqrt" podUID="4b8af7b7-4e36-492f-8569-28e2ffe4c2ff" containerName="registry-server" probeResult="failure" output=<
Oct 08 15:31:44 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 15:31:44 crc kubenswrapper[4624]: >
Oct 08 15:31:54 crc kubenswrapper[4624]: I1008 15:31:54.368112 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bdqrt" podUID="4b8af7b7-4e36-492f-8569-28e2ffe4c2ff" containerName="registry-server" probeResult="failure" output=<
Oct 08 15:31:54 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 15:31:54 crc kubenswrapper[4624]: >
Oct 08 15:32:00 crc kubenswrapper[4624]: I1008 15:32:00.077065 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 15:32:00 crc kubenswrapper[4624]: I1008 15:32:00.080849 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 15:32:00 crc kubenswrapper[4624]: I1008 15:32:00.081938 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8"
Oct 08 15:32:00 crc kubenswrapper[4624]: I1008 15:32:00.084628 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f436fa71e649ae33432d5cc5ffd7f1d451c5e440a8bbbd964c9f92129f31ff9"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 08 15:32:00 crc kubenswrapper[4624]: I1008 15:32:00.084847 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://1f436fa71e649ae33432d5cc5ffd7f1d451c5e440a8bbbd964c9f92129f31ff9" gracePeriod=600
Oct 08 15:32:00 crc kubenswrapper[4624]: I1008 15:32:00.720150 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"1f436fa71e649ae33432d5cc5ffd7f1d451c5e440a8bbbd964c9f92129f31ff9"}
Oct 08 15:32:00 crc kubenswrapper[4624]: I1008 15:32:00.720671 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="1f436fa71e649ae33432d5cc5ffd7f1d451c5e440a8bbbd964c9f92129f31ff9" exitCode=0
Oct 08 15:32:00 crc kubenswrapper[4624]: I1008 15:32:00.723382 4624 scope.go:117] "RemoveContainer" containerID="ff5736f0bef7a35604d84451504d0862c9aa144465d935906f78899cf3e701df"
Oct 08 15:32:01 crc kubenswrapper[4624]: I1008 15:32:01.733008 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b"}
Oct 08 15:32:04 crc kubenswrapper[4624]: I1008 15:32:04.362367 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bdqrt" podUID="4b8af7b7-4e36-492f-8569-28e2ffe4c2ff" containerName="registry-server" probeResult="failure" output=<
Oct 08 15:32:04 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 15:32:04 crc kubenswrapper[4624]: >
Oct 08 15:32:14 crc kubenswrapper[4624]: I1008 15:32:14.322186 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bdqrt" podUID="4b8af7b7-4e36-492f-8569-28e2ffe4c2ff" containerName="registry-server" probeResult="failure" output=<
Oct 08 15:32:14 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 15:32:14 crc kubenswrapper[4624]: >
Oct 08 15:32:23 crc kubenswrapper[4624]: I1008 15:32:23.393684 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bdqrt"
Oct 08 15:32:23 crc kubenswrapper[4624]: I1008 15:32:23.464454 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bdqrt"
Oct 08 15:32:24 crc kubenswrapper[4624]: I1008 15:32:24.104783 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bdqrt"]
Oct 08 15:32:24 crc kubenswrapper[4624]: I1008 15:32:24.928084 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bdqrt" podUID="4b8af7b7-4e36-492f-8569-28e2ffe4c2ff" containerName="registry-server" containerID="cri-o://28f92cc3cefe43ed42b102631bafb0ba4762abda144cea448d910173fe7006ff" gracePeriod=2
Oct 08 15:32:25 crc kubenswrapper[4624]: I1008 15:32:25.942000 4624 generic.go:334] "Generic (PLEG): container finished" podID="4b8af7b7-4e36-492f-8569-28e2ffe4c2ff" containerID="28f92cc3cefe43ed42b102631bafb0ba4762abda144cea448d910173fe7006ff" exitCode=0
Oct 08 15:32:25 crc kubenswrapper[4624]: I1008 15:32:25.942075 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdqrt" event={"ID":"4b8af7b7-4e36-492f-8569-28e2ffe4c2ff","Type":"ContainerDied","Data":"28f92cc3cefe43ed42b102631bafb0ba4762abda144cea448d910173fe7006ff"}
Oct 08 15:32:26 crc kubenswrapper[4624]: I1008 15:32:26.447119 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bdqrt"
Oct 08 15:32:26 crc kubenswrapper[4624]: I1008 15:32:26.539969 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pdzj\" (UniqueName: \"kubernetes.io/projected/4b8af7b7-4e36-492f-8569-28e2ffe4c2ff-kube-api-access-9pdzj\") pod \"4b8af7b7-4e36-492f-8569-28e2ffe4c2ff\" (UID: \"4b8af7b7-4e36-492f-8569-28e2ffe4c2ff\") "
Oct 08 15:32:26 crc kubenswrapper[4624]: I1008 15:32:26.540185 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b8af7b7-4e36-492f-8569-28e2ffe4c2ff-utilities\") pod \"4b8af7b7-4e36-492f-8569-28e2ffe4c2ff\" (UID: \"4b8af7b7-4e36-492f-8569-28e2ffe4c2ff\") "
Oct 08 15:32:26 crc kubenswrapper[4624]: I1008 15:32:26.540222 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b8af7b7-4e36-492f-8569-28e2ffe4c2ff-catalog-content\") pod \"4b8af7b7-4e36-492f-8569-28e2ffe4c2ff\" (UID: \"4b8af7b7-4e36-492f-8569-28e2ffe4c2ff\") "
Oct 08 15:32:26 crc kubenswrapper[4624]: I1008 15:32:26.547418 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b8af7b7-4e36-492f-8569-28e2ffe4c2ff-utilities" (OuterVolumeSpecName: "utilities") pod "4b8af7b7-4e36-492f-8569-28e2ffe4c2ff" (UID: "4b8af7b7-4e36-492f-8569-28e2ffe4c2ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 15:32:26 crc kubenswrapper[4624]: I1008 15:32:26.559443 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b8af7b7-4e36-492f-8569-28e2ffe4c2ff-kube-api-access-9pdzj" (OuterVolumeSpecName: "kube-api-access-9pdzj") pod "4b8af7b7-4e36-492f-8569-28e2ffe4c2ff" (UID: "4b8af7b7-4e36-492f-8569-28e2ffe4c2ff"). InnerVolumeSpecName "kube-api-access-9pdzj".
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:32:26 crc kubenswrapper[4624]: I1008 15:32:26.643616 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pdzj\" (UniqueName: \"kubernetes.io/projected/4b8af7b7-4e36-492f-8569-28e2ffe4c2ff-kube-api-access-9pdzj\") on node \"crc\" DevicePath \"\"" Oct 08 15:32:26 crc kubenswrapper[4624]: I1008 15:32:26.643664 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b8af7b7-4e36-492f-8569-28e2ffe4c2ff-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 15:32:26 crc kubenswrapper[4624]: I1008 15:32:26.759445 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b8af7b7-4e36-492f-8569-28e2ffe4c2ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b8af7b7-4e36-492f-8569-28e2ffe4c2ff" (UID: "4b8af7b7-4e36-492f-8569-28e2ffe4c2ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:32:26 crc kubenswrapper[4624]: I1008 15:32:26.848270 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b8af7b7-4e36-492f-8569-28e2ffe4c2ff-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 15:32:26 crc kubenswrapper[4624]: I1008 15:32:26.956593 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdqrt" event={"ID":"4b8af7b7-4e36-492f-8569-28e2ffe4c2ff","Type":"ContainerDied","Data":"7bf657230e22281bcf949bfa6f7231bd8c7dd0abd9a080a1818cc985d2bb2ea3"} Oct 08 15:32:26 crc kubenswrapper[4624]: I1008 15:32:26.956664 4624 scope.go:117] "RemoveContainer" containerID="28f92cc3cefe43ed42b102631bafb0ba4762abda144cea448d910173fe7006ff" Oct 08 15:32:26 crc kubenswrapper[4624]: I1008 15:32:26.958178 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bdqrt" Oct 08 15:32:27 crc kubenswrapper[4624]: I1008 15:32:27.000140 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bdqrt"] Oct 08 15:32:27 crc kubenswrapper[4624]: I1008 15:32:27.014282 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bdqrt"] Oct 08 15:32:27 crc kubenswrapper[4624]: I1008 15:32:27.015249 4624 scope.go:117] "RemoveContainer" containerID="e74bd164058bd7b1fb0817d7b728f2706be801bc4207b12ead62af143cf97062" Oct 08 15:32:27 crc kubenswrapper[4624]: I1008 15:32:27.046501 4624 scope.go:117] "RemoveContainer" containerID="f2eedcc5c191a0205b1fe622a99357c2646a2a09fa58e0b2faecfb23b9dc22ba" Oct 08 15:32:27 crc kubenswrapper[4624]: I1008 15:32:27.478290 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b8af7b7-4e36-492f-8569-28e2ffe4c2ff" path="/var/lib/kubelet/pods/4b8af7b7-4e36-492f-8569-28e2ffe4c2ff/volumes" Oct 08 15:33:11 crc kubenswrapper[4624]: I1008 15:33:11.624134 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tqvm5"] Oct 08 15:33:11 crc kubenswrapper[4624]: E1008 15:33:11.628756 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b8af7b7-4e36-492f-8569-28e2ffe4c2ff" containerName="extract-content" Oct 08 15:33:11 crc kubenswrapper[4624]: I1008 15:33:11.628786 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b8af7b7-4e36-492f-8569-28e2ffe4c2ff" containerName="extract-content" Oct 08 15:33:11 crc kubenswrapper[4624]: E1008 15:33:11.628820 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b8af7b7-4e36-492f-8569-28e2ffe4c2ff" containerName="extract-utilities" Oct 08 15:33:11 crc kubenswrapper[4624]: I1008 15:33:11.628830 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b8af7b7-4e36-492f-8569-28e2ffe4c2ff" containerName="extract-utilities" Oct 08 15:33:11 crc kubenswrapper[4624]: E1008 15:33:11.629067 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b8af7b7-4e36-492f-8569-28e2ffe4c2ff" containerName="registry-server" Oct 08 15:33:11 crc kubenswrapper[4624]: I1008 15:33:11.629100 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b8af7b7-4e36-492f-8569-28e2ffe4c2ff" containerName="registry-server" Oct 08 15:33:11 crc kubenswrapper[4624]: I1008 15:33:11.630202 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b8af7b7-4e36-492f-8569-28e2ffe4c2ff" containerName="registry-server" Oct 08 15:33:11 crc kubenswrapper[4624]: I1008 15:33:11.641656 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tqvm5" Oct 08 15:33:11 crc kubenswrapper[4624]: I1008 15:33:11.673616 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tqvm5"] Oct 08 15:33:11 crc kubenswrapper[4624]: I1008 15:33:11.830035 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91d90576-b6ef-4ee0-8c0b-5e6762216f05-utilities\") pod \"certified-operators-tqvm5\" (UID: \"91d90576-b6ef-4ee0-8c0b-5e6762216f05\") " pod="openshift-marketplace/certified-operators-tqvm5" Oct 08 15:33:11 crc kubenswrapper[4624]: I1008 15:33:11.830096 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91d90576-b6ef-4ee0-8c0b-5e6762216f05-catalog-content\") pod \"certified-operators-tqvm5\" (UID: \"91d90576-b6ef-4ee0-8c0b-5e6762216f05\") " pod="openshift-marketplace/certified-operators-tqvm5" Oct 08 15:33:11 crc kubenswrapper[4624]: I1008 15:33:11.830265 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6zqm\" (UniqueName: \"kubernetes.io/projected/91d90576-b6ef-4ee0-8c0b-5e6762216f05-kube-api-access-j6zqm\") pod \"certified-operators-tqvm5\" (UID: \"91d90576-b6ef-4ee0-8c0b-5e6762216f05\") " pod="openshift-marketplace/certified-operators-tqvm5" Oct 08 15:33:11 crc kubenswrapper[4624]: I1008 15:33:11.931571 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91d90576-b6ef-4ee0-8c0b-5e6762216f05-catalog-content\") pod \"certified-operators-tqvm5\" (UID: \"91d90576-b6ef-4ee0-8c0b-5e6762216f05\") " pod="openshift-marketplace/certified-operators-tqvm5" Oct 08 15:33:11 crc kubenswrapper[4624]: I1008 15:33:11.931823 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6zqm\" (UniqueName: \"kubernetes.io/projected/91d90576-b6ef-4ee0-8c0b-5e6762216f05-kube-api-access-j6zqm\") pod \"certified-operators-tqvm5\" (UID: \"91d90576-b6ef-4ee0-8c0b-5e6762216f05\") " pod="openshift-marketplace/certified-operators-tqvm5" Oct 08 15:33:11 crc kubenswrapper[4624]: I1008 15:33:11.931879 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91d90576-b6ef-4ee0-8c0b-5e6762216f05-utilities\") pod \"certified-operators-tqvm5\" (UID: \"91d90576-b6ef-4ee0-8c0b-5e6762216f05\") " pod="openshift-marketplace/certified-operators-tqvm5" Oct 08 15:33:11 crc kubenswrapper[4624]: I1008 15:33:11.932037 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91d90576-b6ef-4ee0-8c0b-5e6762216f05-catalog-content\") pod \"certified-operators-tqvm5\" (UID: \"91d90576-b6ef-4ee0-8c0b-5e6762216f05\") " pod="openshift-marketplace/certified-operators-tqvm5" Oct 08 15:33:11 crc kubenswrapper[4624]: I1008 15:33:11.932309 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91d90576-b6ef-4ee0-8c0b-5e6762216f05-utilities\") pod \"certified-operators-tqvm5\" (UID: \"91d90576-b6ef-4ee0-8c0b-5e6762216f05\") " pod="openshift-marketplace/certified-operators-tqvm5" Oct 08 15:33:11 crc kubenswrapper[4624]: I1008 15:33:11.960437 4624 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j6zqm\" (UniqueName: \"kubernetes.io/projected/91d90576-b6ef-4ee0-8c0b-5e6762216f05-kube-api-access-j6zqm\") pod \"certified-operators-tqvm5\" (UID: \"91d90576-b6ef-4ee0-8c0b-5e6762216f05\") " pod="openshift-marketplace/certified-operators-tqvm5" Oct 08 15:33:11 crc kubenswrapper[4624]: I1008 15:33:11.976431 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tqvm5" Oct 08 15:33:12 crc kubenswrapper[4624]: I1008 15:33:12.490604 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tqvm5"] Oct 08 15:33:13 crc kubenswrapper[4624]: I1008 15:33:13.355296 4624 generic.go:334] "Generic (PLEG): container finished" podID="91d90576-b6ef-4ee0-8c0b-5e6762216f05" containerID="5cd688c0e2ae229dd1775a5c4b7cffae8e262b7f05a2cfa3c517ad8a5bdce2d6" exitCode=0 Oct 08 15:33:13 crc kubenswrapper[4624]: I1008 15:33:13.355573 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tqvm5" event={"ID":"91d90576-b6ef-4ee0-8c0b-5e6762216f05","Type":"ContainerDied","Data":"5cd688c0e2ae229dd1775a5c4b7cffae8e262b7f05a2cfa3c517ad8a5bdce2d6"} Oct 08 15:33:13 crc kubenswrapper[4624]: I1008 15:33:13.355663 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tqvm5" event={"ID":"91d90576-b6ef-4ee0-8c0b-5e6762216f05","Type":"ContainerStarted","Data":"a7573d8566a727761b45060d00b8b426c5a2af255de54c917984cafea131d4d9"} Oct 08 15:33:15 crc kubenswrapper[4624]: I1008 15:33:15.374042 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tqvm5" event={"ID":"91d90576-b6ef-4ee0-8c0b-5e6762216f05","Type":"ContainerStarted","Data":"95dd3bed3095d5f8dfd5bc3ec94824c1a111fc3865f1e08fda13c9b60de0fbaa"} Oct 08 15:33:16 crc kubenswrapper[4624]: I1008 15:33:16.384276 4624 generic.go:334] "Generic (PLEG): container finished" podID="91d90576-b6ef-4ee0-8c0b-5e6762216f05" containerID="95dd3bed3095d5f8dfd5bc3ec94824c1a111fc3865f1e08fda13c9b60de0fbaa" exitCode=0 Oct 08 15:33:16 crc kubenswrapper[4624]: I1008 15:33:16.384369 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tqvm5" event={"ID":"91d90576-b6ef-4ee0-8c0b-5e6762216f05","Type":"ContainerDied","Data":"95dd3bed3095d5f8dfd5bc3ec94824c1a111fc3865f1e08fda13c9b60de0fbaa"} Oct 08 15:33:17 crc kubenswrapper[4624]: I1008 15:33:17.395742 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tqvm5" event={"ID":"91d90576-b6ef-4ee0-8c0b-5e6762216f05","Type":"ContainerStarted","Data":"fd1010ce1e3239a0f532630b858315498f7603f5fbfb28d6b3aa9b35b1b3e249"} Oct 08 15:33:17 crc kubenswrapper[4624]: I1008 15:33:17.422945 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tqvm5" podStartSLOduration=2.941508195 podStartE2EDuration="6.41955166s" podCreationTimestamp="2025-10-08 15:33:11 +0000 UTC" firstStartedPulling="2025-10-08 15:33:13.359882925 +0000 UTC m=+4218.510818002" lastFinishedPulling="2025-10-08 15:33:16.83792639 +0000 UTC m=+4221.988861467" observedRunningTime="2025-10-08 15:33:17.412884848 +0000 UTC m=+4222.563819925" watchObservedRunningTime="2025-10-08 15:33:17.41955166 +0000 UTC m=+4222.570486737" Oct 08 15:33:21 crc kubenswrapper[4624]: I1008 15:33:21.977376 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-tqvm5" Oct 08 15:33:21 crc kubenswrapper[4624]: I1008 15:33:21.978022 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tqvm5" Oct 08 15:33:23 crc kubenswrapper[4624]: I1008 15:33:23.057593 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tqvm5" podUID="91d90576-b6ef-4ee0-8c0b-5e6762216f05" containerName="registry-server" probeResult="failure" output=< Oct 08 15:33:23 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 15:33:23 crc kubenswrapper[4624]: > Oct 08 15:33:32 crc kubenswrapper[4624]: I1008 15:33:32.029748 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tqvm5" Oct 08 15:33:32 crc kubenswrapper[4624]: I1008 15:33:32.077617 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tqvm5" Oct 08 15:33:32 crc kubenswrapper[4624]: I1008 15:33:32.277574 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tqvm5"] Oct 08 15:33:33 crc kubenswrapper[4624]: I1008 15:33:33.534958 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tqvm5" podUID="91d90576-b6ef-4ee0-8c0b-5e6762216f05" containerName="registry-server" containerID="cri-o://fd1010ce1e3239a0f532630b858315498f7603f5fbfb28d6b3aa9b35b1b3e249" gracePeriod=2 Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.448894 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tqvm5" Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.479601 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91d90576-b6ef-4ee0-8c0b-5e6762216f05-utilities\") pod \"91d90576-b6ef-4ee0-8c0b-5e6762216f05\" (UID: \"91d90576-b6ef-4ee0-8c0b-5e6762216f05\") " Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.479674 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91d90576-b6ef-4ee0-8c0b-5e6762216f05-catalog-content\") pod \"91d90576-b6ef-4ee0-8c0b-5e6762216f05\" (UID: \"91d90576-b6ef-4ee0-8c0b-5e6762216f05\") " Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.479749 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6zqm\" (UniqueName: \"kubernetes.io/projected/91d90576-b6ef-4ee0-8c0b-5e6762216f05-kube-api-access-j6zqm\") pod \"91d90576-b6ef-4ee0-8c0b-5e6762216f05\" (UID: \"91d90576-b6ef-4ee0-8c0b-5e6762216f05\") " Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.483284 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91d90576-b6ef-4ee0-8c0b-5e6762216f05-utilities" (OuterVolumeSpecName: "utilities") pod "91d90576-b6ef-4ee0-8c0b-5e6762216f05" (UID: "91d90576-b6ef-4ee0-8c0b-5e6762216f05"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.528223 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91d90576-b6ef-4ee0-8c0b-5e6762216f05-kube-api-access-j6zqm" (OuterVolumeSpecName: "kube-api-access-j6zqm") pod "91d90576-b6ef-4ee0-8c0b-5e6762216f05" (UID: "91d90576-b6ef-4ee0-8c0b-5e6762216f05"). InnerVolumeSpecName "kube-api-access-j6zqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.553780 4624 generic.go:334] "Generic (PLEG): container finished" podID="91d90576-b6ef-4ee0-8c0b-5e6762216f05" containerID="fd1010ce1e3239a0f532630b858315498f7603f5fbfb28d6b3aa9b35b1b3e249" exitCode=0 Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.553826 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tqvm5" event={"ID":"91d90576-b6ef-4ee0-8c0b-5e6762216f05","Type":"ContainerDied","Data":"fd1010ce1e3239a0f532630b858315498f7603f5fbfb28d6b3aa9b35b1b3e249"} Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.553859 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tqvm5" event={"ID":"91d90576-b6ef-4ee0-8c0b-5e6762216f05","Type":"ContainerDied","Data":"a7573d8566a727761b45060d00b8b426c5a2af255de54c917984cafea131d4d9"} Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.553928 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tqvm5" Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.556627 4624 scope.go:117] "RemoveContainer" containerID="fd1010ce1e3239a0f532630b858315498f7603f5fbfb28d6b3aa9b35b1b3e249" Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.583741 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91d90576-b6ef-4ee0-8c0b-5e6762216f05-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.583771 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6zqm\" (UniqueName: \"kubernetes.io/projected/91d90576-b6ef-4ee0-8c0b-5e6762216f05-kube-api-access-j6zqm\") on node \"crc\" DevicePath \"\"" Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.620805 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91d90576-b6ef-4ee0-8c0b-5e6762216f05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91d90576-b6ef-4ee0-8c0b-5e6762216f05" (UID: "91d90576-b6ef-4ee0-8c0b-5e6762216f05"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.627163 4624 scope.go:117] "RemoveContainer" containerID="95dd3bed3095d5f8dfd5bc3ec94824c1a111fc3865f1e08fda13c9b60de0fbaa" Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.647666 4624 scope.go:117] "RemoveContainer" containerID="5cd688c0e2ae229dd1775a5c4b7cffae8e262b7f05a2cfa3c517ad8a5bdce2d6" Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.686183 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91d90576-b6ef-4ee0-8c0b-5e6762216f05-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.709815 4624 scope.go:117] "RemoveContainer" containerID="fd1010ce1e3239a0f532630b858315498f7603f5fbfb28d6b3aa9b35b1b3e249" Oct 08 15:33:34 crc kubenswrapper[4624]: E1008 15:33:34.713590 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd1010ce1e3239a0f532630b858315498f7603f5fbfb28d6b3aa9b35b1b3e249\": container with ID starting with fd1010ce1e3239a0f532630b858315498f7603f5fbfb28d6b3aa9b35b1b3e249 not found: ID does not exist" containerID="fd1010ce1e3239a0f532630b858315498f7603f5fbfb28d6b3aa9b35b1b3e249" Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.714525 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd1010ce1e3239a0f532630b858315498f7603f5fbfb28d6b3aa9b35b1b3e249"} err="failed to get container status \"fd1010ce1e3239a0f532630b858315498f7603f5fbfb28d6b3aa9b35b1b3e249\": rpc error: code = NotFound desc = could not find container \"fd1010ce1e3239a0f532630b858315498f7603f5fbfb28d6b3aa9b35b1b3e249\": container with ID starting with fd1010ce1e3239a0f532630b858315498f7603f5fbfb28d6b3aa9b35b1b3e249 not found: ID does not exist" Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.714597 4624 scope.go:117] "RemoveContainer" containerID="95dd3bed3095d5f8dfd5bc3ec94824c1a111fc3865f1e08fda13c9b60de0fbaa" Oct 08 15:33:34 crc kubenswrapper[4624]: E1008 15:33:34.715189 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95dd3bed3095d5f8dfd5bc3ec94824c1a111fc3865f1e08fda13c9b60de0fbaa\": container with ID starting with 95dd3bed3095d5f8dfd5bc3ec94824c1a111fc3865f1e08fda13c9b60de0fbaa not found: ID does not exist" containerID="95dd3bed3095d5f8dfd5bc3ec94824c1a111fc3865f1e08fda13c9b60de0fbaa" Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.715230 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95dd3bed3095d5f8dfd5bc3ec94824c1a111fc3865f1e08fda13c9b60de0fbaa"} err="failed to get container status \"95dd3bed3095d5f8dfd5bc3ec94824c1a111fc3865f1e08fda13c9b60de0fbaa\": rpc error: code = NotFound desc = could not find container \"95dd3bed3095d5f8dfd5bc3ec94824c1a111fc3865f1e08fda13c9b60de0fbaa\": container with ID starting with 95dd3bed3095d5f8dfd5bc3ec94824c1a111fc3865f1e08fda13c9b60de0fbaa not found: ID does not exist" Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.715261 4624 scope.go:117] "RemoveContainer" containerID="5cd688c0e2ae229dd1775a5c4b7cffae8e262b7f05a2cfa3c517ad8a5bdce2d6" Oct 08 15:33:34 crc kubenswrapper[4624]: E1008 15:33:34.715729 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5cd688c0e2ae229dd1775a5c4b7cffae8e262b7f05a2cfa3c517ad8a5bdce2d6\": container with ID starting with 5cd688c0e2ae229dd1775a5c4b7cffae8e262b7f05a2cfa3c517ad8a5bdce2d6 not found: ID does not exist" containerID="5cd688c0e2ae229dd1775a5c4b7cffae8e262b7f05a2cfa3c517ad8a5bdce2d6" Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.715786 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cd688c0e2ae229dd1775a5c4b7cffae8e262b7f05a2cfa3c517ad8a5bdce2d6"} err="failed to get container status \"5cd688c0e2ae229dd1775a5c4b7cffae8e262b7f05a2cfa3c517ad8a5bdce2d6\": rpc error: code = NotFound desc = could not find container \"5cd688c0e2ae229dd1775a5c4b7cffae8e262b7f05a2cfa3c517ad8a5bdce2d6\": container with ID starting with 5cd688c0e2ae229dd1775a5c4b7cffae8e262b7f05a2cfa3c517ad8a5bdce2d6 not found: ID does not exist" Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.914850 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tqvm5"] Oct 08 15:33:34 crc kubenswrapper[4624]: I1008 15:33:34.922699 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tqvm5"] Oct 08 15:33:35 crc kubenswrapper[4624]: I1008 15:33:35.476628 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91d90576-b6ef-4ee0-8c0b-5e6762216f05" path="/var/lib/kubelet/pods/91d90576-b6ef-4ee0-8c0b-5e6762216f05/volumes" Oct 08 15:34:30 crc kubenswrapper[4624]: I1008 15:34:30.076858 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:34:30 crc kubenswrapper[4624]: I1008 15:34:30.078171 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:35:00 crc kubenswrapper[4624]: I1008 15:35:00.076630 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:35:00 crc kubenswrapper[4624]: I1008 15:35:00.077161 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:35:30 crc kubenswrapper[4624]: I1008 15:35:30.077607 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:35:30 crc kubenswrapper[4624]: I1008 15:35:30.080677 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:35:30 crc kubenswrapper[4624]: I1008 15:35:30.080744 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 15:35:30 crc kubenswrapper[4624]: I1008 15:35:30.082259 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 15:35:30 crc kubenswrapper[4624]: I1008 15:35:30.089150 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" gracePeriod=600 Oct 08 15:35:30 crc kubenswrapper[4624]: I1008 15:35:30.585543 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" exitCode=0 Oct 08 15:35:30 crc kubenswrapper[4624]: I1008 15:35:30.585615 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b"} Oct 08 15:35:30 crc kubenswrapper[4624]: I1008 15:35:30.586017 4624 scope.go:117] "RemoveContainer" containerID="1f436fa71e649ae33432d5cc5ffd7f1d451c5e440a8bbbd964c9f92129f31ff9" Oct 08 15:35:30 crc kubenswrapper[4624]: E1008 15:35:30.808437 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:35:31 crc kubenswrapper[4624]: I1008 15:35:31.598133 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:35:31 crc kubenswrapper[4624]: E1008 15:35:31.598419 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:35:43 crc kubenswrapper[4624]: I1008 15:35:43.467255 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:35:43 crc kubenswrapper[4624]: E1008 15:35:43.468245 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:35:55 crc kubenswrapper[4624]: I1008 15:35:55.472286 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:35:55 crc kubenswrapper[4624]: E1008 15:35:55.472998 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:36:09 crc kubenswrapper[4624]: I1008 15:36:09.466433 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:36:09 crc kubenswrapper[4624]: E1008 15:36:09.467216 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:36:23 crc kubenswrapper[4624]: I1008 15:36:23.466625 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:36:23 crc kubenswrapper[4624]: E1008 15:36:23.468217 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:36:38 crc kubenswrapper[4624]: I1008 15:36:38.466120 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:36:38 crc kubenswrapper[4624]: E1008 15:36:38.467356 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:36:51 crc kubenswrapper[4624]: I1008 15:36:51.541017 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4qkrj"] Oct 08 15:36:51 crc kubenswrapper[4624]: E1008 15:36:51.547589 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d90576-b6ef-4ee0-8c0b-5e6762216f05" containerName="registry-server" Oct 08 15:36:51 crc kubenswrapper[4624]: I1008 15:36:51.548535 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d90576-b6ef-4ee0-8c0b-5e6762216f05" containerName="registry-server" Oct 08 15:36:51 crc kubenswrapper[4624]: E1008 
15:36:51.548814 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d90576-b6ef-4ee0-8c0b-5e6762216f05" containerName="extract-utilities" Oct 08 15:36:51 crc kubenswrapper[4624]: I1008 15:36:51.548833 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d90576-b6ef-4ee0-8c0b-5e6762216f05" containerName="extract-utilities" Oct 08 15:36:51 crc kubenswrapper[4624]: E1008 15:36:51.548858 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d90576-b6ef-4ee0-8c0b-5e6762216f05" containerName="extract-content" Oct 08 15:36:51 crc kubenswrapper[4624]: I1008 15:36:51.548870 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d90576-b6ef-4ee0-8c0b-5e6762216f05" containerName="extract-content" Oct 08 15:36:51 crc kubenswrapper[4624]: I1008 15:36:51.550955 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="91d90576-b6ef-4ee0-8c0b-5e6762216f05" containerName="registry-server" Oct 08 15:36:51 crc kubenswrapper[4624]: I1008 15:36:51.559726 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qkrj" Oct 08 15:36:51 crc kubenswrapper[4624]: I1008 15:36:51.658707 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qkrj"] Oct 08 15:36:51 crc kubenswrapper[4624]: I1008 15:36:51.707221 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlhf7\" (UniqueName: \"kubernetes.io/projected/3a1eaac6-1a43-4506-aa8a-838c99a06399-kube-api-access-zlhf7\") pod \"redhat-marketplace-4qkrj\" (UID: \"3a1eaac6-1a43-4506-aa8a-838c99a06399\") " pod="openshift-marketplace/redhat-marketplace-4qkrj" Oct 08 15:36:51 crc kubenswrapper[4624]: I1008 15:36:51.707403 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a1eaac6-1a43-4506-aa8a-838c99a06399-utilities\") pod \"redhat-marketplace-4qkrj\" (UID: \"3a1eaac6-1a43-4506-aa8a-838c99a06399\") " pod="openshift-marketplace/redhat-marketplace-4qkrj" Oct 08 15:36:51 crc kubenswrapper[4624]: I1008 15:36:51.707491 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a1eaac6-1a43-4506-aa8a-838c99a06399-catalog-content\") pod \"redhat-marketplace-4qkrj\" (UID: \"3a1eaac6-1a43-4506-aa8a-838c99a06399\") " pod="openshift-marketplace/redhat-marketplace-4qkrj" Oct 08 15:36:51 crc kubenswrapper[4624]: I1008 15:36:51.810406 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlhf7\" (UniqueName: \"kubernetes.io/projected/3a1eaac6-1a43-4506-aa8a-838c99a06399-kube-api-access-zlhf7\") pod \"redhat-marketplace-4qkrj\" (UID: \"3a1eaac6-1a43-4506-aa8a-838c99a06399\") " pod="openshift-marketplace/redhat-marketplace-4qkrj" Oct 08 15:36:51 crc kubenswrapper[4624]: I1008 15:36:51.811105 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a1eaac6-1a43-4506-aa8a-838c99a06399-utilities\") pod \"redhat-marketplace-4qkrj\" (UID: \"3a1eaac6-1a43-4506-aa8a-838c99a06399\") " pod="openshift-marketplace/redhat-marketplace-4qkrj" Oct 08 15:36:51 crc kubenswrapper[4624]: I1008 15:36:51.811188 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3a1eaac6-1a43-4506-aa8a-838c99a06399-catalog-content\") pod \"redhat-marketplace-4qkrj\" (UID: \"3a1eaac6-1a43-4506-aa8a-838c99a06399\") " pod="openshift-marketplace/redhat-marketplace-4qkrj" Oct 08 15:36:51 crc kubenswrapper[4624]: I1008 15:36:51.813894 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a1eaac6-1a43-4506-aa8a-838c99a06399-utilities\") pod \"redhat-marketplace-4qkrj\" (UID: \"3a1eaac6-1a43-4506-aa8a-838c99a06399\") " pod="openshift-marketplace/redhat-marketplace-4qkrj" Oct 08 15:36:51 crc kubenswrapper[4624]: I1008 15:36:51.813839 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a1eaac6-1a43-4506-aa8a-838c99a06399-catalog-content\") pod \"redhat-marketplace-4qkrj\" (UID: \"3a1eaac6-1a43-4506-aa8a-838c99a06399\") " pod="openshift-marketplace/redhat-marketplace-4qkrj" Oct 08 15:36:51 crc kubenswrapper[4624]: I1008 15:36:51.858567 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlhf7\" (UniqueName: \"kubernetes.io/projected/3a1eaac6-1a43-4506-aa8a-838c99a06399-kube-api-access-zlhf7\") pod \"redhat-marketplace-4qkrj\" (UID: \"3a1eaac6-1a43-4506-aa8a-838c99a06399\") " pod="openshift-marketplace/redhat-marketplace-4qkrj" Oct 08 15:36:51 crc kubenswrapper[4624]: I1008 15:36:51.894263 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qkrj" Oct 08 15:36:53 crc kubenswrapper[4624]: I1008 15:36:53.217011 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qkrj"] Oct 08 15:36:53 crc kubenswrapper[4624]: I1008 15:36:53.376492 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qkrj" event={"ID":"3a1eaac6-1a43-4506-aa8a-838c99a06399","Type":"ContainerStarted","Data":"3b529fa915162f75b2a5b0eb5a5a121550362e6c043d631bf940b2d18115ffba"} Oct 08 15:36:53 crc kubenswrapper[4624]: I1008 15:36:53.465770 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:36:53 crc kubenswrapper[4624]: E1008 15:36:53.466032 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:36:54 crc kubenswrapper[4624]: I1008 15:36:54.388129 4624 generic.go:334] "Generic (PLEG): container finished" podID="3a1eaac6-1a43-4506-aa8a-838c99a06399" containerID="997b0283639e016c2d224bf76ef4fefbbb06597973869bb45cd099073cc9d362" exitCode=0 Oct 08 15:36:54 crc kubenswrapper[4624]: I1008 15:36:54.388358 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qkrj" event={"ID":"3a1eaac6-1a43-4506-aa8a-838c99a06399","Type":"ContainerDied","Data":"997b0283639e016c2d224bf76ef4fefbbb06597973869bb45cd099073cc9d362"} Oct 08 15:36:54 crc kubenswrapper[4624]: I1008 15:36:54.403661 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 15:36:55 crc kubenswrapper[4624]: I1008 15:36:55.398292 4624 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qkrj" event={"ID":"3a1eaac6-1a43-4506-aa8a-838c99a06399","Type":"ContainerStarted","Data":"ab840be0f3479955a312d126cd93ed38ebaf1682ba4109e48c44619fb0ae5019"} Oct 08 15:36:56 crc kubenswrapper[4624]: I1008 15:36:56.413627 4624 generic.go:334] "Generic (PLEG): container finished" podID="3a1eaac6-1a43-4506-aa8a-838c99a06399" containerID="ab840be0f3479955a312d126cd93ed38ebaf1682ba4109e48c44619fb0ae5019" exitCode=0 Oct 08 15:36:56 crc kubenswrapper[4624]: I1008 15:36:56.413911 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qkrj" event={"ID":"3a1eaac6-1a43-4506-aa8a-838c99a06399","Type":"ContainerDied","Data":"ab840be0f3479955a312d126cd93ed38ebaf1682ba4109e48c44619fb0ae5019"} Oct 08 15:36:57 crc kubenswrapper[4624]: I1008 15:36:57.434170 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qkrj" event={"ID":"3a1eaac6-1a43-4506-aa8a-838c99a06399","Type":"ContainerStarted","Data":"840bf420cee924b2dd94c18c4d6d2f79babfc687b5537a147b95aa214fb0bc79"} Oct 08 15:37:01 crc kubenswrapper[4624]: I1008 15:37:01.896514 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4qkrj" Oct 08 15:37:01 crc kubenswrapper[4624]: I1008 15:37:01.899239 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4qkrj" Oct 08 15:37:01 crc kubenswrapper[4624]: I1008 15:37:01.959656 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4qkrj" Oct 08 15:37:01 crc kubenswrapper[4624]: I1008 15:37:01.991059 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4qkrj" podStartSLOduration=8.554312364 podStartE2EDuration="10.989332644s" podCreationTimestamp="2025-10-08 15:36:51 +0000 UTC" firstStartedPulling="2025-10-08 15:36:54.396241533 +0000 UTC m=+4439.547176610" lastFinishedPulling="2025-10-08 15:36:56.831261813 +0000 UTC m=+4441.982196890" observedRunningTime="2025-10-08 15:36:57.463688522 +0000 UTC m=+4442.614623619" watchObservedRunningTime="2025-10-08 15:37:01.989332644 +0000 UTC m=+4447.140267721" Oct 08 15:37:02 crc kubenswrapper[4624]: I1008 15:37:02.541344 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4qkrj" Oct 08 15:37:02 crc kubenswrapper[4624]: I1008 15:37:02.610251 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qkrj"] Oct 08 15:37:04 crc kubenswrapper[4624]: I1008 15:37:04.502830 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4qkrj" podUID="3a1eaac6-1a43-4506-aa8a-838c99a06399" containerName="registry-server" containerID="cri-o://840bf420cee924b2dd94c18c4d6d2f79babfc687b5537a147b95aa214fb0bc79" gracePeriod=2 Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.172997 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qkrj" Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.296837 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a1eaac6-1a43-4506-aa8a-838c99a06399-utilities\") pod \"3a1eaac6-1a43-4506-aa8a-838c99a06399\" (UID: \"3a1eaac6-1a43-4506-aa8a-838c99a06399\") " Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.297219 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a1eaac6-1a43-4506-aa8a-838c99a06399-catalog-content\") pod \"3a1eaac6-1a43-4506-aa8a-838c99a06399\" (UID: \"3a1eaac6-1a43-4506-aa8a-838c99a06399\") " Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.297282 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlhf7\" (UniqueName: \"kubernetes.io/projected/3a1eaac6-1a43-4506-aa8a-838c99a06399-kube-api-access-zlhf7\") pod \"3a1eaac6-1a43-4506-aa8a-838c99a06399\" (UID: \"3a1eaac6-1a43-4506-aa8a-838c99a06399\") " Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.300944 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a1eaac6-1a43-4506-aa8a-838c99a06399-utilities" (OuterVolumeSpecName: "utilities") pod "3a1eaac6-1a43-4506-aa8a-838c99a06399" (UID: "3a1eaac6-1a43-4506-aa8a-838c99a06399"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.313023 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a1eaac6-1a43-4506-aa8a-838c99a06399-kube-api-access-zlhf7" (OuterVolumeSpecName: "kube-api-access-zlhf7") pod "3a1eaac6-1a43-4506-aa8a-838c99a06399" (UID: "3a1eaac6-1a43-4506-aa8a-838c99a06399"). InnerVolumeSpecName "kube-api-access-zlhf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.316439 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a1eaac6-1a43-4506-aa8a-838c99a06399-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a1eaac6-1a43-4506-aa8a-838c99a06399" (UID: "3a1eaac6-1a43-4506-aa8a-838c99a06399"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.400258 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a1eaac6-1a43-4506-aa8a-838c99a06399-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.400291 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a1eaac6-1a43-4506-aa8a-838c99a06399-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.400302 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlhf7\" (UniqueName: \"kubernetes.io/projected/3a1eaac6-1a43-4506-aa8a-838c99a06399-kube-api-access-zlhf7\") on node \"crc\" DevicePath \"\"" Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.515980 4624 generic.go:334] "Generic (PLEG): container finished" podID="3a1eaac6-1a43-4506-aa8a-838c99a06399" containerID="840bf420cee924b2dd94c18c4d6d2f79babfc687b5537a147b95aa214fb0bc79" exitCode=0 Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.516022 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qkrj" event={"ID":"3a1eaac6-1a43-4506-aa8a-838c99a06399","Type":"ContainerDied","Data":"840bf420cee924b2dd94c18c4d6d2f79babfc687b5537a147b95aa214fb0bc79"} Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.516050 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qkrj" event={"ID":"3a1eaac6-1a43-4506-aa8a-838c99a06399","Type":"ContainerDied","Data":"3b529fa915162f75b2a5b0eb5a5a121550362e6c043d631bf940b2d18115ffba"} Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.516069 4624 scope.go:117] "RemoveContainer" containerID="840bf420cee924b2dd94c18c4d6d2f79babfc687b5537a147b95aa214fb0bc79" Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.516097 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qkrj" Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.554105 4624 scope.go:117] "RemoveContainer" containerID="ab840be0f3479955a312d126cd93ed38ebaf1682ba4109e48c44619fb0ae5019" Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.558382 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qkrj"] Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.572409 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qkrj"] Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.582848 4624 scope.go:117] "RemoveContainer" containerID="997b0283639e016c2d224bf76ef4fefbbb06597973869bb45cd099073cc9d362" Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.636029 4624 scope.go:117] "RemoveContainer" containerID="840bf420cee924b2dd94c18c4d6d2f79babfc687b5537a147b95aa214fb0bc79" Oct 08 15:37:05 crc kubenswrapper[4624]: E1008 15:37:05.641046 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"840bf420cee924b2dd94c18c4d6d2f79babfc687b5537a147b95aa214fb0bc79\": container with ID starting with 840bf420cee924b2dd94c18c4d6d2f79babfc687b5537a147b95aa214fb0bc79 not found: ID does not exist" containerID="840bf420cee924b2dd94c18c4d6d2f79babfc687b5537a147b95aa214fb0bc79" Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.641914 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"840bf420cee924b2dd94c18c4d6d2f79babfc687b5537a147b95aa214fb0bc79"} err="failed to get container status \"840bf420cee924b2dd94c18c4d6d2f79babfc687b5537a147b95aa214fb0bc79\": rpc error: code = NotFound desc = could not find container \"840bf420cee924b2dd94c18c4d6d2f79babfc687b5537a147b95aa214fb0bc79\": container with ID starting with 840bf420cee924b2dd94c18c4d6d2f79babfc687b5537a147b95aa214fb0bc79 not found: ID does not exist" Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.641964 4624 scope.go:117] "RemoveContainer" containerID="ab840be0f3479955a312d126cd93ed38ebaf1682ba4109e48c44619fb0ae5019" Oct 08 15:37:05 crc kubenswrapper[4624]: E1008 15:37:05.643674 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab840be0f3479955a312d126cd93ed38ebaf1682ba4109e48c44619fb0ae5019\": container with ID starting with ab840be0f3479955a312d126cd93ed38ebaf1682ba4109e48c44619fb0ae5019 not found: ID does not exist" containerID="ab840be0f3479955a312d126cd93ed38ebaf1682ba4109e48c44619fb0ae5019" Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.643715 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab840be0f3479955a312d126cd93ed38ebaf1682ba4109e48c44619fb0ae5019"} err="failed to get container status \"ab840be0f3479955a312d126cd93ed38ebaf1682ba4109e48c44619fb0ae5019\": rpc error: code = NotFound desc = could not find container \"ab840be0f3479955a312d126cd93ed38ebaf1682ba4109e48c44619fb0ae5019\": container with ID starting with ab840be0f3479955a312d126cd93ed38ebaf1682ba4109e48c44619fb0ae5019 not found: ID does not exist" Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.643745 4624 scope.go:117] "RemoveContainer" containerID="997b0283639e016c2d224bf76ef4fefbbb06597973869bb45cd099073cc9d362" Oct 08 15:37:05 crc kubenswrapper[4624]: E1008 15:37:05.644193 4624 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"997b0283639e016c2d224bf76ef4fefbbb06597973869bb45cd099073cc9d362\": container with ID starting with 997b0283639e016c2d224bf76ef4fefbbb06597973869bb45cd099073cc9d362 not found: ID does not exist" containerID="997b0283639e016c2d224bf76ef4fefbbb06597973869bb45cd099073cc9d362" Oct 08 15:37:05 crc kubenswrapper[4624]: I1008 15:37:05.644215 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"997b0283639e016c2d224bf76ef4fefbbb06597973869bb45cd099073cc9d362"} err="failed to get container status \"997b0283639e016c2d224bf76ef4fefbbb06597973869bb45cd099073cc9d362\": rpc error: code = NotFound desc = could not find container \"997b0283639e016c2d224bf76ef4fefbbb06597973869bb45cd099073cc9d362\": container with ID starting with 997b0283639e016c2d224bf76ef4fefbbb06597973869bb45cd099073cc9d362 not found: ID does not exist" Oct 08 15:37:07 crc kubenswrapper[4624]: I1008 15:37:07.466037 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:37:07 crc kubenswrapper[4624]: E1008 15:37:07.466946 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:37:07 crc kubenswrapper[4624]: I1008 15:37:07.478694 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a1eaac6-1a43-4506-aa8a-838c99a06399" path="/var/lib/kubelet/pods/3a1eaac6-1a43-4506-aa8a-838c99a06399/volumes" Oct 08 15:37:19 crc kubenswrapper[4624]: I1008 15:37:19.467141 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:37:19 crc kubenswrapper[4624]: E1008 15:37:19.468160 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:37:30 crc kubenswrapper[4624]: I1008 15:37:30.466137 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:37:30 crc kubenswrapper[4624]: E1008 15:37:30.467086 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:37:41 crc kubenswrapper[4624]: I1008 15:37:41.466447 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:37:41 crc kubenswrapper[4624]: E1008 15:37:41.467628 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:37:53 crc kubenswrapper[4624]: I1008 15:37:53.466349 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:37:53 crc kubenswrapper[4624]: E1008 15:37:53.467256 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:38:04 crc kubenswrapper[4624]: I1008 15:38:04.466771 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:38:04 crc kubenswrapper[4624]: E1008 15:38:04.467621 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:38:16 crc kubenswrapper[4624]: I1008 15:38:16.466495 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:38:16 crc kubenswrapper[4624]: E1008 15:38:16.467503 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:38:27 crc kubenswrapper[4624]: I1008 15:38:27.465587 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:38:27 crc kubenswrapper[4624]: E1008 15:38:27.466347 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:38:42 crc kubenswrapper[4624]: I1008 15:38:42.465943 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:38:42 crc kubenswrapper[4624]: E1008 15:38:42.466872 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:38:53 crc kubenswrapper[4624]: I1008 15:38:53.466496 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:38:53 crc kubenswrapper[4624]: E1008 15:38:53.467371 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:39:08 crc kubenswrapper[4624]: I1008 15:39:08.465775 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:39:08 crc kubenswrapper[4624]: E1008 15:39:08.466418 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:39:21 crc kubenswrapper[4624]: I1008 15:39:21.466369 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:39:21 crc kubenswrapper[4624]: E1008 15:39:21.467304 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:39:33 crc kubenswrapper[4624]: I1008 15:39:33.466265 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:39:33 crc kubenswrapper[4624]: E1008 15:39:33.467240 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:39:44 crc kubenswrapper[4624]: I1008 15:39:44.466102 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:39:44 crc kubenswrapper[4624]: E1008 15:39:44.467054 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" 
podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:39:55 crc kubenswrapper[4624]: I1008 15:39:55.477539 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:39:55 crc kubenswrapper[4624]: E1008 15:39:55.478378 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:40:10 crc kubenswrapper[4624]: I1008 15:40:10.467215 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:40:10 crc kubenswrapper[4624]: E1008 15:40:10.468337 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:40:22 crc kubenswrapper[4624]: I1008 15:40:22.810866 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fnmx5"] Oct 08 15:40:22 crc kubenswrapper[4624]: E1008 15:40:22.811924 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a1eaac6-1a43-4506-aa8a-838c99a06399" containerName="extract-utilities" Oct 08 15:40:22 crc kubenswrapper[4624]: I1008 15:40:22.811942 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a1eaac6-1a43-4506-aa8a-838c99a06399" containerName="extract-utilities" Oct 08 15:40:22 crc kubenswrapper[4624]: E1008 15:40:22.811981 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a1eaac6-1a43-4506-aa8a-838c99a06399" containerName="extract-content" Oct 08 15:40:22 crc kubenswrapper[4624]: I1008 15:40:22.811991 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a1eaac6-1a43-4506-aa8a-838c99a06399" containerName="extract-content" Oct 08 15:40:22 crc kubenswrapper[4624]: E1008 15:40:22.812007 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a1eaac6-1a43-4506-aa8a-838c99a06399" containerName="registry-server" Oct 08 15:40:22 crc kubenswrapper[4624]: I1008 15:40:22.812016 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a1eaac6-1a43-4506-aa8a-838c99a06399" containerName="registry-server" Oct 08 15:40:22 crc kubenswrapper[4624]: I1008 15:40:22.812271 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a1eaac6-1a43-4506-aa8a-838c99a06399" containerName="registry-server" Oct 08 15:40:22 crc kubenswrapper[4624]: I1008 15:40:22.813799 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fnmx5" Oct 08 15:40:22 crc kubenswrapper[4624]: I1008 15:40:22.857250 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fnmx5"] Oct 08 15:40:22 crc kubenswrapper[4624]: I1008 15:40:22.987186 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhxsw\" (UniqueName: \"kubernetes.io/projected/9fe20fef-fa27-4437-bf80-ea7b72c844dd-kube-api-access-dhxsw\") pod \"community-operators-fnmx5\" (UID: \"9fe20fef-fa27-4437-bf80-ea7b72c844dd\") " pod="openshift-marketplace/community-operators-fnmx5" Oct 08 15:40:22 crc kubenswrapper[4624]: I1008 15:40:22.987295 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fe20fef-fa27-4437-bf80-ea7b72c844dd-catalog-content\") pod \"community-operators-fnmx5\" (UID: \"9fe20fef-fa27-4437-bf80-ea7b72c844dd\") " pod="openshift-marketplace/community-operators-fnmx5" Oct 08 15:40:22 crc kubenswrapper[4624]: I1008 15:40:22.987321 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fe20fef-fa27-4437-bf80-ea7b72c844dd-utilities\") pod \"community-operators-fnmx5\" (UID: \"9fe20fef-fa27-4437-bf80-ea7b72c844dd\") " pod="openshift-marketplace/community-operators-fnmx5" Oct 08 15:40:23 crc kubenswrapper[4624]: I1008 15:40:23.089121 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fe20fef-fa27-4437-bf80-ea7b72c844dd-utilities\") pod \"community-operators-fnmx5\" (UID: \"9fe20fef-fa27-4437-bf80-ea7b72c844dd\") " pod="openshift-marketplace/community-operators-fnmx5" Oct 08 15:40:23 crc kubenswrapper[4624]: I1008 15:40:23.089352 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhxsw\" (UniqueName: \"kubernetes.io/projected/9fe20fef-fa27-4437-bf80-ea7b72c844dd-kube-api-access-dhxsw\") pod \"community-operators-fnmx5\" (UID: \"9fe20fef-fa27-4437-bf80-ea7b72c844dd\") " pod="openshift-marketplace/community-operators-fnmx5" Oct 08 15:40:23 crc kubenswrapper[4624]: I1008 15:40:23.089436 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fe20fef-fa27-4437-bf80-ea7b72c844dd-catalog-content\") pod \"community-operators-fnmx5\" (UID: \"9fe20fef-fa27-4437-bf80-ea7b72c844dd\") " pod="openshift-marketplace/community-operators-fnmx5" Oct 08 15:40:23 crc kubenswrapper[4624]: I1008 15:40:23.089698 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fe20fef-fa27-4437-bf80-ea7b72c844dd-utilities\") pod \"community-operators-fnmx5\" (UID: \"9fe20fef-fa27-4437-bf80-ea7b72c844dd\") " pod="openshift-marketplace/community-operators-fnmx5" Oct 08 15:40:23 crc kubenswrapper[4624]: I1008 15:40:23.089799 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fe20fef-fa27-4437-bf80-ea7b72c844dd-catalog-content\") pod \"community-operators-fnmx5\" (UID: \"9fe20fef-fa27-4437-bf80-ea7b72c844dd\") " pod="openshift-marketplace/community-operators-fnmx5" Oct 08 15:40:23 crc kubenswrapper[4624]: I1008 15:40:23.109190 4624 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dhxsw\" (UniqueName: \"kubernetes.io/projected/9fe20fef-fa27-4437-bf80-ea7b72c844dd-kube-api-access-dhxsw\") pod \"community-operators-fnmx5\" (UID: \"9fe20fef-fa27-4437-bf80-ea7b72c844dd\") " pod="openshift-marketplace/community-operators-fnmx5" Oct 08 15:40:23 crc kubenswrapper[4624]: I1008 15:40:23.148804 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fnmx5" Oct 08 15:40:23 crc kubenswrapper[4624]: I1008 15:40:23.619375 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fnmx5"] Oct 08 15:40:24 crc kubenswrapper[4624]: I1008 15:40:24.314986 4624 generic.go:334] "Generic (PLEG): container finished" podID="9fe20fef-fa27-4437-bf80-ea7b72c844dd" containerID="3a8c185bf40ce735893f72dc345d848333cf9025f7c8ee19e5fe6b80082cd748" exitCode=0 Oct 08 15:40:24 crc kubenswrapper[4624]: I1008 15:40:24.315143 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnmx5" event={"ID":"9fe20fef-fa27-4437-bf80-ea7b72c844dd","Type":"ContainerDied","Data":"3a8c185bf40ce735893f72dc345d848333cf9025f7c8ee19e5fe6b80082cd748"} Oct 08 15:40:24 crc kubenswrapper[4624]: I1008 15:40:24.315407 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnmx5" event={"ID":"9fe20fef-fa27-4437-bf80-ea7b72c844dd","Type":"ContainerStarted","Data":"ac8b46474691cd7fd390d6f336eac3308a28eada7bc59b85117c68f94e48f341"} Oct 08 15:40:25 crc kubenswrapper[4624]: I1008 15:40:25.473520 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:40:25 crc kubenswrapper[4624]: E1008 15:40:25.474075 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:40:26 crc kubenswrapper[4624]: I1008 15:40:26.334579 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnmx5" event={"ID":"9fe20fef-fa27-4437-bf80-ea7b72c844dd","Type":"ContainerStarted","Data":"a4a3ce37fef66275314e96a6a67350acc13b2eb06fb45b84acc32acd21dad048"} Oct 08 15:40:28 crc kubenswrapper[4624]: I1008 15:40:28.355700 4624 generic.go:334] "Generic (PLEG): container finished" podID="9fe20fef-fa27-4437-bf80-ea7b72c844dd" containerID="a4a3ce37fef66275314e96a6a67350acc13b2eb06fb45b84acc32acd21dad048" exitCode=0 Oct 08 15:40:28 crc kubenswrapper[4624]: I1008 15:40:28.356052 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnmx5" event={"ID":"9fe20fef-fa27-4437-bf80-ea7b72c844dd","Type":"ContainerDied","Data":"a4a3ce37fef66275314e96a6a67350acc13b2eb06fb45b84acc32acd21dad048"} Oct 08 15:40:29 crc kubenswrapper[4624]: I1008 15:40:29.365459 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnmx5" event={"ID":"9fe20fef-fa27-4437-bf80-ea7b72c844dd","Type":"ContainerStarted","Data":"ba30b345728c36cef696602b2c8e56a29cf2c08160f5993bbc7a4b2deb025247"} Oct 08 15:40:29 crc kubenswrapper[4624]: I1008 15:40:29.393295 4624 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fnmx5" podStartSLOduration=2.866147407 podStartE2EDuration="7.393273004s" podCreationTimestamp="2025-10-08 15:40:22 +0000 UTC" firstStartedPulling="2025-10-08 15:40:24.318121273 +0000 UTC m=+4649.469056350" lastFinishedPulling="2025-10-08 15:40:28.84524687 +0000 UTC m=+4653.996181947" observedRunningTime="2025-10-08 15:40:29.384577208 +0000 UTC m=+4654.535512295" watchObservedRunningTime="2025-10-08 15:40:29.393273004 +0000 UTC m=+4654.544208081" Oct 08 15:40:33 crc kubenswrapper[4624]: I1008 15:40:33.149505 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fnmx5" Oct 08 15:40:33 crc kubenswrapper[4624]: I1008 15:40:33.150052 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fnmx5" Oct 08 15:40:34 crc kubenswrapper[4624]: I1008 15:40:34.197382 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fnmx5" podUID="9fe20fef-fa27-4437-bf80-ea7b72c844dd" containerName="registry-server" probeResult="failure" output=< Oct 08 15:40:34 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 15:40:34 crc kubenswrapper[4624]: > Oct 08 15:40:40 crc kubenswrapper[4624]: I1008 15:40:40.467694 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:40:41 crc kubenswrapper[4624]: I1008 15:40:41.478693 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"9bd0a8d3164fd7a83510f2ba7ae6c8925f3828524cdd1fe2eacfa02b05ada450"} Oct 08 15:40:43 crc kubenswrapper[4624]: I1008 15:40:43.203232 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fnmx5" Oct 08 15:40:43 crc kubenswrapper[4624]: I1008 15:40:43.262584 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fnmx5" Oct 08 15:40:43 crc kubenswrapper[4624]: I1008 15:40:43.441201 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fnmx5"] Oct 08 15:40:44 crc kubenswrapper[4624]: I1008 15:40:44.506141 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fnmx5" podUID="9fe20fef-fa27-4437-bf80-ea7b72c844dd" containerName="registry-server" containerID="cri-o://ba30b345728c36cef696602b2c8e56a29cf2c08160f5993bbc7a4b2deb025247" gracePeriod=2 Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.008866 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fnmx5" Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.025294 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fe20fef-fa27-4437-bf80-ea7b72c844dd-utilities\") pod \"9fe20fef-fa27-4437-bf80-ea7b72c844dd\" (UID: \"9fe20fef-fa27-4437-bf80-ea7b72c844dd\") " Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.025383 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhxsw\" (UniqueName: \"kubernetes.io/projected/9fe20fef-fa27-4437-bf80-ea7b72c844dd-kube-api-access-dhxsw\") pod \"9fe20fef-fa27-4437-bf80-ea7b72c844dd\" (UID: \"9fe20fef-fa27-4437-bf80-ea7b72c844dd\") " Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.025534 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fe20fef-fa27-4437-bf80-ea7b72c844dd-catalog-content\") pod \"9fe20fef-fa27-4437-bf80-ea7b72c844dd\" (UID: \"9fe20fef-fa27-4437-bf80-ea7b72c844dd\") " Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.026026 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fe20fef-fa27-4437-bf80-ea7b72c844dd-utilities" (OuterVolumeSpecName: "utilities") pod "9fe20fef-fa27-4437-bf80-ea7b72c844dd" (UID: "9fe20fef-fa27-4437-bf80-ea7b72c844dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.045440 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fe20fef-fa27-4437-bf80-ea7b72c844dd-kube-api-access-dhxsw" (OuterVolumeSpecName: "kube-api-access-dhxsw") pod "9fe20fef-fa27-4437-bf80-ea7b72c844dd" (UID: "9fe20fef-fa27-4437-bf80-ea7b72c844dd"). InnerVolumeSpecName "kube-api-access-dhxsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.130595 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fe20fef-fa27-4437-bf80-ea7b72c844dd-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.131305 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhxsw\" (UniqueName: \"kubernetes.io/projected/9fe20fef-fa27-4437-bf80-ea7b72c844dd-kube-api-access-dhxsw\") on node \"crc\" DevicePath \"\"" Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.136996 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fe20fef-fa27-4437-bf80-ea7b72c844dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9fe20fef-fa27-4437-bf80-ea7b72c844dd" (UID: "9fe20fef-fa27-4437-bf80-ea7b72c844dd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.232612 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fe20fef-fa27-4437-bf80-ea7b72c844dd-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.515931 4624 generic.go:334] "Generic (PLEG): container finished" podID="9fe20fef-fa27-4437-bf80-ea7b72c844dd" containerID="ba30b345728c36cef696602b2c8e56a29cf2c08160f5993bbc7a4b2deb025247" exitCode=0 Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.515970 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnmx5" event={"ID":"9fe20fef-fa27-4437-bf80-ea7b72c844dd","Type":"ContainerDied","Data":"ba30b345728c36cef696602b2c8e56a29cf2c08160f5993bbc7a4b2deb025247"} Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.516024 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fnmx5" Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.516032 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnmx5" event={"ID":"9fe20fef-fa27-4437-bf80-ea7b72c844dd","Type":"ContainerDied","Data":"ac8b46474691cd7fd390d6f336eac3308a28eada7bc59b85117c68f94e48f341"} Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.516059 4624 scope.go:117] "RemoveContainer" containerID="ba30b345728c36cef696602b2c8e56a29cf2c08160f5993bbc7a4b2deb025247" Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.541518 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fnmx5"] Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.546668 4624 scope.go:117] "RemoveContainer" containerID="a4a3ce37fef66275314e96a6a67350acc13b2eb06fb45b84acc32acd21dad048" Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.550567 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fnmx5"] Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.573187 4624 scope.go:117] "RemoveContainer" containerID="3a8c185bf40ce735893f72dc345d848333cf9025f7c8ee19e5fe6b80082cd748" Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.607630 4624 scope.go:117] "RemoveContainer" containerID="ba30b345728c36cef696602b2c8e56a29cf2c08160f5993bbc7a4b2deb025247" Oct 08 15:40:45 crc kubenswrapper[4624]: E1008 15:40:45.608170 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba30b345728c36cef696602b2c8e56a29cf2c08160f5993bbc7a4b2deb025247\": container with ID starting with ba30b345728c36cef696602b2c8e56a29cf2c08160f5993bbc7a4b2deb025247 not found: ID does not exist" containerID="ba30b345728c36cef696602b2c8e56a29cf2c08160f5993bbc7a4b2deb025247" Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.608210 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba30b345728c36cef696602b2c8e56a29cf2c08160f5993bbc7a4b2deb025247"} err="failed to get container status \"ba30b345728c36cef696602b2c8e56a29cf2c08160f5993bbc7a4b2deb025247\": rpc error: code = NotFound desc = could not find container \"ba30b345728c36cef696602b2c8e56a29cf2c08160f5993bbc7a4b2deb025247\": container with ID starting with ba30b345728c36cef696602b2c8e56a29cf2c08160f5993bbc7a4b2deb025247 not found: ID does not exist" Oct 08 
15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.608236 4624 scope.go:117] "RemoveContainer" containerID="a4a3ce37fef66275314e96a6a67350acc13b2eb06fb45b84acc32acd21dad048" Oct 08 15:40:45 crc kubenswrapper[4624]: E1008 15:40:45.608593 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4a3ce37fef66275314e96a6a67350acc13b2eb06fb45b84acc32acd21dad048\": container with ID starting with a4a3ce37fef66275314e96a6a67350acc13b2eb06fb45b84acc32acd21dad048 not found: ID does not exist" containerID="a4a3ce37fef66275314e96a6a67350acc13b2eb06fb45b84acc32acd21dad048" Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.608625 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4a3ce37fef66275314e96a6a67350acc13b2eb06fb45b84acc32acd21dad048"} err="failed to get container status \"a4a3ce37fef66275314e96a6a67350acc13b2eb06fb45b84acc32acd21dad048\": rpc error: code = NotFound desc = could not find container \"a4a3ce37fef66275314e96a6a67350acc13b2eb06fb45b84acc32acd21dad048\": container with ID starting with a4a3ce37fef66275314e96a6a67350acc13b2eb06fb45b84acc32acd21dad048 not found: ID does not exist" Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.608669 4624 scope.go:117] "RemoveContainer" containerID="3a8c185bf40ce735893f72dc345d848333cf9025f7c8ee19e5fe6b80082cd748" Oct 08 15:40:45 crc kubenswrapper[4624]: E1008 15:40:45.608989 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a8c185bf40ce735893f72dc345d848333cf9025f7c8ee19e5fe6b80082cd748\": container with ID starting with 3a8c185bf40ce735893f72dc345d848333cf9025f7c8ee19e5fe6b80082cd748 not found: ID does not exist" containerID="3a8c185bf40ce735893f72dc345d848333cf9025f7c8ee19e5fe6b80082cd748" Oct 08 15:40:45 crc kubenswrapper[4624]: I1008 15:40:45.609031 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a8c185bf40ce735893f72dc345d848333cf9025f7c8ee19e5fe6b80082cd748"} err="failed to get container status \"3a8c185bf40ce735893f72dc345d848333cf9025f7c8ee19e5fe6b80082cd748\": rpc error: code = NotFound desc = could not find container \"3a8c185bf40ce735893f72dc345d848333cf9025f7c8ee19e5fe6b80082cd748\": container with ID starting with 3a8c185bf40ce735893f72dc345d848333cf9025f7c8ee19e5fe6b80082cd748 not found: ID does not exist" Oct 08 15:40:47 crc kubenswrapper[4624]: I1008 15:40:47.476829 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fe20fef-fa27-4437-bf80-ea7b72c844dd" path="/var/lib/kubelet/pods/9fe20fef-fa27-4437-bf80-ea7b72c844dd/volumes" Oct 08 15:42:37 crc kubenswrapper[4624]: I1008 15:42:37.665401 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2wtmb"] Oct 08 15:42:37 crc kubenswrapper[4624]: E1008 15:42:37.666564 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fe20fef-fa27-4437-bf80-ea7b72c844dd" containerName="extract-content" Oct 08 15:42:37 crc kubenswrapper[4624]: I1008 15:42:37.666583 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fe20fef-fa27-4437-bf80-ea7b72c844dd" containerName="extract-content" Oct 08 15:42:37 crc kubenswrapper[4624]: E1008 15:42:37.667427 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fe20fef-fa27-4437-bf80-ea7b72c844dd" containerName="registry-server" Oct 08 15:42:37 crc kubenswrapper[4624]: I1008 15:42:37.667448 4624 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9fe20fef-fa27-4437-bf80-ea7b72c844dd" containerName="registry-server" Oct 08 15:42:37 crc kubenswrapper[4624]: E1008 15:42:37.667474 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fe20fef-fa27-4437-bf80-ea7b72c844dd" containerName="extract-utilities" Oct 08 15:42:37 crc kubenswrapper[4624]: I1008 15:42:37.667484 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fe20fef-fa27-4437-bf80-ea7b72c844dd" containerName="extract-utilities" Oct 08 15:42:37 crc kubenswrapper[4624]: I1008 15:42:37.667805 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fe20fef-fa27-4437-bf80-ea7b72c844dd" containerName="registry-server" Oct 08 15:42:37 crc kubenswrapper[4624]: I1008 15:42:37.669588 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2wtmb" Oct 08 15:42:37 crc kubenswrapper[4624]: I1008 15:42:37.753268 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2wtmb"] Oct 08 15:42:37 crc kubenswrapper[4624]: I1008 15:42:37.765997 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe135080-484a-43b0-8721-3ef0bb31298e-utilities\") pod \"redhat-operators-2wtmb\" (UID: \"fe135080-484a-43b0-8721-3ef0bb31298e\") " pod="openshift-marketplace/redhat-operators-2wtmb" Oct 08 15:42:37 crc kubenswrapper[4624]: I1008 15:42:37.766094 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsnvk\" (UniqueName: \"kubernetes.io/projected/fe135080-484a-43b0-8721-3ef0bb31298e-kube-api-access-rsnvk\") pod \"redhat-operators-2wtmb\" (UID: \"fe135080-484a-43b0-8721-3ef0bb31298e\") " pod="openshift-marketplace/redhat-operators-2wtmb" Oct 08 15:42:37 crc kubenswrapper[4624]: I1008 15:42:37.766120 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe135080-484a-43b0-8721-3ef0bb31298e-catalog-content\") pod \"redhat-operators-2wtmb\" (UID: \"fe135080-484a-43b0-8721-3ef0bb31298e\") " pod="openshift-marketplace/redhat-operators-2wtmb" Oct 08 15:42:37 crc kubenswrapper[4624]: I1008 15:42:37.869053 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe135080-484a-43b0-8721-3ef0bb31298e-utilities\") pod \"redhat-operators-2wtmb\" (UID: \"fe135080-484a-43b0-8721-3ef0bb31298e\") " pod="openshift-marketplace/redhat-operators-2wtmb" Oct 08 15:42:37 crc kubenswrapper[4624]: I1008 15:42:37.869124 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsnvk\" (UniqueName: \"kubernetes.io/projected/fe135080-484a-43b0-8721-3ef0bb31298e-kube-api-access-rsnvk\") pod \"redhat-operators-2wtmb\" (UID: \"fe135080-484a-43b0-8721-3ef0bb31298e\") " pod="openshift-marketplace/redhat-operators-2wtmb" Oct 08 15:42:37 crc kubenswrapper[4624]: I1008 15:42:37.869144 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe135080-484a-43b0-8721-3ef0bb31298e-catalog-content\") pod \"redhat-operators-2wtmb\" (UID: \"fe135080-484a-43b0-8721-3ef0bb31298e\") " pod="openshift-marketplace/redhat-operators-2wtmb" Oct 08 15:42:37 crc kubenswrapper[4624]: I1008 15:42:37.869571 
4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe135080-484a-43b0-8721-3ef0bb31298e-catalog-content\") pod \"redhat-operators-2wtmb\" (UID: \"fe135080-484a-43b0-8721-3ef0bb31298e\") " pod="openshift-marketplace/redhat-operators-2wtmb" Oct 08 15:42:37 crc kubenswrapper[4624]: I1008 15:42:37.870003 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe135080-484a-43b0-8721-3ef0bb31298e-utilities\") pod \"redhat-operators-2wtmb\" (UID: \"fe135080-484a-43b0-8721-3ef0bb31298e\") " pod="openshift-marketplace/redhat-operators-2wtmb" Oct 08 15:42:37 crc kubenswrapper[4624]: I1008 15:42:37.890574 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsnvk\" (UniqueName: \"kubernetes.io/projected/fe135080-484a-43b0-8721-3ef0bb31298e-kube-api-access-rsnvk\") pod \"redhat-operators-2wtmb\" (UID: \"fe135080-484a-43b0-8721-3ef0bb31298e\") " pod="openshift-marketplace/redhat-operators-2wtmb" Oct 08 15:42:37 crc kubenswrapper[4624]: I1008 15:42:37.993427 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2wtmb" Oct 08 15:42:38 crc kubenswrapper[4624]: I1008 15:42:38.613332 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2wtmb"] Oct 08 15:42:39 crc kubenswrapper[4624]: I1008 15:42:39.571177 4624 generic.go:334] "Generic (PLEG): container finished" podID="fe135080-484a-43b0-8721-3ef0bb31298e" containerID="e532a9364e969ca92444900e53ca4db0f6de0ff43f3910d589325abff8e60b0e" exitCode=0 Oct 08 15:42:39 crc kubenswrapper[4624]: I1008 15:42:39.571249 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wtmb" event={"ID":"fe135080-484a-43b0-8721-3ef0bb31298e","Type":"ContainerDied","Data":"e532a9364e969ca92444900e53ca4db0f6de0ff43f3910d589325abff8e60b0e"} Oct 08 15:42:39 crc kubenswrapper[4624]: I1008 15:42:39.571592 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wtmb" event={"ID":"fe135080-484a-43b0-8721-3ef0bb31298e","Type":"ContainerStarted","Data":"d9351a120e08ad0579dfcb9b48fe5285d9d665bf79ffa25f80ae993f532171c5"} Oct 08 15:42:39 crc kubenswrapper[4624]: I1008 15:42:39.573256 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 15:42:40 crc kubenswrapper[4624]: I1008 15:42:40.581930 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wtmb" event={"ID":"fe135080-484a-43b0-8721-3ef0bb31298e","Type":"ContainerStarted","Data":"d5d1a1d94eaee4a03a6c4751e5791aef60f119b105c1df369d68bb4453b2e860"} Oct 08 15:42:45 crc kubenswrapper[4624]: I1008 15:42:45.624284 4624 generic.go:334] "Generic (PLEG): container finished" podID="fe135080-484a-43b0-8721-3ef0bb31298e" containerID="d5d1a1d94eaee4a03a6c4751e5791aef60f119b105c1df369d68bb4453b2e860" exitCode=0 Oct 08 15:42:45 crc kubenswrapper[4624]: I1008 15:42:45.624556 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wtmb" event={"ID":"fe135080-484a-43b0-8721-3ef0bb31298e","Type":"ContainerDied","Data":"d5d1a1d94eaee4a03a6c4751e5791aef60f119b105c1df369d68bb4453b2e860"} Oct 08 15:42:49 crc kubenswrapper[4624]: I1008 15:42:49.667060 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-2wtmb" event={"ID":"fe135080-484a-43b0-8721-3ef0bb31298e","Type":"ContainerStarted","Data":"7fcab29213f2bcba16727094c041d274028a8b4be309af501376c5af73bb9b7a"} Oct 08 15:42:49 crc kubenswrapper[4624]: I1008 15:42:49.694993 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2wtmb" podStartSLOduration=3.092776866 podStartE2EDuration="12.694973022s" podCreationTimestamp="2025-10-08 15:42:37 +0000 UTC" firstStartedPulling="2025-10-08 15:42:39.573014332 +0000 UTC m=+4784.723949409" lastFinishedPulling="2025-10-08 15:42:49.175210498 +0000 UTC m=+4794.326145565" observedRunningTime="2025-10-08 15:42:49.691763713 +0000 UTC m=+4794.842698790" watchObservedRunningTime="2025-10-08 15:42:49.694973022 +0000 UTC m=+4794.845908099" Oct 08 15:42:57 crc kubenswrapper[4624]: I1008 15:42:57.997825 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2wtmb" Oct 08 15:42:57 crc kubenswrapper[4624]: I1008 15:42:57.998693 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2wtmb" Oct 08 15:42:59 crc kubenswrapper[4624]: I1008 15:42:59.165517 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2wtmb" podUID="fe135080-484a-43b0-8721-3ef0bb31298e" containerName="registry-server" probeResult="failure" output=< Oct 08 15:42:59 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 15:42:59 crc kubenswrapper[4624]: > Oct 08 15:43:00 crc kubenswrapper[4624]: I1008 15:43:00.076674 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:43:00 crc kubenswrapper[4624]: I1008 15:43:00.077603 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:43:09 crc kubenswrapper[4624]: I1008 15:43:09.062239 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2wtmb" podUID="fe135080-484a-43b0-8721-3ef0bb31298e" containerName="registry-server" probeResult="failure" output=< Oct 08 15:43:09 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 15:43:09 crc kubenswrapper[4624]: > Oct 08 15:43:18 crc kubenswrapper[4624]: I1008 15:43:18.048683 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2wtmb" Oct 08 15:43:18 crc kubenswrapper[4624]: I1008 15:43:18.117891 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2wtmb" Oct 08 15:43:18 crc kubenswrapper[4624]: I1008 15:43:18.301166 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2wtmb"] Oct 08 15:43:19 crc kubenswrapper[4624]: I1008 15:43:19.957805 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2wtmb" 
podUID="fe135080-484a-43b0-8721-3ef0bb31298e" containerName="registry-server" containerID="cri-o://7fcab29213f2bcba16727094c041d274028a8b4be309af501376c5af73bb9b7a" gracePeriod=2 Oct 08 15:43:20 crc kubenswrapper[4624]: I1008 15:43:20.719155 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hrpdx"] Oct 08 15:43:20 crc kubenswrapper[4624]: I1008 15:43:20.722444 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hrpdx" Oct 08 15:43:20 crc kubenswrapper[4624]: I1008 15:43:20.733018 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hrpdx"] Oct 08 15:43:20 crc kubenswrapper[4624]: I1008 15:43:20.766106 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2231eebc-87d1-48f4-98be-29d8ab6bf3f6-catalog-content\") pod \"certified-operators-hrpdx\" (UID: \"2231eebc-87d1-48f4-98be-29d8ab6bf3f6\") " pod="openshift-marketplace/certified-operators-hrpdx" Oct 08 15:43:20 crc kubenswrapper[4624]: I1008 15:43:20.766280 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2231eebc-87d1-48f4-98be-29d8ab6bf3f6-utilities\") pod \"certified-operators-hrpdx\" (UID: \"2231eebc-87d1-48f4-98be-29d8ab6bf3f6\") " pod="openshift-marketplace/certified-operators-hrpdx" Oct 08 15:43:20 crc kubenswrapper[4624]: I1008 15:43:20.766390 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5st2p\" (UniqueName: \"kubernetes.io/projected/2231eebc-87d1-48f4-98be-29d8ab6bf3f6-kube-api-access-5st2p\") pod \"certified-operators-hrpdx\" (UID: \"2231eebc-87d1-48f4-98be-29d8ab6bf3f6\") " pod="openshift-marketplace/certified-operators-hrpdx" Oct 08 15:43:20 crc kubenswrapper[4624]: I1008 15:43:20.868867 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2231eebc-87d1-48f4-98be-29d8ab6bf3f6-catalog-content\") pod \"certified-operators-hrpdx\" (UID: \"2231eebc-87d1-48f4-98be-29d8ab6bf3f6\") " pod="openshift-marketplace/certified-operators-hrpdx" Oct 08 15:43:20 crc kubenswrapper[4624]: I1008 15:43:20.869447 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2231eebc-87d1-48f4-98be-29d8ab6bf3f6-utilities\") pod \"certified-operators-hrpdx\" (UID: \"2231eebc-87d1-48f4-98be-29d8ab6bf3f6\") " pod="openshift-marketplace/certified-operators-hrpdx" Oct 08 15:43:20 crc kubenswrapper[4624]: I1008 15:43:20.869664 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5st2p\" (UniqueName: \"kubernetes.io/projected/2231eebc-87d1-48f4-98be-29d8ab6bf3f6-kube-api-access-5st2p\") pod \"certified-operators-hrpdx\" (UID: \"2231eebc-87d1-48f4-98be-29d8ab6bf3f6\") " pod="openshift-marketplace/certified-operators-hrpdx" Oct 08 15:43:20 crc kubenswrapper[4624]: I1008 15:43:20.870792 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2231eebc-87d1-48f4-98be-29d8ab6bf3f6-catalog-content\") pod \"certified-operators-hrpdx\" (UID: \"2231eebc-87d1-48f4-98be-29d8ab6bf3f6\") " pod="openshift-marketplace/certified-operators-hrpdx" Oct 08 
15:43:20 crc kubenswrapper[4624]: I1008 15:43:20.871926 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2231eebc-87d1-48f4-98be-29d8ab6bf3f6-utilities\") pod \"certified-operators-hrpdx\" (UID: \"2231eebc-87d1-48f4-98be-29d8ab6bf3f6\") " pod="openshift-marketplace/certified-operators-hrpdx" Oct 08 15:43:20 crc kubenswrapper[4624]: I1008 15:43:20.897347 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5st2p\" (UniqueName: \"kubernetes.io/projected/2231eebc-87d1-48f4-98be-29d8ab6bf3f6-kube-api-access-5st2p\") pod \"certified-operators-hrpdx\" (UID: \"2231eebc-87d1-48f4-98be-29d8ab6bf3f6\") " pod="openshift-marketplace/certified-operators-hrpdx" Oct 08 15:43:20 crc kubenswrapper[4624]: I1008 15:43:20.971964 4624 generic.go:334] "Generic (PLEG): container finished" podID="fe135080-484a-43b0-8721-3ef0bb31298e" containerID="7fcab29213f2bcba16727094c041d274028a8b4be309af501376c5af73bb9b7a" exitCode=0 Oct 08 15:43:20 crc kubenswrapper[4624]: I1008 15:43:20.972016 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wtmb" event={"ID":"fe135080-484a-43b0-8721-3ef0bb31298e","Type":"ContainerDied","Data":"7fcab29213f2bcba16727094c041d274028a8b4be309af501376c5af73bb9b7a"} Oct 08 15:43:21 crc kubenswrapper[4624]: I1008 15:43:21.049659 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hrpdx" Oct 08 15:43:21 crc kubenswrapper[4624]: I1008 15:43:21.796368 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2wtmb" Oct 08 15:43:21 crc kubenswrapper[4624]: I1008 15:43:21.879170 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hrpdx"] Oct 08 15:43:21 crc kubenswrapper[4624]: I1008 15:43:21.893294 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe135080-484a-43b0-8721-3ef0bb31298e-catalog-content\") pod \"fe135080-484a-43b0-8721-3ef0bb31298e\" (UID: \"fe135080-484a-43b0-8721-3ef0bb31298e\") " Oct 08 15:43:21 crc kubenswrapper[4624]: I1008 15:43:21.893426 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsnvk\" (UniqueName: \"kubernetes.io/projected/fe135080-484a-43b0-8721-3ef0bb31298e-kube-api-access-rsnvk\") pod \"fe135080-484a-43b0-8721-3ef0bb31298e\" (UID: \"fe135080-484a-43b0-8721-3ef0bb31298e\") " Oct 08 15:43:21 crc kubenswrapper[4624]: I1008 15:43:21.893489 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe135080-484a-43b0-8721-3ef0bb31298e-utilities\") pod \"fe135080-484a-43b0-8721-3ef0bb31298e\" (UID: \"fe135080-484a-43b0-8721-3ef0bb31298e\") " Oct 08 15:43:21 crc kubenswrapper[4624]: I1008 15:43:21.900276 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe135080-484a-43b0-8721-3ef0bb31298e-utilities" (OuterVolumeSpecName: "utilities") pod "fe135080-484a-43b0-8721-3ef0bb31298e" (UID: "fe135080-484a-43b0-8721-3ef0bb31298e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:43:21 crc kubenswrapper[4624]: I1008 15:43:21.907106 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe135080-484a-43b0-8721-3ef0bb31298e-kube-api-access-rsnvk" (OuterVolumeSpecName: "kube-api-access-rsnvk") pod "fe135080-484a-43b0-8721-3ef0bb31298e" (UID: "fe135080-484a-43b0-8721-3ef0bb31298e"). InnerVolumeSpecName "kube-api-access-rsnvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:43:21 crc kubenswrapper[4624]: I1008 15:43:21.996890 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsnvk\" (UniqueName: \"kubernetes.io/projected/fe135080-484a-43b0-8721-3ef0bb31298e-kube-api-access-rsnvk\") on node \"crc\" DevicePath \"\"" Oct 08 15:43:21 crc kubenswrapper[4624]: I1008 15:43:21.996918 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe135080-484a-43b0-8721-3ef0bb31298e-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 15:43:22 crc kubenswrapper[4624]: I1008 15:43:22.002111 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wtmb" event={"ID":"fe135080-484a-43b0-8721-3ef0bb31298e","Type":"ContainerDied","Data":"d9351a120e08ad0579dfcb9b48fe5285d9d665bf79ffa25f80ae993f532171c5"} Oct 08 15:43:22 crc kubenswrapper[4624]: I1008 15:43:22.002163 4624 scope.go:117] "RemoveContainer" containerID="7fcab29213f2bcba16727094c041d274028a8b4be309af501376c5af73bb9b7a" Oct 08 15:43:22 crc kubenswrapper[4624]: I1008 15:43:22.003107 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2wtmb" Oct 08 15:43:22 crc kubenswrapper[4624]: I1008 15:43:22.021152 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrpdx" event={"ID":"2231eebc-87d1-48f4-98be-29d8ab6bf3f6","Type":"ContainerStarted","Data":"35b14799470981b31bff18984a1ff160af7f974f282f57d7b15dfbfd61d3fd55"} Oct 08 15:43:22 crc kubenswrapper[4624]: I1008 15:43:22.057774 4624 scope.go:117] "RemoveContainer" containerID="d5d1a1d94eaee4a03a6c4751e5791aef60f119b105c1df369d68bb4453b2e860" Oct 08 15:43:22 crc kubenswrapper[4624]: I1008 15:43:22.063219 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe135080-484a-43b0-8721-3ef0bb31298e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe135080-484a-43b0-8721-3ef0bb31298e" (UID: "fe135080-484a-43b0-8721-3ef0bb31298e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:43:22 crc kubenswrapper[4624]: I1008 15:43:22.097961 4624 scope.go:117] "RemoveContainer" containerID="e532a9364e969ca92444900e53ca4db0f6de0ff43f3910d589325abff8e60b0e" Oct 08 15:43:22 crc kubenswrapper[4624]: I1008 15:43:22.098196 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe135080-484a-43b0-8721-3ef0bb31298e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 15:43:22 crc kubenswrapper[4624]: I1008 15:43:22.337081 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2wtmb"] Oct 08 15:43:22 crc kubenswrapper[4624]: I1008 15:43:22.345090 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2wtmb"] Oct 08 15:43:23 crc kubenswrapper[4624]: I1008 15:43:23.033974 4624 generic.go:334] "Generic (PLEG): container finished" podID="2231eebc-87d1-48f4-98be-29d8ab6bf3f6" containerID="cd84d5e262c20b1405ccd5c15c8b29af48b65f98e8073ef2c7275606b25ec6ca" exitCode=0 Oct 08 15:43:23 crc kubenswrapper[4624]: I1008 15:43:23.034107 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrpdx" event={"ID":"2231eebc-87d1-48f4-98be-29d8ab6bf3f6","Type":"ContainerDied","Data":"cd84d5e262c20b1405ccd5c15c8b29af48b65f98e8073ef2c7275606b25ec6ca"} Oct 08 15:43:23 crc kubenswrapper[4624]: I1008 15:43:23.476592 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe135080-484a-43b0-8721-3ef0bb31298e" path="/var/lib/kubelet/pods/fe135080-484a-43b0-8721-3ef0bb31298e/volumes" Oct 08 15:43:24 crc kubenswrapper[4624]: I1008 15:43:24.046578 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrpdx" event={"ID":"2231eebc-87d1-48f4-98be-29d8ab6bf3f6","Type":"ContainerStarted","Data":"6864f4d23384a62c90c6ac07c9559370573f4cc1f82b98217261354e23a1f5b5"} Oct 08 15:43:26 crc kubenswrapper[4624]: I1008 15:43:26.069529 4624 generic.go:334] "Generic (PLEG): container finished" podID="2231eebc-87d1-48f4-98be-29d8ab6bf3f6" containerID="6864f4d23384a62c90c6ac07c9559370573f4cc1f82b98217261354e23a1f5b5" exitCode=0 Oct 08 15:43:26 crc kubenswrapper[4624]: I1008 15:43:26.069599 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrpdx" event={"ID":"2231eebc-87d1-48f4-98be-29d8ab6bf3f6","Type":"ContainerDied","Data":"6864f4d23384a62c90c6ac07c9559370573f4cc1f82b98217261354e23a1f5b5"} Oct 08 15:43:27 crc kubenswrapper[4624]: I1008 15:43:27.080905 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrpdx" event={"ID":"2231eebc-87d1-48f4-98be-29d8ab6bf3f6","Type":"ContainerStarted","Data":"9312d499f51977ef86a88cb367ff464cdc022e4d6a4effb2453a89b9b84b189f"} Oct 08 15:43:27 crc kubenswrapper[4624]: I1008 15:43:27.102059 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hrpdx" podStartSLOduration=3.481420385 podStartE2EDuration="7.102042431s" podCreationTimestamp="2025-10-08 15:43:20 +0000 UTC" firstStartedPulling="2025-10-08 15:43:23.036285434 +0000 UTC m=+4828.187220511" lastFinishedPulling="2025-10-08 15:43:26.65690748 +0000 UTC m=+4831.807842557" observedRunningTime="2025-10-08 15:43:27.100755459 +0000 UTC m=+4832.251690556" watchObservedRunningTime="2025-10-08 15:43:27.102042431 +0000 UTC m=+4832.252977508" Oct 08 15:43:30 crc 
Oct 08 15:43:30 crc kubenswrapper[4624]: I1008 15:43:30.077478 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 15:43:30 crc kubenswrapper[4624]: I1008 15:43:30.077891 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 15:43:31 crc kubenswrapper[4624]: I1008 15:43:31.051210 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hrpdx"
Oct 08 15:43:31 crc kubenswrapper[4624]: I1008 15:43:31.051595 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hrpdx"
Oct 08 15:43:31 crc kubenswrapper[4624]: I1008 15:43:31.100626 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hrpdx"
Oct 08 15:43:31 crc kubenswrapper[4624]: I1008 15:43:31.294608 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hrpdx"
Oct 08 15:43:31 crc kubenswrapper[4624]: I1008 15:43:31.348677 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hrpdx"]
Oct 08 15:43:33 crc kubenswrapper[4624]: I1008 15:43:33.264327 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hrpdx" podUID="2231eebc-87d1-48f4-98be-29d8ab6bf3f6" containerName="registry-server" containerID="cri-o://9312d499f51977ef86a88cb367ff464cdc022e4d6a4effb2453a89b9b84b189f" gracePeriod=2
Oct 08 15:43:33 crc kubenswrapper[4624]: I1008 15:43:33.834726 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hrpdx"
Oct 08 15:43:33 crc kubenswrapper[4624]: I1008 15:43:33.917739 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2231eebc-87d1-48f4-98be-29d8ab6bf3f6-utilities\") pod \"2231eebc-87d1-48f4-98be-29d8ab6bf3f6\" (UID: \"2231eebc-87d1-48f4-98be-29d8ab6bf3f6\") "
Oct 08 15:43:33 crc kubenswrapper[4624]: I1008 15:43:33.918123 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2231eebc-87d1-48f4-98be-29d8ab6bf3f6-catalog-content\") pod \"2231eebc-87d1-48f4-98be-29d8ab6bf3f6\" (UID: \"2231eebc-87d1-48f4-98be-29d8ab6bf3f6\") "
Oct 08 15:43:33 crc kubenswrapper[4624]: I1008 15:43:33.918234 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5st2p\" (UniqueName: \"kubernetes.io/projected/2231eebc-87d1-48f4-98be-29d8ab6bf3f6-kube-api-access-5st2p\") pod \"2231eebc-87d1-48f4-98be-29d8ab6bf3f6\" (UID: \"2231eebc-87d1-48f4-98be-29d8ab6bf3f6\") "
Oct 08 15:43:33 crc kubenswrapper[4624]: I1008 15:43:33.918522 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2231eebc-87d1-48f4-98be-29d8ab6bf3f6-utilities" (OuterVolumeSpecName: "utilities") pod "2231eebc-87d1-48f4-98be-29d8ab6bf3f6" (UID: "2231eebc-87d1-48f4-98be-29d8ab6bf3f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 15:43:33 crc kubenswrapper[4624]: I1008 15:43:33.918984 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2231eebc-87d1-48f4-98be-29d8ab6bf3f6-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 15:43:33 crc kubenswrapper[4624]: I1008 15:43:33.957803 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2231eebc-87d1-48f4-98be-29d8ab6bf3f6-kube-api-access-5st2p" (OuterVolumeSpecName: "kube-api-access-5st2p") pod "2231eebc-87d1-48f4-98be-29d8ab6bf3f6" (UID: "2231eebc-87d1-48f4-98be-29d8ab6bf3f6"). InnerVolumeSpecName "kube-api-access-5st2p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 15:43:33 crc kubenswrapper[4624]: I1008 15:43:33.972702 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2231eebc-87d1-48f4-98be-29d8ab6bf3f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2231eebc-87d1-48f4-98be-29d8ab6bf3f6" (UID: "2231eebc-87d1-48f4-98be-29d8ab6bf3f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 15:43:34 crc kubenswrapper[4624]: I1008 15:43:34.021318 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2231eebc-87d1-48f4-98be-29d8ab6bf3f6-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 15:43:34 crc kubenswrapper[4624]: I1008 15:43:34.021371 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5st2p\" (UniqueName: \"kubernetes.io/projected/2231eebc-87d1-48f4-98be-29d8ab6bf3f6-kube-api-access-5st2p\") on node \"crc\" DevicePath \"\""
Oct 08 15:43:34 crc kubenswrapper[4624]: I1008 15:43:34.275044 4624 generic.go:334] "Generic (PLEG): container finished" podID="2231eebc-87d1-48f4-98be-29d8ab6bf3f6" containerID="9312d499f51977ef86a88cb367ff464cdc022e4d6a4effb2453a89b9b84b189f" exitCode=0
Oct 08 15:43:34 crc kubenswrapper[4624]: I1008 15:43:34.275099 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrpdx" event={"ID":"2231eebc-87d1-48f4-98be-29d8ab6bf3f6","Type":"ContainerDied","Data":"9312d499f51977ef86a88cb367ff464cdc022e4d6a4effb2453a89b9b84b189f"}
Oct 08 15:43:34 crc kubenswrapper[4624]: I1008 15:43:34.275165 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrpdx" event={"ID":"2231eebc-87d1-48f4-98be-29d8ab6bf3f6","Type":"ContainerDied","Data":"35b14799470981b31bff18984a1ff160af7f974f282f57d7b15dfbfd61d3fd55"}
Oct 08 15:43:34 crc kubenswrapper[4624]: I1008 15:43:34.275134 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hrpdx"
Oct 08 15:43:34 crc kubenswrapper[4624]: I1008 15:43:34.275198 4624 scope.go:117] "RemoveContainer" containerID="9312d499f51977ef86a88cb367ff464cdc022e4d6a4effb2453a89b9b84b189f"
Oct 08 15:43:34 crc kubenswrapper[4624]: I1008 15:43:34.306300 4624 scope.go:117] "RemoveContainer" containerID="6864f4d23384a62c90c6ac07c9559370573f4cc1f82b98217261354e23a1f5b5"
Oct 08 15:43:34 crc kubenswrapper[4624]: I1008 15:43:34.313274 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hrpdx"]
Oct 08 15:43:34 crc kubenswrapper[4624]: I1008 15:43:34.327116 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hrpdx"]
Oct 08 15:43:34 crc kubenswrapper[4624]: I1008 15:43:34.350910 4624 scope.go:117] "RemoveContainer" containerID="cd84d5e262c20b1405ccd5c15c8b29af48b65f98e8073ef2c7275606b25ec6ca"
Oct 08 15:43:34 crc kubenswrapper[4624]: I1008 15:43:34.423088 4624 scope.go:117] "RemoveContainer" containerID="9312d499f51977ef86a88cb367ff464cdc022e4d6a4effb2453a89b9b84b189f"
Oct 08 15:43:34 crc kubenswrapper[4624]: E1008 15:43:34.424540 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9312d499f51977ef86a88cb367ff464cdc022e4d6a4effb2453a89b9b84b189f\": container with ID starting with 9312d499f51977ef86a88cb367ff464cdc022e4d6a4effb2453a89b9b84b189f not found: ID does not exist" containerID="9312d499f51977ef86a88cb367ff464cdc022e4d6a4effb2453a89b9b84b189f"
Oct 08 15:43:34 crc kubenswrapper[4624]: I1008 15:43:34.424582 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9312d499f51977ef86a88cb367ff464cdc022e4d6a4effb2453a89b9b84b189f"} err="failed to get container status \"9312d499f51977ef86a88cb367ff464cdc022e4d6a4effb2453a89b9b84b189f\": rpc error: code = NotFound desc = could not find container \"9312d499f51977ef86a88cb367ff464cdc022e4d6a4effb2453a89b9b84b189f\": container with ID starting with 9312d499f51977ef86a88cb367ff464cdc022e4d6a4effb2453a89b9b84b189f not found: ID does not exist"
Oct 08 15:43:34 crc kubenswrapper[4624]: I1008 15:43:34.424611 4624 scope.go:117] "RemoveContainer" containerID="6864f4d23384a62c90c6ac07c9559370573f4cc1f82b98217261354e23a1f5b5"
Oct 08 15:43:34 crc kubenswrapper[4624]: E1008 15:43:34.425155 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6864f4d23384a62c90c6ac07c9559370573f4cc1f82b98217261354e23a1f5b5\": container with ID starting with 6864f4d23384a62c90c6ac07c9559370573f4cc1f82b98217261354e23a1f5b5 not found: ID does not exist" containerID="6864f4d23384a62c90c6ac07c9559370573f4cc1f82b98217261354e23a1f5b5"
Oct 08 15:43:34 crc kubenswrapper[4624]: I1008 15:43:34.425188 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6864f4d23384a62c90c6ac07c9559370573f4cc1f82b98217261354e23a1f5b5"} err="failed to get container status \"6864f4d23384a62c90c6ac07c9559370573f4cc1f82b98217261354e23a1f5b5\": rpc error: code = NotFound desc = could not find container \"6864f4d23384a62c90c6ac07c9559370573f4cc1f82b98217261354e23a1f5b5\": container with ID starting with 6864f4d23384a62c90c6ac07c9559370573f4cc1f82b98217261354e23a1f5b5 not found: ID does not exist"
Oct 08 15:43:34 crc kubenswrapper[4624]: I1008 15:43:34.425206 4624 scope.go:117] "RemoveContainer" containerID="cd84d5e262c20b1405ccd5c15c8b29af48b65f98e8073ef2c7275606b25ec6ca"
Oct 08 15:43:34 crc kubenswrapper[4624]: E1008 15:43:34.425547 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd84d5e262c20b1405ccd5c15c8b29af48b65f98e8073ef2c7275606b25ec6ca\": container with ID starting with cd84d5e262c20b1405ccd5c15c8b29af48b65f98e8073ef2c7275606b25ec6ca not found: ID does not exist" containerID="cd84d5e262c20b1405ccd5c15c8b29af48b65f98e8073ef2c7275606b25ec6ca"
Oct 08 15:43:34 crc kubenswrapper[4624]: I1008 15:43:34.425571 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd84d5e262c20b1405ccd5c15c8b29af48b65f98e8073ef2c7275606b25ec6ca"} err="failed to get container status \"cd84d5e262c20b1405ccd5c15c8b29af48b65f98e8073ef2c7275606b25ec6ca\": rpc error: code = NotFound desc = could not find container \"cd84d5e262c20b1405ccd5c15c8b29af48b65f98e8073ef2c7275606b25ec6ca\": container with ID starting with cd84d5e262c20b1405ccd5c15c8b29af48b65f98e8073ef2c7275606b25ec6ca not found: ID does not exist"
Oct 08 15:43:34 crc kubenswrapper[4624]: E1008 15:43:34.519246 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2231eebc_87d1_48f4_98be_29d8ab6bf3f6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2231eebc_87d1_48f4_98be_29d8ab6bf3f6.slice/crio-35b14799470981b31bff18984a1ff160af7f974f282f57d7b15dfbfd61d3fd55\": RecentStats: unable to find data in memory cache]"
path="/var/lib/kubelet/pods/2231eebc-87d1-48f4-98be-29d8ab6bf3f6/volumes" Oct 08 15:44:00 crc kubenswrapper[4624]: I1008 15:44:00.076553 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:44:00 crc kubenswrapper[4624]: I1008 15:44:00.077281 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:44:00 crc kubenswrapper[4624]: I1008 15:44:00.077340 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 15:44:00 crc kubenswrapper[4624]: I1008 15:44:00.078900 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9bd0a8d3164fd7a83510f2ba7ae6c8925f3828524cdd1fe2eacfa02b05ada450"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 15:44:00 crc kubenswrapper[4624]: I1008 15:44:00.078989 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://9bd0a8d3164fd7a83510f2ba7ae6c8925f3828524cdd1fe2eacfa02b05ada450" gracePeriod=600 Oct 08 15:44:00 crc kubenswrapper[4624]: I1008 15:44:00.524070 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="9bd0a8d3164fd7a83510f2ba7ae6c8925f3828524cdd1fe2eacfa02b05ada450" exitCode=0 Oct 08 15:44:00 crc kubenswrapper[4624]: I1008 15:44:00.524182 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"9bd0a8d3164fd7a83510f2ba7ae6c8925f3828524cdd1fe2eacfa02b05ada450"} Oct 08 15:44:00 crc kubenswrapper[4624]: I1008 15:44:00.524370 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f"} Oct 08 15:44:00 crc kubenswrapper[4624]: I1008 15:44:00.524394 4624 scope.go:117] "RemoveContainer" containerID="8309f91b43e7fdec6d220cc149f694557d88065f805528836a0170439494973b" Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.238177 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf"] Oct 08 15:45:00 crc kubenswrapper[4624]: E1008 15:45:00.239068 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2231eebc-87d1-48f4-98be-29d8ab6bf3f6" containerName="extract-content" Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.239081 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="2231eebc-87d1-48f4-98be-29d8ab6bf3f6" containerName="extract-content" Oct 
08 15:45:00 crc kubenswrapper[4624]: E1008 15:45:00.239095 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2231eebc-87d1-48f4-98be-29d8ab6bf3f6" containerName="registry-server" Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.239101 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="2231eebc-87d1-48f4-98be-29d8ab6bf3f6" containerName="registry-server" Oct 08 15:45:00 crc kubenswrapper[4624]: E1008 15:45:00.239114 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe135080-484a-43b0-8721-3ef0bb31298e" containerName="registry-server" Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.239120 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe135080-484a-43b0-8721-3ef0bb31298e" containerName="registry-server" Oct 08 15:45:00 crc kubenswrapper[4624]: E1008 15:45:00.239129 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2231eebc-87d1-48f4-98be-29d8ab6bf3f6" containerName="extract-utilities" Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.239137 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="2231eebc-87d1-48f4-98be-29d8ab6bf3f6" containerName="extract-utilities" Oct 08 15:45:00 crc kubenswrapper[4624]: E1008 15:45:00.239151 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe135080-484a-43b0-8721-3ef0bb31298e" containerName="extract-content" Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.239157 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe135080-484a-43b0-8721-3ef0bb31298e" containerName="extract-content" Oct 08 15:45:00 crc kubenswrapper[4624]: E1008 15:45:00.239179 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe135080-484a-43b0-8721-3ef0bb31298e" containerName="extract-utilities" Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.239185 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe135080-484a-43b0-8721-3ef0bb31298e" containerName="extract-utilities" Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.239394 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe135080-484a-43b0-8721-3ef0bb31298e" containerName="registry-server" Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.239417 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="2231eebc-87d1-48f4-98be-29d8ab6bf3f6" containerName="registry-server" Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.240066 4624 util.go:30] "No sandbox for pod can be found. 
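The machine-config-daemon liveness failures above (and the restart decision that follows them) come from an ordinary HTTP probe against http://127.0.0.1:8798/health. A standalone Go sketch of an equivalent check, assuming a GET with a short timeout where any dial error or a status outside 200-399 counts as failure:

```go
// Reproduce the liveness check by hand; on this node it would print the same
// "connect: connection refused" error seen in the prober output above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy: %s", resp.Status)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```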
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf" Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.256131 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf"] Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.264187 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.264192 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.360320 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a4d4483-9136-4f2d-8f08-c1c1fd36fe57-config-volume\") pod \"collect-profiles-29332305-b22lf\" (UID: \"4a4d4483-9136-4f2d-8f08-c1c1fd36fe57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf" Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.360826 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfbsl\" (UniqueName: \"kubernetes.io/projected/4a4d4483-9136-4f2d-8f08-c1c1fd36fe57-kube-api-access-kfbsl\") pod \"collect-profiles-29332305-b22lf\" (UID: \"4a4d4483-9136-4f2d-8f08-c1c1fd36fe57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf" Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.361059 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a4d4483-9136-4f2d-8f08-c1c1fd36fe57-secret-volume\") pod \"collect-profiles-29332305-b22lf\" (UID: \"4a4d4483-9136-4f2d-8f08-c1c1fd36fe57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf" Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.462542 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a4d4483-9136-4f2d-8f08-c1c1fd36fe57-config-volume\") pod \"collect-profiles-29332305-b22lf\" (UID: \"4a4d4483-9136-4f2d-8f08-c1c1fd36fe57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf" Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.462594 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfbsl\" (UniqueName: \"kubernetes.io/projected/4a4d4483-9136-4f2d-8f08-c1c1fd36fe57-kube-api-access-kfbsl\") pod \"collect-profiles-29332305-b22lf\" (UID: \"4a4d4483-9136-4f2d-8f08-c1c1fd36fe57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf" Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.462670 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a4d4483-9136-4f2d-8f08-c1c1fd36fe57-secret-volume\") pod \"collect-profiles-29332305-b22lf\" (UID: \"4a4d4483-9136-4f2d-8f08-c1c1fd36fe57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf" Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.464134 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a4d4483-9136-4f2d-8f08-c1c1fd36fe57-config-volume\") pod 
\"collect-profiles-29332305-b22lf\" (UID: \"4a4d4483-9136-4f2d-8f08-c1c1fd36fe57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf" Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.472672 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a4d4483-9136-4f2d-8f08-c1c1fd36fe57-secret-volume\") pod \"collect-profiles-29332305-b22lf\" (UID: \"4a4d4483-9136-4f2d-8f08-c1c1fd36fe57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf" Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.488862 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfbsl\" (UniqueName: \"kubernetes.io/projected/4a4d4483-9136-4f2d-8f08-c1c1fd36fe57-kube-api-access-kfbsl\") pod \"collect-profiles-29332305-b22lf\" (UID: \"4a4d4483-9136-4f2d-8f08-c1c1fd36fe57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf" Oct 08 15:45:00 crc kubenswrapper[4624]: I1008 15:45:00.571431 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf" Oct 08 15:45:01 crc kubenswrapper[4624]: I1008 15:45:01.093244 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf"] Oct 08 15:45:02 crc kubenswrapper[4624]: I1008 15:45:02.088358 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf" event={"ID":"4a4d4483-9136-4f2d-8f08-c1c1fd36fe57","Type":"ContainerStarted","Data":"fc861269816710172b0e784fd14e3cc9c4f869fbab5aa0c029f0f5a42e7224d0"} Oct 08 15:45:02 crc kubenswrapper[4624]: I1008 15:45:02.089987 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf" event={"ID":"4a4d4483-9136-4f2d-8f08-c1c1fd36fe57","Type":"ContainerStarted","Data":"0c8a11ca86f7957b1b3fc0a732e9c947ea56e3f81531f06f2ac1cb4a091fdff1"} Oct 08 15:45:02 crc kubenswrapper[4624]: I1008 15:45:02.129537 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf" podStartSLOduration=2.129510693 podStartE2EDuration="2.129510693s" podCreationTimestamp="2025-10-08 15:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 15:45:02.109604008 +0000 UTC m=+4927.260539105" watchObservedRunningTime="2025-10-08 15:45:02.129510693 +0000 UTC m=+4927.280445770" Oct 08 15:45:03 crc kubenswrapper[4624]: I1008 15:45:03.098477 4624 generic.go:334] "Generic (PLEG): container finished" podID="4a4d4483-9136-4f2d-8f08-c1c1fd36fe57" containerID="fc861269816710172b0e784fd14e3cc9c4f869fbab5aa0c029f0f5a42e7224d0" exitCode=0 Oct 08 15:45:03 crc kubenswrapper[4624]: I1008 15:45:03.098524 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf" event={"ID":"4a4d4483-9136-4f2d-8f08-c1c1fd36fe57","Type":"ContainerDied","Data":"fc861269816710172b0e784fd14e3cc9c4f869fbab5aa0c029f0f5a42e7224d0"} Oct 08 15:45:04 crc kubenswrapper[4624]: I1008 15:45:04.520770 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf" Oct 08 15:45:04 crc kubenswrapper[4624]: I1008 15:45:04.652078 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a4d4483-9136-4f2d-8f08-c1c1fd36fe57-secret-volume\") pod \"4a4d4483-9136-4f2d-8f08-c1c1fd36fe57\" (UID: \"4a4d4483-9136-4f2d-8f08-c1c1fd36fe57\") " Oct 08 15:45:04 crc kubenswrapper[4624]: I1008 15:45:04.652194 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a4d4483-9136-4f2d-8f08-c1c1fd36fe57-config-volume\") pod \"4a4d4483-9136-4f2d-8f08-c1c1fd36fe57\" (UID: \"4a4d4483-9136-4f2d-8f08-c1c1fd36fe57\") " Oct 08 15:45:04 crc kubenswrapper[4624]: I1008 15:45:04.652361 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfbsl\" (UniqueName: \"kubernetes.io/projected/4a4d4483-9136-4f2d-8f08-c1c1fd36fe57-kube-api-access-kfbsl\") pod \"4a4d4483-9136-4f2d-8f08-c1c1fd36fe57\" (UID: \"4a4d4483-9136-4f2d-8f08-c1c1fd36fe57\") " Oct 08 15:45:04 crc kubenswrapper[4624]: I1008 15:45:04.653088 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a4d4483-9136-4f2d-8f08-c1c1fd36fe57-config-volume" (OuterVolumeSpecName: "config-volume") pod "4a4d4483-9136-4f2d-8f08-c1c1fd36fe57" (UID: "4a4d4483-9136-4f2d-8f08-c1c1fd36fe57"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 15:45:04 crc kubenswrapper[4624]: I1008 15:45:04.659104 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a4d4483-9136-4f2d-8f08-c1c1fd36fe57-kube-api-access-kfbsl" (OuterVolumeSpecName: "kube-api-access-kfbsl") pod "4a4d4483-9136-4f2d-8f08-c1c1fd36fe57" (UID: "4a4d4483-9136-4f2d-8f08-c1c1fd36fe57"). InnerVolumeSpecName "kube-api-access-kfbsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:45:04 crc kubenswrapper[4624]: I1008 15:45:04.661219 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a4d4483-9136-4f2d-8f08-c1c1fd36fe57-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4a4d4483-9136-4f2d-8f08-c1c1fd36fe57" (UID: "4a4d4483-9136-4f2d-8f08-c1c1fd36fe57"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 15:45:04 crc kubenswrapper[4624]: I1008 15:45:04.756233 4624 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a4d4483-9136-4f2d-8f08-c1c1fd36fe57-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 15:45:04 crc kubenswrapper[4624]: I1008 15:45:04.756268 4624 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a4d4483-9136-4f2d-8f08-c1c1fd36fe57-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 15:45:04 crc kubenswrapper[4624]: I1008 15:45:04.756277 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfbsl\" (UniqueName: \"kubernetes.io/projected/4a4d4483-9136-4f2d-8f08-c1c1fd36fe57-kube-api-access-kfbsl\") on node \"crc\" DevicePath \"\"" Oct 08 15:45:05 crc kubenswrapper[4624]: I1008 15:45:05.118374 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf" event={"ID":"4a4d4483-9136-4f2d-8f08-c1c1fd36fe57","Type":"ContainerDied","Data":"0c8a11ca86f7957b1b3fc0a732e9c947ea56e3f81531f06f2ac1cb4a091fdff1"} Oct 08 15:45:05 crc kubenswrapper[4624]: I1008 15:45:05.118420 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf" Oct 08 15:45:05 crc kubenswrapper[4624]: I1008 15:45:05.118418 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c8a11ca86f7957b1b3fc0a732e9c947ea56e3f81531f06f2ac1cb4a091fdff1" Oct 08 15:45:05 crc kubenswrapper[4624]: I1008 15:45:05.234662 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv"] Oct 08 15:45:05 crc kubenswrapper[4624]: I1008 15:45:05.243964 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332260-8x5jv"] Oct 08 15:45:05 crc kubenswrapper[4624]: I1008 15:45:05.480423 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="384759c6-b8ef-441a-92c4-3cbedbd9359e" path="/var/lib/kubelet/pods/384759c6-b8ef-441a-92c4-3cbedbd9359e/volumes" Oct 08 15:45:44 crc kubenswrapper[4624]: I1008 15:45:44.845293 4624 scope.go:117] "RemoveContainer" containerID="710926f54f87ae9807ed3e3be3d4a3687a70de3185737acbb7da4fec7d85d214" Oct 08 15:46:00 crc kubenswrapper[4624]: I1008 15:46:00.076822 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:46:00 crc kubenswrapper[4624]: I1008 15:46:00.077396 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:46:30 crc kubenswrapper[4624]: I1008 15:46:30.076210 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Oct 08 15:46:30 crc kubenswrapper[4624]: I1008 15:46:30.076210 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 15:46:30 crc kubenswrapper[4624]: I1008 15:46:30.076857 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 15:47:00 crc kubenswrapper[4624]: I1008 15:47:00.076454 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 15:47:00 crc kubenswrapper[4624]: I1008 15:47:00.077107 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 15:47:00 crc kubenswrapper[4624]: I1008 15:47:00.077156 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8"
Oct 08 15:47:00 crc kubenswrapper[4624]: I1008 15:47:00.077957 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 08 15:47:00 crc kubenswrapper[4624]: I1008 15:47:00.078024 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" gracePeriod=600
Oct 08 15:47:00 crc kubenswrapper[4624]: E1008 15:47:00.213380 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:47:01 crc kubenswrapper[4624]: I1008 15:47:01.190182 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" exitCode=0
Oct 08 15:47:01 crc kubenswrapper[4624]: I1008 15:47:01.190225 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f"}
Oct 08 15:47:01 crc kubenswrapper[4624]: I1008 15:47:01.190573 4624 scope.go:117] "RemoveContainer" containerID="9bd0a8d3164fd7a83510f2ba7ae6c8925f3828524cdd1fe2eacfa02b05ada450"
Oct 08 15:47:01 crc kubenswrapper[4624]: I1008 15:47:01.191477 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f"
Oct 08 15:47:01 crc kubenswrapper[4624]: E1008 15:47:01.191769 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:47:14 crc kubenswrapper[4624]: I1008 15:47:14.465697 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f"
Oct 08 15:47:14 crc kubenswrapper[4624]: E1008 15:47:14.466450 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:47:26 crc kubenswrapper[4624]: I1008 15:47:26.466277 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f"
Oct 08 15:47:26 crc kubenswrapper[4624]: E1008 15:47:26.467154 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:47:28 crc kubenswrapper[4624]: I1008 15:47:28.830686 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7wq5j"]
Oct 08 15:47:28 crc kubenswrapper[4624]: E1008 15:47:28.831500 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a4d4483-9136-4f2d-8f08-c1c1fd36fe57" containerName="collect-profiles"
Oct 08 15:47:28 crc kubenswrapper[4624]: I1008 15:47:28.831520 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a4d4483-9136-4f2d-8f08-c1c1fd36fe57" containerName="collect-profiles"
Oct 08 15:47:28 crc kubenswrapper[4624]: I1008 15:47:28.831815 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a4d4483-9136-4f2d-8f08-c1c1fd36fe57" containerName="collect-profiles"
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7wq5j" Oct 08 15:47:28 crc kubenswrapper[4624]: I1008 15:47:28.849688 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7wq5j"] Oct 08 15:47:28 crc kubenswrapper[4624]: I1008 15:47:28.937766 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfll9\" (UniqueName: \"kubernetes.io/projected/68d49159-657b-4925-96ca-cda49e84e919-kube-api-access-xfll9\") pod \"redhat-marketplace-7wq5j\" (UID: \"68d49159-657b-4925-96ca-cda49e84e919\") " pod="openshift-marketplace/redhat-marketplace-7wq5j" Oct 08 15:47:28 crc kubenswrapper[4624]: I1008 15:47:28.938175 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68d49159-657b-4925-96ca-cda49e84e919-utilities\") pod \"redhat-marketplace-7wq5j\" (UID: \"68d49159-657b-4925-96ca-cda49e84e919\") " pod="openshift-marketplace/redhat-marketplace-7wq5j" Oct 08 15:47:28 crc kubenswrapper[4624]: I1008 15:47:28.938562 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68d49159-657b-4925-96ca-cda49e84e919-catalog-content\") pod \"redhat-marketplace-7wq5j\" (UID: \"68d49159-657b-4925-96ca-cda49e84e919\") " pod="openshift-marketplace/redhat-marketplace-7wq5j" Oct 08 15:47:29 crc kubenswrapper[4624]: I1008 15:47:29.040003 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68d49159-657b-4925-96ca-cda49e84e919-utilities\") pod \"redhat-marketplace-7wq5j\" (UID: \"68d49159-657b-4925-96ca-cda49e84e919\") " pod="openshift-marketplace/redhat-marketplace-7wq5j" Oct 08 15:47:29 crc kubenswrapper[4624]: I1008 15:47:29.040125 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68d49159-657b-4925-96ca-cda49e84e919-catalog-content\") pod \"redhat-marketplace-7wq5j\" (UID: \"68d49159-657b-4925-96ca-cda49e84e919\") " pod="openshift-marketplace/redhat-marketplace-7wq5j" Oct 08 15:47:29 crc kubenswrapper[4624]: I1008 15:47:29.040253 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfll9\" (UniqueName: \"kubernetes.io/projected/68d49159-657b-4925-96ca-cda49e84e919-kube-api-access-xfll9\") pod \"redhat-marketplace-7wq5j\" (UID: \"68d49159-657b-4925-96ca-cda49e84e919\") " pod="openshift-marketplace/redhat-marketplace-7wq5j" Oct 08 15:47:29 crc kubenswrapper[4624]: I1008 15:47:29.040672 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68d49159-657b-4925-96ca-cda49e84e919-utilities\") pod \"redhat-marketplace-7wq5j\" (UID: \"68d49159-657b-4925-96ca-cda49e84e919\") " pod="openshift-marketplace/redhat-marketplace-7wq5j" Oct 08 15:47:29 crc kubenswrapper[4624]: I1008 15:47:29.040698 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68d49159-657b-4925-96ca-cda49e84e919-catalog-content\") pod \"redhat-marketplace-7wq5j\" (UID: \"68d49159-657b-4925-96ca-cda49e84e919\") " pod="openshift-marketplace/redhat-marketplace-7wq5j" Oct 08 15:47:29 crc kubenswrapper[4624]: I1008 15:47:29.062391 4624 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xfll9\" (UniqueName: \"kubernetes.io/projected/68d49159-657b-4925-96ca-cda49e84e919-kube-api-access-xfll9\") pod \"redhat-marketplace-7wq5j\" (UID: \"68d49159-657b-4925-96ca-cda49e84e919\") " pod="openshift-marketplace/redhat-marketplace-7wq5j" Oct 08 15:47:29 crc kubenswrapper[4624]: I1008 15:47:29.158169 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7wq5j" Oct 08 15:47:29 crc kubenswrapper[4624]: I1008 15:47:29.662320 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7wq5j"] Oct 08 15:47:30 crc kubenswrapper[4624]: I1008 15:47:30.446858 4624 generic.go:334] "Generic (PLEG): container finished" podID="68d49159-657b-4925-96ca-cda49e84e919" containerID="7416bc446f4edd30216066baa3834386f3460d1f8ed861f0825bcb78b3ca61f4" exitCode=0 Oct 08 15:47:30 crc kubenswrapper[4624]: I1008 15:47:30.447131 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7wq5j" event={"ID":"68d49159-657b-4925-96ca-cda49e84e919","Type":"ContainerDied","Data":"7416bc446f4edd30216066baa3834386f3460d1f8ed861f0825bcb78b3ca61f4"} Oct 08 15:47:30 crc kubenswrapper[4624]: I1008 15:47:30.447159 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7wq5j" event={"ID":"68d49159-657b-4925-96ca-cda49e84e919","Type":"ContainerStarted","Data":"c6546bbfb51cbb33f05825b469dd53f37bc9cf97a629080c0b57e80747e1284f"} Oct 08 15:47:32 crc kubenswrapper[4624]: I1008 15:47:32.465617 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7wq5j" event={"ID":"68d49159-657b-4925-96ca-cda49e84e919","Type":"ContainerStarted","Data":"087784c3f0edb90021a248774a949687f2bff0a90ad6284ca629faa7dc09dcf0"} Oct 08 15:47:33 crc kubenswrapper[4624]: I1008 15:47:33.482676 4624 generic.go:334] "Generic (PLEG): container finished" podID="68d49159-657b-4925-96ca-cda49e84e919" containerID="087784c3f0edb90021a248774a949687f2bff0a90ad6284ca629faa7dc09dcf0" exitCode=0 Oct 08 15:47:33 crc kubenswrapper[4624]: I1008 15:47:33.482759 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7wq5j" event={"ID":"68d49159-657b-4925-96ca-cda49e84e919","Type":"ContainerDied","Data":"087784c3f0edb90021a248774a949687f2bff0a90ad6284ca629faa7dc09dcf0"} Oct 08 15:47:36 crc kubenswrapper[4624]: I1008 15:47:36.512452 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7wq5j" event={"ID":"68d49159-657b-4925-96ca-cda49e84e919","Type":"ContainerStarted","Data":"da6203cfb6bcef3b0b739a2ecb64637f3db9309fde3d0b9c068465afa59247a4"} Oct 08 15:47:36 crc kubenswrapper[4624]: I1008 15:47:36.534158 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7wq5j" podStartSLOduration=4.250436565 podStartE2EDuration="8.534140558s" podCreationTimestamp="2025-10-08 15:47:28 +0000 UTC" firstStartedPulling="2025-10-08 15:47:30.44937179 +0000 UTC m=+5075.600306867" lastFinishedPulling="2025-10-08 15:47:34.733075773 +0000 UTC m=+5079.884010860" observedRunningTime="2025-10-08 15:47:36.527372545 +0000 UTC m=+5081.678307622" watchObservedRunningTime="2025-10-08 15:47:36.534140558 +0000 UTC m=+5081.685075635" Oct 08 15:47:39 crc kubenswrapper[4624]: I1008 15:47:39.159011 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-7wq5j" Oct 08 15:47:39 crc kubenswrapper[4624]: I1008 15:47:39.159988 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7wq5j" Oct 08 15:47:39 crc kubenswrapper[4624]: I1008 15:47:39.211349 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7wq5j" Oct 08 15:47:40 crc kubenswrapper[4624]: I1008 15:47:40.600683 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7wq5j" Oct 08 15:47:40 crc kubenswrapper[4624]: I1008 15:47:40.654674 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7wq5j"] Oct 08 15:47:41 crc kubenswrapper[4624]: I1008 15:47:41.466761 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" Oct 08 15:47:41 crc kubenswrapper[4624]: E1008 15:47:41.467179 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:47:42 crc kubenswrapper[4624]: I1008 15:47:42.566545 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7wq5j" podUID="68d49159-657b-4925-96ca-cda49e84e919" containerName="registry-server" containerID="cri-o://da6203cfb6bcef3b0b739a2ecb64637f3db9309fde3d0b9c068465afa59247a4" gracePeriod=2 Oct 08 15:47:43 crc kubenswrapper[4624]: I1008 15:47:43.345773 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7wq5j" Oct 08 15:47:43 crc kubenswrapper[4624]: I1008 15:47:43.432174 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfll9\" (UniqueName: \"kubernetes.io/projected/68d49159-657b-4925-96ca-cda49e84e919-kube-api-access-xfll9\") pod \"68d49159-657b-4925-96ca-cda49e84e919\" (UID: \"68d49159-657b-4925-96ca-cda49e84e919\") " Oct 08 15:47:43 crc kubenswrapper[4624]: I1008 15:47:43.432610 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68d49159-657b-4925-96ca-cda49e84e919-catalog-content\") pod \"68d49159-657b-4925-96ca-cda49e84e919\" (UID: \"68d49159-657b-4925-96ca-cda49e84e919\") " Oct 08 15:47:43 crc kubenswrapper[4624]: I1008 15:47:43.432840 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68d49159-657b-4925-96ca-cda49e84e919-utilities\") pod \"68d49159-657b-4925-96ca-cda49e84e919\" (UID: \"68d49159-657b-4925-96ca-cda49e84e919\") " Oct 08 15:47:43 crc kubenswrapper[4624]: I1008 15:47:43.434133 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68d49159-657b-4925-96ca-cda49e84e919-utilities" (OuterVolumeSpecName: "utilities") pod "68d49159-657b-4925-96ca-cda49e84e919" (UID: "68d49159-657b-4925-96ca-cda49e84e919"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:47:43 crc kubenswrapper[4624]: I1008 15:47:43.447099 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68d49159-657b-4925-96ca-cda49e84e919-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68d49159-657b-4925-96ca-cda49e84e919" (UID: "68d49159-657b-4925-96ca-cda49e84e919"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:47:43 crc kubenswrapper[4624]: I1008 15:47:43.535806 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68d49159-657b-4925-96ca-cda49e84e919-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 15:47:43 crc kubenswrapper[4624]: I1008 15:47:43.535847 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68d49159-657b-4925-96ca-cda49e84e919-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 15:47:43 crc kubenswrapper[4624]: I1008 15:47:43.578432 4624 generic.go:334] "Generic (PLEG): container finished" podID="68d49159-657b-4925-96ca-cda49e84e919" containerID="da6203cfb6bcef3b0b739a2ecb64637f3db9309fde3d0b9c068465afa59247a4" exitCode=0 Oct 08 15:47:43 crc kubenswrapper[4624]: I1008 15:47:43.578508 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7wq5j" event={"ID":"68d49159-657b-4925-96ca-cda49e84e919","Type":"ContainerDied","Data":"da6203cfb6bcef3b0b739a2ecb64637f3db9309fde3d0b9c068465afa59247a4"} Oct 08 15:47:43 crc kubenswrapper[4624]: I1008 15:47:43.578547 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7wq5j" Oct 08 15:47:43 crc kubenswrapper[4624]: I1008 15:47:43.578571 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7wq5j" event={"ID":"68d49159-657b-4925-96ca-cda49e84e919","Type":"ContainerDied","Data":"c6546bbfb51cbb33f05825b469dd53f37bc9cf97a629080c0b57e80747e1284f"} Oct 08 15:47:43 crc kubenswrapper[4624]: I1008 15:47:43.578592 4624 scope.go:117] "RemoveContainer" containerID="da6203cfb6bcef3b0b739a2ecb64637f3db9309fde3d0b9c068465afa59247a4" Oct 08 15:47:43 crc kubenswrapper[4624]: I1008 15:47:43.602943 4624 scope.go:117] "RemoveContainer" containerID="087784c3f0edb90021a248774a949687f2bff0a90ad6284ca629faa7dc09dcf0" Oct 08 15:47:43 crc kubenswrapper[4624]: I1008 15:47:43.925419 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68d49159-657b-4925-96ca-cda49e84e919-kube-api-access-xfll9" (OuterVolumeSpecName: "kube-api-access-xfll9") pod "68d49159-657b-4925-96ca-cda49e84e919" (UID: "68d49159-657b-4925-96ca-cda49e84e919"). InnerVolumeSpecName "kube-api-access-xfll9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:47:43 crc kubenswrapper[4624]: I1008 15:47:43.944198 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfll9\" (UniqueName: \"kubernetes.io/projected/68d49159-657b-4925-96ca-cda49e84e919-kube-api-access-xfll9\") on node \"crc\" DevicePath \"\"" Oct 08 15:47:43 crc kubenswrapper[4624]: I1008 15:47:43.968742 4624 scope.go:117] "RemoveContainer" containerID="7416bc446f4edd30216066baa3834386f3460d1f8ed861f0825bcb78b3ca61f4" Oct 08 15:47:44 crc kubenswrapper[4624]: I1008 15:47:44.075757 4624 scope.go:117] "RemoveContainer" containerID="da6203cfb6bcef3b0b739a2ecb64637f3db9309fde3d0b9c068465afa59247a4" Oct 08 15:47:44 crc kubenswrapper[4624]: E1008 15:47:44.076308 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da6203cfb6bcef3b0b739a2ecb64637f3db9309fde3d0b9c068465afa59247a4\": container with ID starting with da6203cfb6bcef3b0b739a2ecb64637f3db9309fde3d0b9c068465afa59247a4 not found: ID does not exist" containerID="da6203cfb6bcef3b0b739a2ecb64637f3db9309fde3d0b9c068465afa59247a4" Oct 08 15:47:44 crc kubenswrapper[4624]: I1008 15:47:44.076361 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da6203cfb6bcef3b0b739a2ecb64637f3db9309fde3d0b9c068465afa59247a4"} err="failed to get container status \"da6203cfb6bcef3b0b739a2ecb64637f3db9309fde3d0b9c068465afa59247a4\": rpc error: code = NotFound desc = could not find container \"da6203cfb6bcef3b0b739a2ecb64637f3db9309fde3d0b9c068465afa59247a4\": container with ID starting with da6203cfb6bcef3b0b739a2ecb64637f3db9309fde3d0b9c068465afa59247a4 not found: ID does not exist" Oct 08 15:47:44 crc kubenswrapper[4624]: I1008 15:47:44.076396 4624 scope.go:117] "RemoveContainer" containerID="087784c3f0edb90021a248774a949687f2bff0a90ad6284ca629faa7dc09dcf0" Oct 08 15:47:44 crc kubenswrapper[4624]: E1008 15:47:44.076913 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"087784c3f0edb90021a248774a949687f2bff0a90ad6284ca629faa7dc09dcf0\": container with ID starting with 087784c3f0edb90021a248774a949687f2bff0a90ad6284ca629faa7dc09dcf0 not found: ID does not exist" containerID="087784c3f0edb90021a248774a949687f2bff0a90ad6284ca629faa7dc09dcf0" Oct 08 15:47:44 crc kubenswrapper[4624]: I1008 15:47:44.077022 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"087784c3f0edb90021a248774a949687f2bff0a90ad6284ca629faa7dc09dcf0"} err="failed to get container status \"087784c3f0edb90021a248774a949687f2bff0a90ad6284ca629faa7dc09dcf0\": rpc error: code = NotFound desc = could not find container \"087784c3f0edb90021a248774a949687f2bff0a90ad6284ca629faa7dc09dcf0\": container with ID starting with 087784c3f0edb90021a248774a949687f2bff0a90ad6284ca629faa7dc09dcf0 not found: ID does not exist" Oct 08 15:47:44 crc kubenswrapper[4624]: I1008 15:47:44.077084 4624 scope.go:117] "RemoveContainer" containerID="7416bc446f4edd30216066baa3834386f3460d1f8ed861f0825bcb78b3ca61f4" Oct 08 15:47:44 crc kubenswrapper[4624]: E1008 15:47:44.078672 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7416bc446f4edd30216066baa3834386f3460d1f8ed861f0825bcb78b3ca61f4\": container with ID starting with 7416bc446f4edd30216066baa3834386f3460d1f8ed861f0825bcb78b3ca61f4 not found: ID does not 
exist" containerID="7416bc446f4edd30216066baa3834386f3460d1f8ed861f0825bcb78b3ca61f4" Oct 08 15:47:44 crc kubenswrapper[4624]: I1008 15:47:44.078704 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7416bc446f4edd30216066baa3834386f3460d1f8ed861f0825bcb78b3ca61f4"} err="failed to get container status \"7416bc446f4edd30216066baa3834386f3460d1f8ed861f0825bcb78b3ca61f4\": rpc error: code = NotFound desc = could not find container \"7416bc446f4edd30216066baa3834386f3460d1f8ed861f0825bcb78b3ca61f4\": container with ID starting with 7416bc446f4edd30216066baa3834386f3460d1f8ed861f0825bcb78b3ca61f4 not found: ID does not exist" Oct 08 15:47:44 crc kubenswrapper[4624]: I1008 15:47:44.238863 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7wq5j"] Oct 08 15:47:44 crc kubenswrapper[4624]: I1008 15:47:44.252554 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7wq5j"] Oct 08 15:47:45 crc kubenswrapper[4624]: I1008 15:47:45.484086 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68d49159-657b-4925-96ca-cda49e84e919" path="/var/lib/kubelet/pods/68d49159-657b-4925-96ca-cda49e84e919/volumes" Oct 08 15:47:56 crc kubenswrapper[4624]: I1008 15:47:56.465698 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" Oct 08 15:47:56 crc kubenswrapper[4624]: E1008 15:47:56.466680 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:48:07 crc kubenswrapper[4624]: I1008 15:48:07.466453 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" Oct 08 15:48:07 crc kubenswrapper[4624]: E1008 15:48:07.467466 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:48:18 crc kubenswrapper[4624]: I1008 15:48:18.466672 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" Oct 08 15:48:18 crc kubenswrapper[4624]: E1008 15:48:18.468856 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:48:29 crc kubenswrapper[4624]: I1008 15:48:29.465972 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" Oct 08 15:48:29 crc kubenswrapper[4624]: E1008 15:48:29.466985 4624 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:48:44 crc kubenswrapper[4624]: I1008 15:48:44.465966 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" Oct 08 15:48:44 crc kubenswrapper[4624]: E1008 15:48:44.467840 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:48:57 crc kubenswrapper[4624]: I1008 15:48:57.465534 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" Oct 08 15:48:57 crc kubenswrapper[4624]: E1008 15:48:57.466454 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:49:09 crc kubenswrapper[4624]: I1008 15:49:09.465975 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" Oct 08 15:49:09 crc kubenswrapper[4624]: E1008 15:49:09.467693 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:49:23 crc kubenswrapper[4624]: I1008 15:49:23.469114 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" Oct 08 15:49:23 crc kubenswrapper[4624]: E1008 15:49:23.470261 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:49:37 crc kubenswrapper[4624]: I1008 15:49:37.465654 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" Oct 08 15:49:37 crc kubenswrapper[4624]: E1008 15:49:37.466301 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:49:48 crc kubenswrapper[4624]: I1008 15:49:48.465211 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" Oct 08 15:49:48 crc kubenswrapper[4624]: E1008 15:49:48.466317 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:50:02 crc kubenswrapper[4624]: I1008 15:50:02.466063 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" Oct 08 15:50:02 crc kubenswrapper[4624]: E1008 15:50:02.466878 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:50:14 crc kubenswrapper[4624]: I1008 15:50:14.466394 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" Oct 08 15:50:14 crc kubenswrapper[4624]: E1008 15:50:14.467199 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:50:26 crc kubenswrapper[4624]: I1008 15:50:26.467037 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" Oct 08 15:50:26 crc kubenswrapper[4624]: E1008 15:50:26.467824 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:50:41 crc kubenswrapper[4624]: I1008 15:50:41.466114 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" Oct 08 15:50:41 crc kubenswrapper[4624]: E1008 15:50:41.467082 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:50:54 crc kubenswrapper[4624]: I1008 15:50:54.466420 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" Oct 08 15:50:54 crc kubenswrapper[4624]: E1008 15:50:54.467490 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:51:07 crc kubenswrapper[4624]: I1008 15:51:07.465737 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" Oct 08 15:51:07 crc kubenswrapper[4624]: E1008 15:51:07.467222 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:51:22 crc kubenswrapper[4624]: I1008 15:51:22.466216 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" Oct 08 15:51:22 crc kubenswrapper[4624]: E1008 15:51:22.466971 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:51:37 crc kubenswrapper[4624]: I1008 15:51:37.466202 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" Oct 08 15:51:37 crc kubenswrapper[4624]: E1008 15:51:37.467033 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:51:37 crc kubenswrapper[4624]: I1008 15:51:37.608769 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wxb9n"] Oct 08 15:51:37 crc kubenswrapper[4624]: E1008 15:51:37.609520 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68d49159-657b-4925-96ca-cda49e84e919" containerName="extract-content" Oct 08 15:51:37 crc kubenswrapper[4624]: I1008 15:51:37.609540 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="68d49159-657b-4925-96ca-cda49e84e919" containerName="extract-content" Oct 08 15:51:37 crc kubenswrapper[4624]: E1008 15:51:37.609562 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68d49159-657b-4925-96ca-cda49e84e919" 
containerName="extract-utilities" Oct 08 15:51:37 crc kubenswrapper[4624]: I1008 15:51:37.609571 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="68d49159-657b-4925-96ca-cda49e84e919" containerName="extract-utilities" Oct 08 15:51:37 crc kubenswrapper[4624]: E1008 15:51:37.609591 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68d49159-657b-4925-96ca-cda49e84e919" containerName="registry-server" Oct 08 15:51:37 crc kubenswrapper[4624]: I1008 15:51:37.609598 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="68d49159-657b-4925-96ca-cda49e84e919" containerName="registry-server" Oct 08 15:51:37 crc kubenswrapper[4624]: I1008 15:51:37.609928 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="68d49159-657b-4925-96ca-cda49e84e919" containerName="registry-server" Oct 08 15:51:37 crc kubenswrapper[4624]: I1008 15:51:37.612165 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxb9n" Oct 08 15:51:37 crc kubenswrapper[4624]: I1008 15:51:37.630736 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wxb9n"] Oct 08 15:51:37 crc kubenswrapper[4624]: I1008 15:51:37.715033 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ecc6ad1-a516-4c05-a563-3b1a91cba9ad-catalog-content\") pod \"community-operators-wxb9n\" (UID: \"7ecc6ad1-a516-4c05-a563-3b1a91cba9ad\") " pod="openshift-marketplace/community-operators-wxb9n" Oct 08 15:51:37 crc kubenswrapper[4624]: I1008 15:51:37.715129 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ecc6ad1-a516-4c05-a563-3b1a91cba9ad-utilities\") pod \"community-operators-wxb9n\" (UID: \"7ecc6ad1-a516-4c05-a563-3b1a91cba9ad\") " pod="openshift-marketplace/community-operators-wxb9n" Oct 08 15:51:37 crc kubenswrapper[4624]: I1008 15:51:37.715173 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf2jq\" (UniqueName: \"kubernetes.io/projected/7ecc6ad1-a516-4c05-a563-3b1a91cba9ad-kube-api-access-zf2jq\") pod \"community-operators-wxb9n\" (UID: \"7ecc6ad1-a516-4c05-a563-3b1a91cba9ad\") " pod="openshift-marketplace/community-operators-wxb9n" Oct 08 15:51:37 crc kubenswrapper[4624]: I1008 15:51:37.817370 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ecc6ad1-a516-4c05-a563-3b1a91cba9ad-catalog-content\") pod \"community-operators-wxb9n\" (UID: \"7ecc6ad1-a516-4c05-a563-3b1a91cba9ad\") " pod="openshift-marketplace/community-operators-wxb9n" Oct 08 15:51:37 crc kubenswrapper[4624]: I1008 15:51:37.817464 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ecc6ad1-a516-4c05-a563-3b1a91cba9ad-utilities\") pod \"community-operators-wxb9n\" (UID: \"7ecc6ad1-a516-4c05-a563-3b1a91cba9ad\") " pod="openshift-marketplace/community-operators-wxb9n" Oct 08 15:51:37 crc kubenswrapper[4624]: I1008 15:51:37.817508 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf2jq\" (UniqueName: \"kubernetes.io/projected/7ecc6ad1-a516-4c05-a563-3b1a91cba9ad-kube-api-access-zf2jq\") pod \"community-operators-wxb9n\" (UID: 
\"7ecc6ad1-a516-4c05-a563-3b1a91cba9ad\") " pod="openshift-marketplace/community-operators-wxb9n" Oct 08 15:51:37 crc kubenswrapper[4624]: I1008 15:51:37.818286 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ecc6ad1-a516-4c05-a563-3b1a91cba9ad-catalog-content\") pod \"community-operators-wxb9n\" (UID: \"7ecc6ad1-a516-4c05-a563-3b1a91cba9ad\") " pod="openshift-marketplace/community-operators-wxb9n" Oct 08 15:51:37 crc kubenswrapper[4624]: I1008 15:51:37.818331 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ecc6ad1-a516-4c05-a563-3b1a91cba9ad-utilities\") pod \"community-operators-wxb9n\" (UID: \"7ecc6ad1-a516-4c05-a563-3b1a91cba9ad\") " pod="openshift-marketplace/community-operators-wxb9n" Oct 08 15:51:37 crc kubenswrapper[4624]: I1008 15:51:37.843511 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf2jq\" (UniqueName: \"kubernetes.io/projected/7ecc6ad1-a516-4c05-a563-3b1a91cba9ad-kube-api-access-zf2jq\") pod \"community-operators-wxb9n\" (UID: \"7ecc6ad1-a516-4c05-a563-3b1a91cba9ad\") " pod="openshift-marketplace/community-operators-wxb9n" Oct 08 15:51:37 crc kubenswrapper[4624]: I1008 15:51:37.932426 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxb9n" Oct 08 15:51:38 crc kubenswrapper[4624]: I1008 15:51:38.618145 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wxb9n"] Oct 08 15:51:38 crc kubenswrapper[4624]: I1008 15:51:38.724949 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxb9n" event={"ID":"7ecc6ad1-a516-4c05-a563-3b1a91cba9ad","Type":"ContainerStarted","Data":"7ef1c091eb1a2271ccf9de1cf77d6356c2b9f73732745d30987348dafdd5f1a3"} Oct 08 15:51:39 crc kubenswrapper[4624]: I1008 15:51:39.738359 4624 generic.go:334] "Generic (PLEG): container finished" podID="7ecc6ad1-a516-4c05-a563-3b1a91cba9ad" containerID="ecb4593166f9f60673464128045ab7624dd0b44298fc270cbc12c0309e7ab495" exitCode=0 Oct 08 15:51:39 crc kubenswrapper[4624]: I1008 15:51:39.738472 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxb9n" event={"ID":"7ecc6ad1-a516-4c05-a563-3b1a91cba9ad","Type":"ContainerDied","Data":"ecb4593166f9f60673464128045ab7624dd0b44298fc270cbc12c0309e7ab495"} Oct 08 15:51:39 crc kubenswrapper[4624]: I1008 15:51:39.742616 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 15:51:41 crc kubenswrapper[4624]: I1008 15:51:41.771501 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxb9n" event={"ID":"7ecc6ad1-a516-4c05-a563-3b1a91cba9ad","Type":"ContainerStarted","Data":"668e7cced9584ca1a0f6ec1583aacbdcc91648da68403dda6245bd6fd1669e8c"} Oct 08 15:51:42 crc kubenswrapper[4624]: I1008 15:51:42.785121 4624 generic.go:334] "Generic (PLEG): container finished" podID="7ecc6ad1-a516-4c05-a563-3b1a91cba9ad" containerID="668e7cced9584ca1a0f6ec1583aacbdcc91648da68403dda6245bd6fd1669e8c" exitCode=0 Oct 08 15:51:42 crc kubenswrapper[4624]: I1008 15:51:42.785205 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxb9n" 
event={"ID":"7ecc6ad1-a516-4c05-a563-3b1a91cba9ad","Type":"ContainerDied","Data":"668e7cced9584ca1a0f6ec1583aacbdcc91648da68403dda6245bd6fd1669e8c"} Oct 08 15:51:43 crc kubenswrapper[4624]: I1008 15:51:43.804549 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxb9n" event={"ID":"7ecc6ad1-a516-4c05-a563-3b1a91cba9ad","Type":"ContainerStarted","Data":"27766daaa629c2f8cbc9ac8c3fc076515726072b094207eccec542ab52676862"} Oct 08 15:51:43 crc kubenswrapper[4624]: I1008 15:51:43.838211 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wxb9n" podStartSLOduration=3.252175574 podStartE2EDuration="6.838165116s" podCreationTimestamp="2025-10-08 15:51:37 +0000 UTC" firstStartedPulling="2025-10-08 15:51:39.742393581 +0000 UTC m=+5324.893328658" lastFinishedPulling="2025-10-08 15:51:43.328383123 +0000 UTC m=+5328.479318200" observedRunningTime="2025-10-08 15:51:43.826117284 +0000 UTC m=+5328.977052361" watchObservedRunningTime="2025-10-08 15:51:43.838165116 +0000 UTC m=+5328.989100193" Oct 08 15:51:47 crc kubenswrapper[4624]: I1008 15:51:47.932700 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wxb9n" Oct 08 15:51:47 crc kubenswrapper[4624]: I1008 15:51:47.933286 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wxb9n" Oct 08 15:51:48 crc kubenswrapper[4624]: I1008 15:51:48.986339 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wxb9n" podUID="7ecc6ad1-a516-4c05-a563-3b1a91cba9ad" containerName="registry-server" probeResult="failure" output=< Oct 08 15:51:48 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 15:51:48 crc kubenswrapper[4624]: > Oct 08 15:51:50 crc kubenswrapper[4624]: I1008 15:51:50.467512 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" Oct 08 15:51:50 crc kubenswrapper[4624]: E1008 15:51:50.468386 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:51:57 crc kubenswrapper[4624]: I1008 15:51:57.981131 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wxb9n" Oct 08 15:51:58 crc kubenswrapper[4624]: I1008 15:51:58.047373 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wxb9n" Oct 08 15:51:58 crc kubenswrapper[4624]: I1008 15:51:58.216812 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wxb9n"] Oct 08 15:51:59 crc kubenswrapper[4624]: I1008 15:51:59.982328 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wxb9n" podUID="7ecc6ad1-a516-4c05-a563-3b1a91cba9ad" containerName="registry-server" containerID="cri-o://27766daaa629c2f8cbc9ac8c3fc076515726072b094207eccec542ab52676862" gracePeriod=2 Oct 08 15:52:00 crc kubenswrapper[4624]: I1008 
Oct 08 15:51:50 crc kubenswrapper[4624]: I1008 15:51:50.467512 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f"
Oct 08 15:51:50 crc kubenswrapper[4624]: E1008 15:51:50.468386 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 15:51:57 crc kubenswrapper[4624]: I1008 15:51:57.981131 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wxb9n"
Oct 08 15:51:58 crc kubenswrapper[4624]: I1008 15:51:58.047373 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wxb9n"
Oct 08 15:51:58 crc kubenswrapper[4624]: I1008 15:51:58.216812 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wxb9n"]
Oct 08 15:51:59 crc kubenswrapper[4624]: I1008 15:51:59.982328 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wxb9n" podUID="7ecc6ad1-a516-4c05-a563-3b1a91cba9ad" containerName="registry-server" containerID="cri-o://27766daaa629c2f8cbc9ac8c3fc076515726072b094207eccec542ab52676862" gracePeriod=2
Oct 08 15:52:00 crc kubenswrapper[4624]: I1008 15:52:00.632957 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxb9n"
Oct 08 15:52:00 crc kubenswrapper[4624]: I1008 15:52:00.758823 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ecc6ad1-a516-4c05-a563-3b1a91cba9ad-catalog-content\") pod \"7ecc6ad1-a516-4c05-a563-3b1a91cba9ad\" (UID: \"7ecc6ad1-a516-4c05-a563-3b1a91cba9ad\") "
Oct 08 15:52:00 crc kubenswrapper[4624]: I1008 15:52:00.758868 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf2jq\" (UniqueName: \"kubernetes.io/projected/7ecc6ad1-a516-4c05-a563-3b1a91cba9ad-kube-api-access-zf2jq\") pod \"7ecc6ad1-a516-4c05-a563-3b1a91cba9ad\" (UID: \"7ecc6ad1-a516-4c05-a563-3b1a91cba9ad\") "
Oct 08 15:52:00 crc kubenswrapper[4624]: I1008 15:52:00.759009 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ecc6ad1-a516-4c05-a563-3b1a91cba9ad-utilities\") pod \"7ecc6ad1-a516-4c05-a563-3b1a91cba9ad\" (UID: \"7ecc6ad1-a516-4c05-a563-3b1a91cba9ad\") "
Oct 08 15:52:00 crc kubenswrapper[4624]: I1008 15:52:00.759563 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ecc6ad1-a516-4c05-a563-3b1a91cba9ad-utilities" (OuterVolumeSpecName: "utilities") pod "7ecc6ad1-a516-4c05-a563-3b1a91cba9ad" (UID: "7ecc6ad1-a516-4c05-a563-3b1a91cba9ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 15:52:00 crc kubenswrapper[4624]: I1008 15:52:00.771306 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ecc6ad1-a516-4c05-a563-3b1a91cba9ad-kube-api-access-zf2jq" (OuterVolumeSpecName: "kube-api-access-zf2jq") pod "7ecc6ad1-a516-4c05-a563-3b1a91cba9ad" (UID: "7ecc6ad1-a516-4c05-a563-3b1a91cba9ad"). InnerVolumeSpecName "kube-api-access-zf2jq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 15:52:00 crc kubenswrapper[4624]: I1008 15:52:00.810616 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ecc6ad1-a516-4c05-a563-3b1a91cba9ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ecc6ad1-a516-4c05-a563-3b1a91cba9ad" (UID: "7ecc6ad1-a516-4c05-a563-3b1a91cba9ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 15:52:00 crc kubenswrapper[4624]: I1008 15:52:00.861428 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ecc6ad1-a516-4c05-a563-3b1a91cba9ad-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 15:52:00 crc kubenswrapper[4624]: I1008 15:52:00.861478 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf2jq\" (UniqueName: \"kubernetes.io/projected/7ecc6ad1-a516-4c05-a563-3b1a91cba9ad-kube-api-access-zf2jq\") on node \"crc\" DevicePath \"\""
Oct 08 15:52:00 crc kubenswrapper[4624]: I1008 15:52:00.861492 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ecc6ad1-a516-4c05-a563-3b1a91cba9ad-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 15:52:00 crc kubenswrapper[4624]: I1008 15:52:00.995222 4624 generic.go:334] "Generic (PLEG): container finished" podID="7ecc6ad1-a516-4c05-a563-3b1a91cba9ad" containerID="27766daaa629c2f8cbc9ac8c3fc076515726072b094207eccec542ab52676862" exitCode=0
Oct 08 15:52:00 crc kubenswrapper[4624]: I1008 15:52:00.995298 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxb9n" event={"ID":"7ecc6ad1-a516-4c05-a563-3b1a91cba9ad","Type":"ContainerDied","Data":"27766daaa629c2f8cbc9ac8c3fc076515726072b094207eccec542ab52676862"}
Oct 08 15:52:00 crc kubenswrapper[4624]: I1008 15:52:00.995354 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxb9n" event={"ID":"7ecc6ad1-a516-4c05-a563-3b1a91cba9ad","Type":"ContainerDied","Data":"7ef1c091eb1a2271ccf9de1cf77d6356c2b9f73732745d30987348dafdd5f1a3"}
Oct 08 15:52:00 crc kubenswrapper[4624]: I1008 15:52:00.995380 4624 scope.go:117] "RemoveContainer" containerID="27766daaa629c2f8cbc9ac8c3fc076515726072b094207eccec542ab52676862"
Oct 08 15:52:00 crc kubenswrapper[4624]: I1008 15:52:00.996332 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxb9n"
Need to start a new one" pod="openshift-marketplace/community-operators-wxb9n" Oct 08 15:52:01 crc kubenswrapper[4624]: I1008 15:52:01.030095 4624 scope.go:117] "RemoveContainer" containerID="668e7cced9584ca1a0f6ec1583aacbdcc91648da68403dda6245bd6fd1669e8c" Oct 08 15:52:01 crc kubenswrapper[4624]: I1008 15:52:01.034723 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wxb9n"] Oct 08 15:52:01 crc kubenswrapper[4624]: I1008 15:52:01.047651 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wxb9n"] Oct 08 15:52:01 crc kubenswrapper[4624]: I1008 15:52:01.058065 4624 scope.go:117] "RemoveContainer" containerID="ecb4593166f9f60673464128045ab7624dd0b44298fc270cbc12c0309e7ab495" Oct 08 15:52:01 crc kubenswrapper[4624]: I1008 15:52:01.112691 4624 scope.go:117] "RemoveContainer" containerID="27766daaa629c2f8cbc9ac8c3fc076515726072b094207eccec542ab52676862" Oct 08 15:52:01 crc kubenswrapper[4624]: E1008 15:52:01.113198 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27766daaa629c2f8cbc9ac8c3fc076515726072b094207eccec542ab52676862\": container with ID starting with 27766daaa629c2f8cbc9ac8c3fc076515726072b094207eccec542ab52676862 not found: ID does not exist" containerID="27766daaa629c2f8cbc9ac8c3fc076515726072b094207eccec542ab52676862" Oct 08 15:52:01 crc kubenswrapper[4624]: I1008 15:52:01.113314 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27766daaa629c2f8cbc9ac8c3fc076515726072b094207eccec542ab52676862"} err="failed to get container status \"27766daaa629c2f8cbc9ac8c3fc076515726072b094207eccec542ab52676862\": rpc error: code = NotFound desc = could not find container \"27766daaa629c2f8cbc9ac8c3fc076515726072b094207eccec542ab52676862\": container with ID starting with 27766daaa629c2f8cbc9ac8c3fc076515726072b094207eccec542ab52676862 not found: ID does not exist" Oct 08 15:52:01 crc kubenswrapper[4624]: I1008 15:52:01.113405 4624 scope.go:117] "RemoveContainer" containerID="668e7cced9584ca1a0f6ec1583aacbdcc91648da68403dda6245bd6fd1669e8c" Oct 08 15:52:01 crc kubenswrapper[4624]: E1008 15:52:01.113878 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"668e7cced9584ca1a0f6ec1583aacbdcc91648da68403dda6245bd6fd1669e8c\": container with ID starting with 668e7cced9584ca1a0f6ec1583aacbdcc91648da68403dda6245bd6fd1669e8c not found: ID does not exist" containerID="668e7cced9584ca1a0f6ec1583aacbdcc91648da68403dda6245bd6fd1669e8c" Oct 08 15:52:01 crc kubenswrapper[4624]: I1008 15:52:01.113912 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"668e7cced9584ca1a0f6ec1583aacbdcc91648da68403dda6245bd6fd1669e8c"} err="failed to get container status \"668e7cced9584ca1a0f6ec1583aacbdcc91648da68403dda6245bd6fd1669e8c\": rpc error: code = NotFound desc = could not find container \"668e7cced9584ca1a0f6ec1583aacbdcc91648da68403dda6245bd6fd1669e8c\": container with ID starting with 668e7cced9584ca1a0f6ec1583aacbdcc91648da68403dda6245bd6fd1669e8c not found: ID does not exist" Oct 08 15:52:01 crc kubenswrapper[4624]: I1008 15:52:01.113938 4624 scope.go:117] "RemoveContainer" containerID="ecb4593166f9f60673464128045ab7624dd0b44298fc270cbc12c0309e7ab495" Oct 08 15:52:01 crc kubenswrapper[4624]: E1008 15:52:01.114281 4624 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ecb4593166f9f60673464128045ab7624dd0b44298fc270cbc12c0309e7ab495\": container with ID starting with ecb4593166f9f60673464128045ab7624dd0b44298fc270cbc12c0309e7ab495 not found: ID does not exist" containerID="ecb4593166f9f60673464128045ab7624dd0b44298fc270cbc12c0309e7ab495" Oct 08 15:52:01 crc kubenswrapper[4624]: I1008 15:52:01.114347 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecb4593166f9f60673464128045ab7624dd0b44298fc270cbc12c0309e7ab495"} err="failed to get container status \"ecb4593166f9f60673464128045ab7624dd0b44298fc270cbc12c0309e7ab495\": rpc error: code = NotFound desc = could not find container \"ecb4593166f9f60673464128045ab7624dd0b44298fc270cbc12c0309e7ab495\": container with ID starting with ecb4593166f9f60673464128045ab7624dd0b44298fc270cbc12c0309e7ab495 not found: ID does not exist" Oct 08 15:52:01 crc kubenswrapper[4624]: I1008 15:52:01.478671 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ecc6ad1-a516-4c05-a563-3b1a91cba9ad" path="/var/lib/kubelet/pods/7ecc6ad1-a516-4c05-a563-3b1a91cba9ad/volumes" Oct 08 15:52:03 crc kubenswrapper[4624]: I1008 15:52:03.467526 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" Oct 08 15:52:04 crc kubenswrapper[4624]: I1008 15:52:04.031571 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"7eb266fb63276346e90c3d262062f36a0e1fc23f0d47a193b62ecac589dab28d"} Oct 08 15:54:05 crc kubenswrapper[4624]: I1008 15:54:05.528274 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xhd2r"] Oct 08 15:54:05 crc kubenswrapper[4624]: E1008 15:54:05.529695 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ecc6ad1-a516-4c05-a563-3b1a91cba9ad" containerName="extract-utilities" Oct 08 15:54:05 crc kubenswrapper[4624]: I1008 15:54:05.529715 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ecc6ad1-a516-4c05-a563-3b1a91cba9ad" containerName="extract-utilities" Oct 08 15:54:05 crc kubenswrapper[4624]: E1008 15:54:05.529757 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ecc6ad1-a516-4c05-a563-3b1a91cba9ad" containerName="registry-server" Oct 08 15:54:05 crc kubenswrapper[4624]: I1008 15:54:05.529762 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ecc6ad1-a516-4c05-a563-3b1a91cba9ad" containerName="registry-server" Oct 08 15:54:05 crc kubenswrapper[4624]: E1008 15:54:05.529775 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ecc6ad1-a516-4c05-a563-3b1a91cba9ad" containerName="extract-content" Oct 08 15:54:05 crc kubenswrapper[4624]: I1008 15:54:05.529781 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ecc6ad1-a516-4c05-a563-3b1a91cba9ad" containerName="extract-content" Oct 08 15:54:05 crc kubenswrapper[4624]: I1008 15:54:05.529967 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ecc6ad1-a516-4c05-a563-3b1a91cba9ad" containerName="registry-server" Oct 08 15:54:05 crc kubenswrapper[4624]: I1008 15:54:05.531230 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xhd2r" Oct 08 15:54:05 crc kubenswrapper[4624]: I1008 15:54:05.547856 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xhd2r"] Oct 08 15:54:05 crc kubenswrapper[4624]: I1008 15:54:05.607823 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be52dd92-2cda-49fd-ad64-bb5036470cfd-utilities\") pod \"certified-operators-xhd2r\" (UID: \"be52dd92-2cda-49fd-ad64-bb5036470cfd\") " pod="openshift-marketplace/certified-operators-xhd2r" Oct 08 15:54:05 crc kubenswrapper[4624]: I1008 15:54:05.607976 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be52dd92-2cda-49fd-ad64-bb5036470cfd-catalog-content\") pod \"certified-operators-xhd2r\" (UID: \"be52dd92-2cda-49fd-ad64-bb5036470cfd\") " pod="openshift-marketplace/certified-operators-xhd2r" Oct 08 15:54:05 crc kubenswrapper[4624]: I1008 15:54:05.608003 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l94hx\" (UniqueName: \"kubernetes.io/projected/be52dd92-2cda-49fd-ad64-bb5036470cfd-kube-api-access-l94hx\") pod \"certified-operators-xhd2r\" (UID: \"be52dd92-2cda-49fd-ad64-bb5036470cfd\") " pod="openshift-marketplace/certified-operators-xhd2r" Oct 08 15:54:05 crc kubenswrapper[4624]: I1008 15:54:05.709918 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be52dd92-2cda-49fd-ad64-bb5036470cfd-utilities\") pod \"certified-operators-xhd2r\" (UID: \"be52dd92-2cda-49fd-ad64-bb5036470cfd\") " pod="openshift-marketplace/certified-operators-xhd2r" Oct 08 15:54:05 crc kubenswrapper[4624]: I1008 15:54:05.709998 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be52dd92-2cda-49fd-ad64-bb5036470cfd-catalog-content\") pod \"certified-operators-xhd2r\" (UID: \"be52dd92-2cda-49fd-ad64-bb5036470cfd\") " pod="openshift-marketplace/certified-operators-xhd2r" Oct 08 15:54:05 crc kubenswrapper[4624]: I1008 15:54:05.710021 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l94hx\" (UniqueName: \"kubernetes.io/projected/be52dd92-2cda-49fd-ad64-bb5036470cfd-kube-api-access-l94hx\") pod \"certified-operators-xhd2r\" (UID: \"be52dd92-2cda-49fd-ad64-bb5036470cfd\") " pod="openshift-marketplace/certified-operators-xhd2r" Oct 08 15:54:05 crc kubenswrapper[4624]: I1008 15:54:05.710838 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be52dd92-2cda-49fd-ad64-bb5036470cfd-utilities\") pod \"certified-operators-xhd2r\" (UID: \"be52dd92-2cda-49fd-ad64-bb5036470cfd\") " pod="openshift-marketplace/certified-operators-xhd2r" Oct 08 15:54:05 crc kubenswrapper[4624]: I1008 15:54:05.711053 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be52dd92-2cda-49fd-ad64-bb5036470cfd-catalog-content\") pod \"certified-operators-xhd2r\" (UID: \"be52dd92-2cda-49fd-ad64-bb5036470cfd\") " pod="openshift-marketplace/certified-operators-xhd2r" Oct 08 15:54:05 crc kubenswrapper[4624]: I1008 15:54:05.738282 4624 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-l94hx\" (UniqueName: \"kubernetes.io/projected/be52dd92-2cda-49fd-ad64-bb5036470cfd-kube-api-access-l94hx\") pod \"certified-operators-xhd2r\" (UID: \"be52dd92-2cda-49fd-ad64-bb5036470cfd\") " pod="openshift-marketplace/certified-operators-xhd2r" Oct 08 15:54:05 crc kubenswrapper[4624]: I1008 15:54:05.851454 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xhd2r" Oct 08 15:54:06 crc kubenswrapper[4624]: I1008 15:54:06.425851 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xhd2r"] Oct 08 15:54:07 crc kubenswrapper[4624]: I1008 15:54:07.206673 4624 generic.go:334] "Generic (PLEG): container finished" podID="be52dd92-2cda-49fd-ad64-bb5036470cfd" containerID="a5387585d28c0bd1616fc29d64ad91e28d74946af9e15b9c044ae885b53cb922" exitCode=0 Oct 08 15:54:07 crc kubenswrapper[4624]: I1008 15:54:07.206784 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhd2r" event={"ID":"be52dd92-2cda-49fd-ad64-bb5036470cfd","Type":"ContainerDied","Data":"a5387585d28c0bd1616fc29d64ad91e28d74946af9e15b9c044ae885b53cb922"} Oct 08 15:54:07 crc kubenswrapper[4624]: I1008 15:54:07.206987 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhd2r" event={"ID":"be52dd92-2cda-49fd-ad64-bb5036470cfd","Type":"ContainerStarted","Data":"3ec87cc96cdec67403a6ab6d52d58a27168022f703eaee1ff5f6ed9d79c2b4ae"} Oct 08 15:54:09 crc kubenswrapper[4624]: I1008 15:54:09.231944 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhd2r" event={"ID":"be52dd92-2cda-49fd-ad64-bb5036470cfd","Type":"ContainerStarted","Data":"fe4b22118e3ea66b94d897b880e075373fba14f0361dfe9eda7f67d166c24925"} Oct 08 15:54:10 crc kubenswrapper[4624]: I1008 15:54:10.247272 4624 generic.go:334] "Generic (PLEG): container finished" podID="be52dd92-2cda-49fd-ad64-bb5036470cfd" containerID="fe4b22118e3ea66b94d897b880e075373fba14f0361dfe9eda7f67d166c24925" exitCode=0 Oct 08 15:54:10 crc kubenswrapper[4624]: I1008 15:54:10.247312 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhd2r" event={"ID":"be52dd92-2cda-49fd-ad64-bb5036470cfd","Type":"ContainerDied","Data":"fe4b22118e3ea66b94d897b880e075373fba14f0361dfe9eda7f67d166c24925"} Oct 08 15:54:11 crc kubenswrapper[4624]: I1008 15:54:11.262055 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhd2r" event={"ID":"be52dd92-2cda-49fd-ad64-bb5036470cfd","Type":"ContainerStarted","Data":"bef912a5515df08d4466c0464da1c1e2904f91f1fb97eddc73a05164901265e4"} Oct 08 15:54:11 crc kubenswrapper[4624]: I1008 15:54:11.285807 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xhd2r" podStartSLOduration=2.804950601 podStartE2EDuration="6.285762351s" podCreationTimestamp="2025-10-08 15:54:05 +0000 UTC" firstStartedPulling="2025-10-08 15:54:07.210354053 +0000 UTC m=+5472.361289130" lastFinishedPulling="2025-10-08 15:54:10.691165803 +0000 UTC m=+5475.842100880" observedRunningTime="2025-10-08 15:54:11.284478418 +0000 UTC m=+5476.435413515" watchObservedRunningTime="2025-10-08 15:54:11.285762351 +0000 UTC m=+5476.436697428" Oct 08 15:54:14 crc kubenswrapper[4624]: I1008 15:54:14.658135 4624 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-pndpt"] Oct 08 15:54:14 crc kubenswrapper[4624]: I1008 15:54:14.660850 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pndpt" Oct 08 15:54:14 crc kubenswrapper[4624]: I1008 15:54:14.688110 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pndpt"] Oct 08 15:54:14 crc kubenswrapper[4624]: I1008 15:54:14.707352 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2671b836-bfa6-4461-a53a-12f321ad98c0-catalog-content\") pod \"redhat-operators-pndpt\" (UID: \"2671b836-bfa6-4461-a53a-12f321ad98c0\") " pod="openshift-marketplace/redhat-operators-pndpt" Oct 08 15:54:14 crc kubenswrapper[4624]: I1008 15:54:14.707471 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shg2t\" (UniqueName: \"kubernetes.io/projected/2671b836-bfa6-4461-a53a-12f321ad98c0-kube-api-access-shg2t\") pod \"redhat-operators-pndpt\" (UID: \"2671b836-bfa6-4461-a53a-12f321ad98c0\") " pod="openshift-marketplace/redhat-operators-pndpt" Oct 08 15:54:14 crc kubenswrapper[4624]: I1008 15:54:14.708561 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2671b836-bfa6-4461-a53a-12f321ad98c0-utilities\") pod \"redhat-operators-pndpt\" (UID: \"2671b836-bfa6-4461-a53a-12f321ad98c0\") " pod="openshift-marketplace/redhat-operators-pndpt" Oct 08 15:54:14 crc kubenswrapper[4624]: I1008 15:54:14.810710 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shg2t\" (UniqueName: \"kubernetes.io/projected/2671b836-bfa6-4461-a53a-12f321ad98c0-kube-api-access-shg2t\") pod \"redhat-operators-pndpt\" (UID: \"2671b836-bfa6-4461-a53a-12f321ad98c0\") " pod="openshift-marketplace/redhat-operators-pndpt" Oct 08 15:54:14 crc kubenswrapper[4624]: I1008 15:54:14.810782 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2671b836-bfa6-4461-a53a-12f321ad98c0-utilities\") pod \"redhat-operators-pndpt\" (UID: \"2671b836-bfa6-4461-a53a-12f321ad98c0\") " pod="openshift-marketplace/redhat-operators-pndpt" Oct 08 15:54:14 crc kubenswrapper[4624]: I1008 15:54:14.810883 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2671b836-bfa6-4461-a53a-12f321ad98c0-catalog-content\") pod \"redhat-operators-pndpt\" (UID: \"2671b836-bfa6-4461-a53a-12f321ad98c0\") " pod="openshift-marketplace/redhat-operators-pndpt" Oct 08 15:54:14 crc kubenswrapper[4624]: I1008 15:54:14.811416 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2671b836-bfa6-4461-a53a-12f321ad98c0-catalog-content\") pod \"redhat-operators-pndpt\" (UID: \"2671b836-bfa6-4461-a53a-12f321ad98c0\") " pod="openshift-marketplace/redhat-operators-pndpt" Oct 08 15:54:14 crc kubenswrapper[4624]: I1008 15:54:14.811509 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2671b836-bfa6-4461-a53a-12f321ad98c0-utilities\") pod \"redhat-operators-pndpt\" (UID: \"2671b836-bfa6-4461-a53a-12f321ad98c0\") " 
pod="openshift-marketplace/redhat-operators-pndpt" Oct 08 15:54:14 crc kubenswrapper[4624]: I1008 15:54:14.834231 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shg2t\" (UniqueName: \"kubernetes.io/projected/2671b836-bfa6-4461-a53a-12f321ad98c0-kube-api-access-shg2t\") pod \"redhat-operators-pndpt\" (UID: \"2671b836-bfa6-4461-a53a-12f321ad98c0\") " pod="openshift-marketplace/redhat-operators-pndpt" Oct 08 15:54:14 crc kubenswrapper[4624]: I1008 15:54:14.980874 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pndpt" Oct 08 15:54:15 crc kubenswrapper[4624]: I1008 15:54:15.512127 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pndpt"] Oct 08 15:54:15 crc kubenswrapper[4624]: I1008 15:54:15.852620 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xhd2r" Oct 08 15:54:15 crc kubenswrapper[4624]: I1008 15:54:15.853014 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xhd2r" Oct 08 15:54:16 crc kubenswrapper[4624]: I1008 15:54:16.310394 4624 generic.go:334] "Generic (PLEG): container finished" podID="2671b836-bfa6-4461-a53a-12f321ad98c0" containerID="bc0cea7495bd1d2318a38a8156f6ff04df1cc4c6ca5f966abc2493e641c42db3" exitCode=0 Oct 08 15:54:16 crc kubenswrapper[4624]: I1008 15:54:16.310442 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pndpt" event={"ID":"2671b836-bfa6-4461-a53a-12f321ad98c0","Type":"ContainerDied","Data":"bc0cea7495bd1d2318a38a8156f6ff04df1cc4c6ca5f966abc2493e641c42db3"} Oct 08 15:54:16 crc kubenswrapper[4624]: I1008 15:54:16.310471 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pndpt" event={"ID":"2671b836-bfa6-4461-a53a-12f321ad98c0","Type":"ContainerStarted","Data":"727a6eccbcec888bde77eed878ea74272cbc87bfa965b1e837d74a38287c9e62"} Oct 08 15:54:16 crc kubenswrapper[4624]: I1008 15:54:16.902085 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-xhd2r" podUID="be52dd92-2cda-49fd-ad64-bb5036470cfd" containerName="registry-server" probeResult="failure" output=< Oct 08 15:54:16 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 15:54:16 crc kubenswrapper[4624]: > Oct 08 15:54:17 crc kubenswrapper[4624]: I1008 15:54:17.324077 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pndpt" event={"ID":"2671b836-bfa6-4461-a53a-12f321ad98c0","Type":"ContainerStarted","Data":"0ee12dbc4c474867b150c5d5bb26096d5c5c0ce18417cfbef8113d41e72ff77d"} Oct 08 15:54:21 crc kubenswrapper[4624]: I1008 15:54:21.362220 4624 generic.go:334] "Generic (PLEG): container finished" podID="2671b836-bfa6-4461-a53a-12f321ad98c0" containerID="0ee12dbc4c474867b150c5d5bb26096d5c5c0ce18417cfbef8113d41e72ff77d" exitCode=0 Oct 08 15:54:21 crc kubenswrapper[4624]: I1008 15:54:21.362755 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pndpt" event={"ID":"2671b836-bfa6-4461-a53a-12f321ad98c0","Type":"ContainerDied","Data":"0ee12dbc4c474867b150c5d5bb26096d5c5c0ce18417cfbef8113d41e72ff77d"} Oct 08 15:54:22 crc kubenswrapper[4624]: I1008 15:54:22.376859 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-pndpt" event={"ID":"2671b836-bfa6-4461-a53a-12f321ad98c0","Type":"ContainerStarted","Data":"937840e7ae66f070fa1847f9c602164e7f3a96de1da9c960fc0d10e50e67bee9"} Oct 08 15:54:22 crc kubenswrapper[4624]: I1008 15:54:22.410381 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pndpt" podStartSLOduration=2.951952738 podStartE2EDuration="8.410335945s" podCreationTimestamp="2025-10-08 15:54:14 +0000 UTC" firstStartedPulling="2025-10-08 15:54:16.312420987 +0000 UTC m=+5481.463356064" lastFinishedPulling="2025-10-08 15:54:21.770804194 +0000 UTC m=+5486.921739271" observedRunningTime="2025-10-08 15:54:22.395749047 +0000 UTC m=+5487.546684144" watchObservedRunningTime="2025-10-08 15:54:22.410335945 +0000 UTC m=+5487.561271022" Oct 08 15:54:24 crc kubenswrapper[4624]: I1008 15:54:24.981900 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pndpt" Oct 08 15:54:24 crc kubenswrapper[4624]: I1008 15:54:24.983384 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pndpt" Oct 08 15:54:25 crc kubenswrapper[4624]: I1008 15:54:25.902108 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xhd2r" Oct 08 15:54:25 crc kubenswrapper[4624]: I1008 15:54:25.953976 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xhd2r" Oct 08 15:54:26 crc kubenswrapper[4624]: I1008 15:54:26.026300 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pndpt" podUID="2671b836-bfa6-4461-a53a-12f321ad98c0" containerName="registry-server" probeResult="failure" output=< Oct 08 15:54:26 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 15:54:26 crc kubenswrapper[4624]: > Oct 08 15:54:26 crc kubenswrapper[4624]: I1008 15:54:26.138234 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xhd2r"] Oct 08 15:54:27 crc kubenswrapper[4624]: I1008 15:54:27.416357 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xhd2r" podUID="be52dd92-2cda-49fd-ad64-bb5036470cfd" containerName="registry-server" containerID="cri-o://bef912a5515df08d4466c0464da1c1e2904f91f1fb97eddc73a05164901265e4" gracePeriod=2 Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.032035 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xhd2r" Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.083889 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be52dd92-2cda-49fd-ad64-bb5036470cfd-utilities\") pod \"be52dd92-2cda-49fd-ad64-bb5036470cfd\" (UID: \"be52dd92-2cda-49fd-ad64-bb5036470cfd\") " Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.084119 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be52dd92-2cda-49fd-ad64-bb5036470cfd-catalog-content\") pod \"be52dd92-2cda-49fd-ad64-bb5036470cfd\" (UID: \"be52dd92-2cda-49fd-ad64-bb5036470cfd\") " Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.084296 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l94hx\" (UniqueName: \"kubernetes.io/projected/be52dd92-2cda-49fd-ad64-bb5036470cfd-kube-api-access-l94hx\") pod \"be52dd92-2cda-49fd-ad64-bb5036470cfd\" (UID: \"be52dd92-2cda-49fd-ad64-bb5036470cfd\") " Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.084854 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be52dd92-2cda-49fd-ad64-bb5036470cfd-utilities" (OuterVolumeSpecName: "utilities") pod "be52dd92-2cda-49fd-ad64-bb5036470cfd" (UID: "be52dd92-2cda-49fd-ad64-bb5036470cfd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.099215 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be52dd92-2cda-49fd-ad64-bb5036470cfd-kube-api-access-l94hx" (OuterVolumeSpecName: "kube-api-access-l94hx") pod "be52dd92-2cda-49fd-ad64-bb5036470cfd" (UID: "be52dd92-2cda-49fd-ad64-bb5036470cfd"). InnerVolumeSpecName "kube-api-access-l94hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.141886 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be52dd92-2cda-49fd-ad64-bb5036470cfd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be52dd92-2cda-49fd-ad64-bb5036470cfd" (UID: "be52dd92-2cda-49fd-ad64-bb5036470cfd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.186354 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be52dd92-2cda-49fd-ad64-bb5036470cfd-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.186384 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be52dd92-2cda-49fd-ad64-bb5036470cfd-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.186396 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l94hx\" (UniqueName: \"kubernetes.io/projected/be52dd92-2cda-49fd-ad64-bb5036470cfd-kube-api-access-l94hx\") on node \"crc\" DevicePath \"\"" Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.430901 4624 generic.go:334] "Generic (PLEG): container finished" podID="be52dd92-2cda-49fd-ad64-bb5036470cfd" containerID="bef912a5515df08d4466c0464da1c1e2904f91f1fb97eddc73a05164901265e4" exitCode=0 Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.431008 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xhd2r" Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.431020 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhd2r" event={"ID":"be52dd92-2cda-49fd-ad64-bb5036470cfd","Type":"ContainerDied","Data":"bef912a5515df08d4466c0464da1c1e2904f91f1fb97eddc73a05164901265e4"} Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.431091 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhd2r" event={"ID":"be52dd92-2cda-49fd-ad64-bb5036470cfd","Type":"ContainerDied","Data":"3ec87cc96cdec67403a6ab6d52d58a27168022f703eaee1ff5f6ed9d79c2b4ae"} Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.431117 4624 scope.go:117] "RemoveContainer" containerID="bef912a5515df08d4466c0464da1c1e2904f91f1fb97eddc73a05164901265e4" Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.471142 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xhd2r"] Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.480903 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xhd2r"] Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.494490 4624 scope.go:117] "RemoveContainer" containerID="fe4b22118e3ea66b94d897b880e075373fba14f0361dfe9eda7f67d166c24925" Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.523652 4624 scope.go:117] "RemoveContainer" containerID="a5387585d28c0bd1616fc29d64ad91e28d74946af9e15b9c044ae885b53cb922" Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.567969 4624 scope.go:117] "RemoveContainer" containerID="bef912a5515df08d4466c0464da1c1e2904f91f1fb97eddc73a05164901265e4" Oct 08 15:54:28 crc kubenswrapper[4624]: E1008 15:54:28.569273 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bef912a5515df08d4466c0464da1c1e2904f91f1fb97eddc73a05164901265e4\": container with ID starting with bef912a5515df08d4466c0464da1c1e2904f91f1fb97eddc73a05164901265e4 not found: ID does not exist" containerID="bef912a5515df08d4466c0464da1c1e2904f91f1fb97eddc73a05164901265e4" Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.569341 
4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bef912a5515df08d4466c0464da1c1e2904f91f1fb97eddc73a05164901265e4"} err="failed to get container status \"bef912a5515df08d4466c0464da1c1e2904f91f1fb97eddc73a05164901265e4\": rpc error: code = NotFound desc = could not find container \"bef912a5515df08d4466c0464da1c1e2904f91f1fb97eddc73a05164901265e4\": container with ID starting with bef912a5515df08d4466c0464da1c1e2904f91f1fb97eddc73a05164901265e4 not found: ID does not exist" Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.569375 4624 scope.go:117] "RemoveContainer" containerID="fe4b22118e3ea66b94d897b880e075373fba14f0361dfe9eda7f67d166c24925" Oct 08 15:54:28 crc kubenswrapper[4624]: E1008 15:54:28.569996 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe4b22118e3ea66b94d897b880e075373fba14f0361dfe9eda7f67d166c24925\": container with ID starting with fe4b22118e3ea66b94d897b880e075373fba14f0361dfe9eda7f67d166c24925 not found: ID does not exist" containerID="fe4b22118e3ea66b94d897b880e075373fba14f0361dfe9eda7f67d166c24925" Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.570032 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4b22118e3ea66b94d897b880e075373fba14f0361dfe9eda7f67d166c24925"} err="failed to get container status \"fe4b22118e3ea66b94d897b880e075373fba14f0361dfe9eda7f67d166c24925\": rpc error: code = NotFound desc = could not find container \"fe4b22118e3ea66b94d897b880e075373fba14f0361dfe9eda7f67d166c24925\": container with ID starting with fe4b22118e3ea66b94d897b880e075373fba14f0361dfe9eda7f67d166c24925 not found: ID does not exist" Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.570056 4624 scope.go:117] "RemoveContainer" containerID="a5387585d28c0bd1616fc29d64ad91e28d74946af9e15b9c044ae885b53cb922" Oct 08 15:54:28 crc kubenswrapper[4624]: E1008 15:54:28.570370 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5387585d28c0bd1616fc29d64ad91e28d74946af9e15b9c044ae885b53cb922\": container with ID starting with a5387585d28c0bd1616fc29d64ad91e28d74946af9e15b9c044ae885b53cb922 not found: ID does not exist" containerID="a5387585d28c0bd1616fc29d64ad91e28d74946af9e15b9c044ae885b53cb922" Oct 08 15:54:28 crc kubenswrapper[4624]: I1008 15:54:28.570390 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5387585d28c0bd1616fc29d64ad91e28d74946af9e15b9c044ae885b53cb922"} err="failed to get container status \"a5387585d28c0bd1616fc29d64ad91e28d74946af9e15b9c044ae885b53cb922\": rpc error: code = NotFound desc = could not find container \"a5387585d28c0bd1616fc29d64ad91e28d74946af9e15b9c044ae885b53cb922\": container with ID starting with a5387585d28c0bd1616fc29d64ad91e28d74946af9e15b9c044ae885b53cb922 not found: ID does not exist" Oct 08 15:54:29 crc kubenswrapper[4624]: I1008 15:54:29.476095 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be52dd92-2cda-49fd-ad64-bb5036470cfd" path="/var/lib/kubelet/pods/be52dd92-2cda-49fd-ad64-bb5036470cfd/volumes" Oct 08 15:54:30 crc kubenswrapper[4624]: I1008 15:54:30.076526 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
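The liveness failure above is a plain kubelet HTTP probe: GET http://127.0.0.1:8798/health, where "connection refused" means nothing is listening on the port, consistent with the machine-config-daemon container being down between restarts. An equivalent standalone check:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
		fmt.Println("liveness failure:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("liveness status:", resp.Status)
}
```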
127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:54:30 crc kubenswrapper[4624]: I1008 15:54:30.076593 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:54:36 crc kubenswrapper[4624]: I1008 15:54:36.028307 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pndpt" podUID="2671b836-bfa6-4461-a53a-12f321ad98c0" containerName="registry-server" probeResult="failure" output=< Oct 08 15:54:36 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 15:54:36 crc kubenswrapper[4624]: > Oct 08 15:54:45 crc kubenswrapper[4624]: I1008 15:54:45.032321 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pndpt" Oct 08 15:54:45 crc kubenswrapper[4624]: I1008 15:54:45.085093 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pndpt" Oct 08 15:54:45 crc kubenswrapper[4624]: I1008 15:54:45.860295 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pndpt"] Oct 08 15:54:46 crc kubenswrapper[4624]: I1008 15:54:46.594799 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pndpt" podUID="2671b836-bfa6-4461-a53a-12f321ad98c0" containerName="registry-server" containerID="cri-o://937840e7ae66f070fa1847f9c602164e7f3a96de1da9c960fc0d10e50e67bee9" gracePeriod=2 Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.097128 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pndpt" Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.169585 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2671b836-bfa6-4461-a53a-12f321ad98c0-catalog-content\") pod \"2671b836-bfa6-4461-a53a-12f321ad98c0\" (UID: \"2671b836-bfa6-4461-a53a-12f321ad98c0\") " Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.169738 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2671b836-bfa6-4461-a53a-12f321ad98c0-utilities\") pod \"2671b836-bfa6-4461-a53a-12f321ad98c0\" (UID: \"2671b836-bfa6-4461-a53a-12f321ad98c0\") " Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.169867 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shg2t\" (UniqueName: \"kubernetes.io/projected/2671b836-bfa6-4461-a53a-12f321ad98c0-kube-api-access-shg2t\") pod \"2671b836-bfa6-4461-a53a-12f321ad98c0\" (UID: \"2671b836-bfa6-4461-a53a-12f321ad98c0\") " Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.170724 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2671b836-bfa6-4461-a53a-12f321ad98c0-utilities" (OuterVolumeSpecName: "utilities") pod "2671b836-bfa6-4461-a53a-12f321ad98c0" (UID: "2671b836-bfa6-4461-a53a-12f321ad98c0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.175777 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2671b836-bfa6-4461-a53a-12f321ad98c0-kube-api-access-shg2t" (OuterVolumeSpecName: "kube-api-access-shg2t") pod "2671b836-bfa6-4461-a53a-12f321ad98c0" (UID: "2671b836-bfa6-4461-a53a-12f321ad98c0"). InnerVolumeSpecName "kube-api-access-shg2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.273256 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shg2t\" (UniqueName: \"kubernetes.io/projected/2671b836-bfa6-4461-a53a-12f321ad98c0-kube-api-access-shg2t\") on node \"crc\" DevicePath \"\"" Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.273291 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2671b836-bfa6-4461-a53a-12f321ad98c0-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.287132 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2671b836-bfa6-4461-a53a-12f321ad98c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2671b836-bfa6-4461-a53a-12f321ad98c0" (UID: "2671b836-bfa6-4461-a53a-12f321ad98c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.375248 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2671b836-bfa6-4461-a53a-12f321ad98c0-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.605330 4624 generic.go:334] "Generic (PLEG): container finished" podID="2671b836-bfa6-4461-a53a-12f321ad98c0" containerID="937840e7ae66f070fa1847f9c602164e7f3a96de1da9c960fc0d10e50e67bee9" exitCode=0 Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.605372 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pndpt" event={"ID":"2671b836-bfa6-4461-a53a-12f321ad98c0","Type":"ContainerDied","Data":"937840e7ae66f070fa1847f9c602164e7f3a96de1da9c960fc0d10e50e67bee9"} Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.605397 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pndpt" event={"ID":"2671b836-bfa6-4461-a53a-12f321ad98c0","Type":"ContainerDied","Data":"727a6eccbcec888bde77eed878ea74272cbc87bfa965b1e837d74a38287c9e62"} Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.605417 4624 scope.go:117] "RemoveContainer" containerID="937840e7ae66f070fa1847f9c602164e7f3a96de1da9c960fc0d10e50e67bee9" Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.605533 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pndpt" Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.633817 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pndpt"] Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.639684 4624 scope.go:117] "RemoveContainer" containerID="0ee12dbc4c474867b150c5d5bb26096d5c5c0ce18417cfbef8113d41e72ff77d" Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.641196 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pndpt"] Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.665080 4624 scope.go:117] "RemoveContainer" containerID="bc0cea7495bd1d2318a38a8156f6ff04df1cc4c6ca5f966abc2493e641c42db3" Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.702505 4624 scope.go:117] "RemoveContainer" containerID="937840e7ae66f070fa1847f9c602164e7f3a96de1da9c960fc0d10e50e67bee9" Oct 08 15:54:47 crc kubenswrapper[4624]: E1008 15:54:47.703106 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"937840e7ae66f070fa1847f9c602164e7f3a96de1da9c960fc0d10e50e67bee9\": container with ID starting with 937840e7ae66f070fa1847f9c602164e7f3a96de1da9c960fc0d10e50e67bee9 not found: ID does not exist" containerID="937840e7ae66f070fa1847f9c602164e7f3a96de1da9c960fc0d10e50e67bee9" Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.703142 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"937840e7ae66f070fa1847f9c602164e7f3a96de1da9c960fc0d10e50e67bee9"} err="failed to get container status \"937840e7ae66f070fa1847f9c602164e7f3a96de1da9c960fc0d10e50e67bee9\": rpc error: code = NotFound desc = could not find container \"937840e7ae66f070fa1847f9c602164e7f3a96de1da9c960fc0d10e50e67bee9\": container with ID starting with 937840e7ae66f070fa1847f9c602164e7f3a96de1da9c960fc0d10e50e67bee9 not found: ID does not exist" Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.703167 4624 scope.go:117] "RemoveContainer" containerID="0ee12dbc4c474867b150c5d5bb26096d5c5c0ce18417cfbef8113d41e72ff77d" Oct 08 15:54:47 crc kubenswrapper[4624]: E1008 15:54:47.703494 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ee12dbc4c474867b150c5d5bb26096d5c5c0ce18417cfbef8113d41e72ff77d\": container with ID starting with 0ee12dbc4c474867b150c5d5bb26096d5c5c0ce18417cfbef8113d41e72ff77d not found: ID does not exist" containerID="0ee12dbc4c474867b150c5d5bb26096d5c5c0ce18417cfbef8113d41e72ff77d" Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.703534 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ee12dbc4c474867b150c5d5bb26096d5c5c0ce18417cfbef8113d41e72ff77d"} err="failed to get container status \"0ee12dbc4c474867b150c5d5bb26096d5c5c0ce18417cfbef8113d41e72ff77d\": rpc error: code = NotFound desc = could not find container \"0ee12dbc4c474867b150c5d5bb26096d5c5c0ce18417cfbef8113d41e72ff77d\": container with ID starting with 0ee12dbc4c474867b150c5d5bb26096d5c5c0ce18417cfbef8113d41e72ff77d not found: ID does not exist" Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.703551 4624 scope.go:117] "RemoveContainer" containerID="bc0cea7495bd1d2318a38a8156f6ff04df1cc4c6ca5f966abc2493e641c42db3" Oct 08 15:54:47 crc kubenswrapper[4624]: E1008 15:54:47.703894 4624 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"bc0cea7495bd1d2318a38a8156f6ff04df1cc4c6ca5f966abc2493e641c42db3\": container with ID starting with bc0cea7495bd1d2318a38a8156f6ff04df1cc4c6ca5f966abc2493e641c42db3 not found: ID does not exist" containerID="bc0cea7495bd1d2318a38a8156f6ff04df1cc4c6ca5f966abc2493e641c42db3" Oct 08 15:54:47 crc kubenswrapper[4624]: I1008 15:54:47.703914 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc0cea7495bd1d2318a38a8156f6ff04df1cc4c6ca5f966abc2493e641c42db3"} err="failed to get container status \"bc0cea7495bd1d2318a38a8156f6ff04df1cc4c6ca5f966abc2493e641c42db3\": rpc error: code = NotFound desc = could not find container \"bc0cea7495bd1d2318a38a8156f6ff04df1cc4c6ca5f966abc2493e641c42db3\": container with ID starting with bc0cea7495bd1d2318a38a8156f6ff04df1cc4c6ca5f966abc2493e641c42db3 not found: ID does not exist" Oct 08 15:54:49 crc kubenswrapper[4624]: I1008 15:54:49.476850 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2671b836-bfa6-4461-a53a-12f321ad98c0" path="/var/lib/kubelet/pods/2671b836-bfa6-4461-a53a-12f321ad98c0/volumes" Oct 08 15:55:00 crc kubenswrapper[4624]: I1008 15:55:00.076268 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:55:00 crc kubenswrapper[4624]: I1008 15:55:00.076887 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:55:30 crc kubenswrapper[4624]: I1008 15:55:30.076756 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:55:30 crc kubenswrapper[4624]: I1008 15:55:30.077302 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:55:30 crc kubenswrapper[4624]: I1008 15:55:30.077343 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 15:55:30 crc kubenswrapper[4624]: I1008 15:55:30.078182 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7eb266fb63276346e90c3d262062f36a0e1fc23f0d47a193b62ecac589dab28d"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 15:55:30 crc kubenswrapper[4624]: I1008 15:55:30.078227 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" 
podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://7eb266fb63276346e90c3d262062f36a0e1fc23f0d47a193b62ecac589dab28d" gracePeriod=600 Oct 08 15:55:30 crc kubenswrapper[4624]: I1008 15:55:30.976608 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="7eb266fb63276346e90c3d262062f36a0e1fc23f0d47a193b62ecac589dab28d" exitCode=0 Oct 08 15:55:30 crc kubenswrapper[4624]: I1008 15:55:30.977155 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"7eb266fb63276346e90c3d262062f36a0e1fc23f0d47a193b62ecac589dab28d"} Oct 08 15:55:30 crc kubenswrapper[4624]: I1008 15:55:30.977185 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8"} Oct 08 15:55:30 crc kubenswrapper[4624]: I1008 15:55:30.977202 4624 scope.go:117] "RemoveContainer" containerID="187befae1c4a01778a514d33e92616a5700a42aa66297997e0519c1207cabc3f" Oct 08 15:57:30 crc kubenswrapper[4624]: I1008 15:57:30.076621 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:57:30 crc kubenswrapper[4624]: I1008 15:57:30.077145 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:58:00 crc kubenswrapper[4624]: I1008 15:58:00.076447 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:58:00 crc kubenswrapper[4624]: I1008 15:58:00.077961 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:58:30 crc kubenswrapper[4624]: I1008 15:58:30.076650 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 15:58:30 crc kubenswrapper[4624]: I1008 15:58:30.077234 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 15:58:30 
crc kubenswrapper[4624]: I1008 15:58:30.077281 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 15:58:30 crc kubenswrapper[4624]: I1008 15:58:30.078106 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 15:58:30 crc kubenswrapper[4624]: I1008 15:58:30.078173 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" gracePeriod=600 Oct 08 15:58:30 crc kubenswrapper[4624]: E1008 15:58:30.237351 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:58:30 crc kubenswrapper[4624]: I1008 15:58:30.605629 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" exitCode=0 Oct 08 15:58:30 crc kubenswrapper[4624]: I1008 15:58:30.605668 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8"} Oct 08 15:58:30 crc kubenswrapper[4624]: I1008 15:58:30.605726 4624 scope.go:117] "RemoveContainer" containerID="7eb266fb63276346e90c3d262062f36a0e1fc23f0d47a193b62ecac589dab28d" Oct 08 15:58:30 crc kubenswrapper[4624]: I1008 15:58:30.607953 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 15:58:30 crc kubenswrapper[4624]: E1008 15:58:30.608487 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:58:42 crc kubenswrapper[4624]: I1008 15:58:42.561611 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xnj5k"] Oct 08 15:58:42 crc kubenswrapper[4624]: E1008 15:58:42.563808 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be52dd92-2cda-49fd-ad64-bb5036470cfd" containerName="extract-content" Oct 08 15:58:42 crc kubenswrapper[4624]: I1008 15:58:42.563916 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="be52dd92-2cda-49fd-ad64-bb5036470cfd" containerName="extract-content" Oct 08 15:58:42 crc 
kubenswrapper[4624]: E1008 15:58:42.564050 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be52dd92-2cda-49fd-ad64-bb5036470cfd" containerName="registry-server" Oct 08 15:58:42 crc kubenswrapper[4624]: I1008 15:58:42.564130 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="be52dd92-2cda-49fd-ad64-bb5036470cfd" containerName="registry-server" Oct 08 15:58:42 crc kubenswrapper[4624]: E1008 15:58:42.564211 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2671b836-bfa6-4461-a53a-12f321ad98c0" containerName="registry-server" Oct 08 15:58:42 crc kubenswrapper[4624]: I1008 15:58:42.564284 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="2671b836-bfa6-4461-a53a-12f321ad98c0" containerName="registry-server" Oct 08 15:58:42 crc kubenswrapper[4624]: E1008 15:58:42.564356 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2671b836-bfa6-4461-a53a-12f321ad98c0" containerName="extract-utilities" Oct 08 15:58:42 crc kubenswrapper[4624]: I1008 15:58:42.564433 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="2671b836-bfa6-4461-a53a-12f321ad98c0" containerName="extract-utilities" Oct 08 15:58:42 crc kubenswrapper[4624]: E1008 15:58:42.564533 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be52dd92-2cda-49fd-ad64-bb5036470cfd" containerName="extract-utilities" Oct 08 15:58:42 crc kubenswrapper[4624]: I1008 15:58:42.564611 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="be52dd92-2cda-49fd-ad64-bb5036470cfd" containerName="extract-utilities" Oct 08 15:58:42 crc kubenswrapper[4624]: E1008 15:58:42.564724 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2671b836-bfa6-4461-a53a-12f321ad98c0" containerName="extract-content" Oct 08 15:58:42 crc kubenswrapper[4624]: I1008 15:58:42.564821 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="2671b836-bfa6-4461-a53a-12f321ad98c0" containerName="extract-content" Oct 08 15:58:42 crc kubenswrapper[4624]: I1008 15:58:42.565178 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="2671b836-bfa6-4461-a53a-12f321ad98c0" containerName="registry-server" Oct 08 15:58:42 crc kubenswrapper[4624]: I1008 15:58:42.565281 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="be52dd92-2cda-49fd-ad64-bb5036470cfd" containerName="registry-server" Oct 08 15:58:42 crc kubenswrapper[4624]: I1008 15:58:42.567162 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnj5k" Oct 08 15:58:42 crc kubenswrapper[4624]: I1008 15:58:42.570328 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnj5k"] Oct 08 15:58:42 crc kubenswrapper[4624]: I1008 15:58:42.675340 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xpt9\" (UniqueName: \"kubernetes.io/projected/bbedc795-916b-4618-a42e-50a4fd7e2ff8-kube-api-access-9xpt9\") pod \"redhat-marketplace-xnj5k\" (UID: \"bbedc795-916b-4618-a42e-50a4fd7e2ff8\") " pod="openshift-marketplace/redhat-marketplace-xnj5k" Oct 08 15:58:42 crc kubenswrapper[4624]: I1008 15:58:42.675795 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbedc795-916b-4618-a42e-50a4fd7e2ff8-utilities\") pod \"redhat-marketplace-xnj5k\" (UID: \"bbedc795-916b-4618-a42e-50a4fd7e2ff8\") " pod="openshift-marketplace/redhat-marketplace-xnj5k" Oct 08 15:58:42 crc kubenswrapper[4624]: I1008 15:58:42.675858 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbedc795-916b-4618-a42e-50a4fd7e2ff8-catalog-content\") pod \"redhat-marketplace-xnj5k\" (UID: \"bbedc795-916b-4618-a42e-50a4fd7e2ff8\") " pod="openshift-marketplace/redhat-marketplace-xnj5k" Oct 08 15:58:42 crc kubenswrapper[4624]: I1008 15:58:42.778540 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbedc795-916b-4618-a42e-50a4fd7e2ff8-utilities\") pod \"redhat-marketplace-xnj5k\" (UID: \"bbedc795-916b-4618-a42e-50a4fd7e2ff8\") " pod="openshift-marketplace/redhat-marketplace-xnj5k" Oct 08 15:58:42 crc kubenswrapper[4624]: I1008 15:58:42.778626 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbedc795-916b-4618-a42e-50a4fd7e2ff8-catalog-content\") pod \"redhat-marketplace-xnj5k\" (UID: \"bbedc795-916b-4618-a42e-50a4fd7e2ff8\") " pod="openshift-marketplace/redhat-marketplace-xnj5k" Oct 08 15:58:42 crc kubenswrapper[4624]: I1008 15:58:42.778891 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xpt9\" (UniqueName: \"kubernetes.io/projected/bbedc795-916b-4618-a42e-50a4fd7e2ff8-kube-api-access-9xpt9\") pod \"redhat-marketplace-xnj5k\" (UID: \"bbedc795-916b-4618-a42e-50a4fd7e2ff8\") " pod="openshift-marketplace/redhat-marketplace-xnj5k" Oct 08 15:58:42 crc kubenswrapper[4624]: I1008 15:58:42.779095 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbedc795-916b-4618-a42e-50a4fd7e2ff8-utilities\") pod \"redhat-marketplace-xnj5k\" (UID: \"bbedc795-916b-4618-a42e-50a4fd7e2ff8\") " pod="openshift-marketplace/redhat-marketplace-xnj5k" Oct 08 15:58:42 crc kubenswrapper[4624]: I1008 15:58:42.779515 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbedc795-916b-4618-a42e-50a4fd7e2ff8-catalog-content\") pod \"redhat-marketplace-xnj5k\" (UID: \"bbedc795-916b-4618-a42e-50a4fd7e2ff8\") " pod="openshift-marketplace/redhat-marketplace-xnj5k" Oct 08 15:58:42 crc kubenswrapper[4624]: I1008 15:58:42.815927 4624 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9xpt9\" (UniqueName: \"kubernetes.io/projected/bbedc795-916b-4618-a42e-50a4fd7e2ff8-kube-api-access-9xpt9\") pod \"redhat-marketplace-xnj5k\" (UID: \"bbedc795-916b-4618-a42e-50a4fd7e2ff8\") " pod="openshift-marketplace/redhat-marketplace-xnj5k" Oct 08 15:58:42 crc kubenswrapper[4624]: I1008 15:58:42.906376 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnj5k" Oct 08 15:58:43 crc kubenswrapper[4624]: I1008 15:58:43.435293 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnj5k"] Oct 08 15:58:43 crc kubenswrapper[4624]: I1008 15:58:43.734668 4624 generic.go:334] "Generic (PLEG): container finished" podID="bbedc795-916b-4618-a42e-50a4fd7e2ff8" containerID="143d9b43706abbc04ce39d652ef207952fd895cf61d06b93e8710e6b6ddb0ed0" exitCode=0 Oct 08 15:58:43 crc kubenswrapper[4624]: I1008 15:58:43.734721 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnj5k" event={"ID":"bbedc795-916b-4618-a42e-50a4fd7e2ff8","Type":"ContainerDied","Data":"143d9b43706abbc04ce39d652ef207952fd895cf61d06b93e8710e6b6ddb0ed0"} Oct 08 15:58:43 crc kubenswrapper[4624]: I1008 15:58:43.734753 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnj5k" event={"ID":"bbedc795-916b-4618-a42e-50a4fd7e2ff8","Type":"ContainerStarted","Data":"31292068bda4047203c3f18d710b80bdd2ffc625f030ecb296bea61722ccaf63"} Oct 08 15:58:43 crc kubenswrapper[4624]: I1008 15:58:43.738294 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 15:58:44 crc kubenswrapper[4624]: I1008 15:58:44.467786 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 15:58:44 crc kubenswrapper[4624]: E1008 15:58:44.468458 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:58:45 crc kubenswrapper[4624]: I1008 15:58:45.760073 4624 generic.go:334] "Generic (PLEG): container finished" podID="bbedc795-916b-4618-a42e-50a4fd7e2ff8" containerID="fff1d1bb6691faed19c4aad061fb068b372aacbe961678a004dda5545bc09185" exitCode=0 Oct 08 15:58:45 crc kubenswrapper[4624]: I1008 15:58:45.760192 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnj5k" event={"ID":"bbedc795-916b-4618-a42e-50a4fd7e2ff8","Type":"ContainerDied","Data":"fff1d1bb6691faed19c4aad061fb068b372aacbe961678a004dda5545bc09185"} Oct 08 15:58:46 crc kubenswrapper[4624]: I1008 15:58:46.771337 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnj5k" event={"ID":"bbedc795-916b-4618-a42e-50a4fd7e2ff8","Type":"ContainerStarted","Data":"f96d97a30d6fd1467c35ed3b25f4b87801cac4526ae9405ab54e21f2e6364df8"} Oct 08 15:58:46 crc kubenswrapper[4624]: I1008 15:58:46.794356 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xnj5k" podStartSLOduration=2.049844521 podStartE2EDuration="4.794335863s" 
podCreationTimestamp="2025-10-08 15:58:42 +0000 UTC" firstStartedPulling="2025-10-08 15:58:43.738003162 +0000 UTC m=+5748.888938239" lastFinishedPulling="2025-10-08 15:58:46.482494504 +0000 UTC m=+5751.633429581" observedRunningTime="2025-10-08 15:58:46.791202332 +0000 UTC m=+5751.942137419" watchObservedRunningTime="2025-10-08 15:58:46.794335863 +0000 UTC m=+5751.945270940" Oct 08 15:58:52 crc kubenswrapper[4624]: I1008 15:58:52.907308 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xnj5k" Oct 08 15:58:52 crc kubenswrapper[4624]: I1008 15:58:52.907833 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xnj5k" Oct 08 15:58:52 crc kubenswrapper[4624]: I1008 15:58:52.956881 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xnj5k" Oct 08 15:58:53 crc kubenswrapper[4624]: I1008 15:58:53.887807 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xnj5k" Oct 08 15:58:53 crc kubenswrapper[4624]: I1008 15:58:53.947808 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnj5k"] Oct 08 15:58:55 crc kubenswrapper[4624]: I1008 15:58:55.854235 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xnj5k" podUID="bbedc795-916b-4618-a42e-50a4fd7e2ff8" containerName="registry-server" containerID="cri-o://f96d97a30d6fd1467c35ed3b25f4b87801cac4526ae9405ab54e21f2e6364df8" gracePeriod=2 Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.361805 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnj5k" Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.478620 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbedc795-916b-4618-a42e-50a4fd7e2ff8-utilities\") pod \"bbedc795-916b-4618-a42e-50a4fd7e2ff8\" (UID: \"bbedc795-916b-4618-a42e-50a4fd7e2ff8\") " Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.478739 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbedc795-916b-4618-a42e-50a4fd7e2ff8-catalog-content\") pod \"bbedc795-916b-4618-a42e-50a4fd7e2ff8\" (UID: \"bbedc795-916b-4618-a42e-50a4fd7e2ff8\") " Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.478802 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xpt9\" (UniqueName: \"kubernetes.io/projected/bbedc795-916b-4618-a42e-50a4fd7e2ff8-kube-api-access-9xpt9\") pod \"bbedc795-916b-4618-a42e-50a4fd7e2ff8\" (UID: \"bbedc795-916b-4618-a42e-50a4fd7e2ff8\") " Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.479733 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbedc795-916b-4618-a42e-50a4fd7e2ff8-utilities" (OuterVolumeSpecName: "utilities") pod "bbedc795-916b-4618-a42e-50a4fd7e2ff8" (UID: "bbedc795-916b-4618-a42e-50a4fd7e2ff8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.488937 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbedc795-916b-4618-a42e-50a4fd7e2ff8-kube-api-access-9xpt9" (OuterVolumeSpecName: "kube-api-access-9xpt9") pod "bbedc795-916b-4618-a42e-50a4fd7e2ff8" (UID: "bbedc795-916b-4618-a42e-50a4fd7e2ff8"). InnerVolumeSpecName "kube-api-access-9xpt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.494887 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbedc795-916b-4618-a42e-50a4fd7e2ff8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbedc795-916b-4618-a42e-50a4fd7e2ff8" (UID: "bbedc795-916b-4618-a42e-50a4fd7e2ff8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.581977 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbedc795-916b-4618-a42e-50a4fd7e2ff8-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.582011 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbedc795-916b-4618-a42e-50a4fd7e2ff8-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.582024 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xpt9\" (UniqueName: \"kubernetes.io/projected/bbedc795-916b-4618-a42e-50a4fd7e2ff8-kube-api-access-9xpt9\") on node \"crc\" DevicePath \"\"" Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.865520 4624 generic.go:334] "Generic (PLEG): container finished" podID="bbedc795-916b-4618-a42e-50a4fd7e2ff8" containerID="f96d97a30d6fd1467c35ed3b25f4b87801cac4526ae9405ab54e21f2e6364df8" exitCode=0 Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.865567 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnj5k" event={"ID":"bbedc795-916b-4618-a42e-50a4fd7e2ff8","Type":"ContainerDied","Data":"f96d97a30d6fd1467c35ed3b25f4b87801cac4526ae9405ab54e21f2e6364df8"} Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.865606 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnj5k" event={"ID":"bbedc795-916b-4618-a42e-50a4fd7e2ff8","Type":"ContainerDied","Data":"31292068bda4047203c3f18d710b80bdd2ffc625f030ecb296bea61722ccaf63"} Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.865643 4624 scope.go:117] "RemoveContainer" containerID="f96d97a30d6fd1467c35ed3b25f4b87801cac4526ae9405ab54e21f2e6364df8" Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.865654 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnj5k" Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.898237 4624 scope.go:117] "RemoveContainer" containerID="fff1d1bb6691faed19c4aad061fb068b372aacbe961678a004dda5545bc09185" Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.903220 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnj5k"] Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.922364 4624 scope.go:117] "RemoveContainer" containerID="143d9b43706abbc04ce39d652ef207952fd895cf61d06b93e8710e6b6ddb0ed0" Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.927553 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnj5k"] Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.965397 4624 scope.go:117] "RemoveContainer" containerID="f96d97a30d6fd1467c35ed3b25f4b87801cac4526ae9405ab54e21f2e6364df8" Oct 08 15:58:56 crc kubenswrapper[4624]: E1008 15:58:56.965939 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f96d97a30d6fd1467c35ed3b25f4b87801cac4526ae9405ab54e21f2e6364df8\": container with ID starting with f96d97a30d6fd1467c35ed3b25f4b87801cac4526ae9405ab54e21f2e6364df8 not found: ID does not exist" containerID="f96d97a30d6fd1467c35ed3b25f4b87801cac4526ae9405ab54e21f2e6364df8" Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.965978 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f96d97a30d6fd1467c35ed3b25f4b87801cac4526ae9405ab54e21f2e6364df8"} err="failed to get container status \"f96d97a30d6fd1467c35ed3b25f4b87801cac4526ae9405ab54e21f2e6364df8\": rpc error: code = NotFound desc = could not find container \"f96d97a30d6fd1467c35ed3b25f4b87801cac4526ae9405ab54e21f2e6364df8\": container with ID starting with f96d97a30d6fd1467c35ed3b25f4b87801cac4526ae9405ab54e21f2e6364df8 not found: ID does not exist" Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.966004 4624 scope.go:117] "RemoveContainer" containerID="fff1d1bb6691faed19c4aad061fb068b372aacbe961678a004dda5545bc09185" Oct 08 15:58:56 crc kubenswrapper[4624]: E1008 15:58:56.966409 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fff1d1bb6691faed19c4aad061fb068b372aacbe961678a004dda5545bc09185\": container with ID starting with fff1d1bb6691faed19c4aad061fb068b372aacbe961678a004dda5545bc09185 not found: ID does not exist" containerID="fff1d1bb6691faed19c4aad061fb068b372aacbe961678a004dda5545bc09185" Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.966503 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fff1d1bb6691faed19c4aad061fb068b372aacbe961678a004dda5545bc09185"} err="failed to get container status \"fff1d1bb6691faed19c4aad061fb068b372aacbe961678a004dda5545bc09185\": rpc error: code = NotFound desc = could not find container \"fff1d1bb6691faed19c4aad061fb068b372aacbe961678a004dda5545bc09185\": container with ID starting with fff1d1bb6691faed19c4aad061fb068b372aacbe961678a004dda5545bc09185 not found: ID does not exist" Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.966574 4624 scope.go:117] "RemoveContainer" containerID="143d9b43706abbc04ce39d652ef207952fd895cf61d06b93e8710e6b6ddb0ed0" Oct 08 15:58:56 crc kubenswrapper[4624]: E1008 15:58:56.966940 4624 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"143d9b43706abbc04ce39d652ef207952fd895cf61d06b93e8710e6b6ddb0ed0\": container with ID starting with 143d9b43706abbc04ce39d652ef207952fd895cf61d06b93e8710e6b6ddb0ed0 not found: ID does not exist" containerID="143d9b43706abbc04ce39d652ef207952fd895cf61d06b93e8710e6b6ddb0ed0" Oct 08 15:58:56 crc kubenswrapper[4624]: I1008 15:58:56.966972 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143d9b43706abbc04ce39d652ef207952fd895cf61d06b93e8710e6b6ddb0ed0"} err="failed to get container status \"143d9b43706abbc04ce39d652ef207952fd895cf61d06b93e8710e6b6ddb0ed0\": rpc error: code = NotFound desc = could not find container \"143d9b43706abbc04ce39d652ef207952fd895cf61d06b93e8710e6b6ddb0ed0\": container with ID starting with 143d9b43706abbc04ce39d652ef207952fd895cf61d06b93e8710e6b6ddb0ed0 not found: ID does not exist" Oct 08 15:58:57 crc kubenswrapper[4624]: I1008 15:58:57.465787 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 15:58:57 crc kubenswrapper[4624]: E1008 15:58:57.466163 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:58:57 crc kubenswrapper[4624]: I1008 15:58:57.477264 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbedc795-916b-4618-a42e-50a4fd7e2ff8" path="/var/lib/kubelet/pods/bbedc795-916b-4618-a42e-50a4fd7e2ff8/volumes" Oct 08 15:59:10 crc kubenswrapper[4624]: I1008 15:59:10.466339 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 15:59:10 crc kubenswrapper[4624]: E1008 15:59:10.467142 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:59:25 crc kubenswrapper[4624]: I1008 15:59:25.475538 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 15:59:25 crc kubenswrapper[4624]: E1008 15:59:25.476587 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:59:40 crc kubenswrapper[4624]: I1008 15:59:40.465653 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 15:59:40 crc kubenswrapper[4624]: E1008 15:59:40.466419 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 15:59:54 crc kubenswrapper[4624]: I1008 15:59:54.465881 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 15:59:54 crc kubenswrapper[4624]: E1008 15:59:54.466593 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:00:00 crc kubenswrapper[4624]: I1008 16:00:00.151814 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h"] Oct 08 16:00:00 crc kubenswrapper[4624]: E1008 16:00:00.153428 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbedc795-916b-4618-a42e-50a4fd7e2ff8" containerName="extract-content" Oct 08 16:00:00 crc kubenswrapper[4624]: I1008 16:00:00.153450 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbedc795-916b-4618-a42e-50a4fd7e2ff8" containerName="extract-content" Oct 08 16:00:00 crc kubenswrapper[4624]: E1008 16:00:00.153465 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbedc795-916b-4618-a42e-50a4fd7e2ff8" containerName="extract-utilities" Oct 08 16:00:00 crc kubenswrapper[4624]: I1008 16:00:00.153472 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbedc795-916b-4618-a42e-50a4fd7e2ff8" containerName="extract-utilities" Oct 08 16:00:00 crc kubenswrapper[4624]: E1008 16:00:00.153495 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbedc795-916b-4618-a42e-50a4fd7e2ff8" containerName="registry-server" Oct 08 16:00:00 crc kubenswrapper[4624]: I1008 16:00:00.153501 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbedc795-916b-4618-a42e-50a4fd7e2ff8" containerName="registry-server" Oct 08 16:00:00 crc kubenswrapper[4624]: I1008 16:00:00.153794 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbedc795-916b-4618-a42e-50a4fd7e2ff8" containerName="registry-server" Oct 08 16:00:00 crc kubenswrapper[4624]: I1008 16:00:00.154924 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h" Oct 08 16:00:00 crc kubenswrapper[4624]: I1008 16:00:00.158405 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 16:00:00 crc kubenswrapper[4624]: I1008 16:00:00.158486 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 16:00:00 crc kubenswrapper[4624]: I1008 16:00:00.163800 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h"] Oct 08 16:00:00 crc kubenswrapper[4624]: I1008 16:00:00.319583 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5295ef73-9c6e-416b-9961-90699752fad3-secret-volume\") pod \"collect-profiles-29332320-zq75h\" (UID: \"5295ef73-9c6e-416b-9961-90699752fad3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h" Oct 08 16:00:00 crc kubenswrapper[4624]: I1008 16:00:00.319778 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5t72\" (UniqueName: \"kubernetes.io/projected/5295ef73-9c6e-416b-9961-90699752fad3-kube-api-access-z5t72\") pod \"collect-profiles-29332320-zq75h\" (UID: \"5295ef73-9c6e-416b-9961-90699752fad3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h" Oct 08 16:00:00 crc kubenswrapper[4624]: I1008 16:00:00.319825 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5295ef73-9c6e-416b-9961-90699752fad3-config-volume\") pod \"collect-profiles-29332320-zq75h\" (UID: \"5295ef73-9c6e-416b-9961-90699752fad3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h" Oct 08 16:00:00 crc kubenswrapper[4624]: I1008 16:00:00.421423 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5t72\" (UniqueName: \"kubernetes.io/projected/5295ef73-9c6e-416b-9961-90699752fad3-kube-api-access-z5t72\") pod \"collect-profiles-29332320-zq75h\" (UID: \"5295ef73-9c6e-416b-9961-90699752fad3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h" Oct 08 16:00:00 crc kubenswrapper[4624]: I1008 16:00:00.421505 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5295ef73-9c6e-416b-9961-90699752fad3-config-volume\") pod \"collect-profiles-29332320-zq75h\" (UID: \"5295ef73-9c6e-416b-9961-90699752fad3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h" Oct 08 16:00:00 crc kubenswrapper[4624]: I1008 16:00:00.421611 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5295ef73-9c6e-416b-9961-90699752fad3-secret-volume\") pod \"collect-profiles-29332320-zq75h\" (UID: \"5295ef73-9c6e-416b-9961-90699752fad3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h" Oct 08 16:00:00 crc kubenswrapper[4624]: I1008 16:00:00.424229 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5295ef73-9c6e-416b-9961-90699752fad3-config-volume\") pod 
\"collect-profiles-29332320-zq75h\" (UID: \"5295ef73-9c6e-416b-9961-90699752fad3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h" Oct 08 16:00:00 crc kubenswrapper[4624]: I1008 16:00:00.431793 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5295ef73-9c6e-416b-9961-90699752fad3-secret-volume\") pod \"collect-profiles-29332320-zq75h\" (UID: \"5295ef73-9c6e-416b-9961-90699752fad3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h" Oct 08 16:00:00 crc kubenswrapper[4624]: I1008 16:00:00.445919 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5t72\" (UniqueName: \"kubernetes.io/projected/5295ef73-9c6e-416b-9961-90699752fad3-kube-api-access-z5t72\") pod \"collect-profiles-29332320-zq75h\" (UID: \"5295ef73-9c6e-416b-9961-90699752fad3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h" Oct 08 16:00:00 crc kubenswrapper[4624]: I1008 16:00:00.493713 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h" Oct 08 16:00:00 crc kubenswrapper[4624]: I1008 16:00:00.965130 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h"] Oct 08 16:00:01 crc kubenswrapper[4624]: I1008 16:00:01.426306 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h" event={"ID":"5295ef73-9c6e-416b-9961-90699752fad3","Type":"ContainerStarted","Data":"760fe869d9feea8d1fb3a232fe6b2ba218567d8c22d7b13237826224970a04f2"} Oct 08 16:00:01 crc kubenswrapper[4624]: I1008 16:00:01.426356 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h" event={"ID":"5295ef73-9c6e-416b-9961-90699752fad3","Type":"ContainerStarted","Data":"8d32f6dc6202027ee4c19f096d77351e3a178530bece08485faaf29fe660a156"} Oct 08 16:00:01 crc kubenswrapper[4624]: I1008 16:00:01.461297 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h" podStartSLOduration=1.461273277 podStartE2EDuration="1.461273277s" podCreationTimestamp="2025-10-08 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 16:00:01.454536233 +0000 UTC m=+5826.605471310" watchObservedRunningTime="2025-10-08 16:00:01.461273277 +0000 UTC m=+5826.612208374" Oct 08 16:00:02 crc kubenswrapper[4624]: I1008 16:00:02.436244 4624 generic.go:334] "Generic (PLEG): container finished" podID="5295ef73-9c6e-416b-9961-90699752fad3" containerID="760fe869d9feea8d1fb3a232fe6b2ba218567d8c22d7b13237826224970a04f2" exitCode=0 Oct 08 16:00:02 crc kubenswrapper[4624]: I1008 16:00:02.436296 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h" event={"ID":"5295ef73-9c6e-416b-9961-90699752fad3","Type":"ContainerDied","Data":"760fe869d9feea8d1fb3a232fe6b2ba218567d8c22d7b13237826224970a04f2"} Oct 08 16:00:03 crc kubenswrapper[4624]: I1008 16:00:03.861232 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h" Oct 08 16:00:03 crc kubenswrapper[4624]: I1008 16:00:03.998009 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5t72\" (UniqueName: \"kubernetes.io/projected/5295ef73-9c6e-416b-9961-90699752fad3-kube-api-access-z5t72\") pod \"5295ef73-9c6e-416b-9961-90699752fad3\" (UID: \"5295ef73-9c6e-416b-9961-90699752fad3\") " Oct 08 16:00:03 crc kubenswrapper[4624]: I1008 16:00:03.998295 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5295ef73-9c6e-416b-9961-90699752fad3-secret-volume\") pod \"5295ef73-9c6e-416b-9961-90699752fad3\" (UID: \"5295ef73-9c6e-416b-9961-90699752fad3\") " Oct 08 16:00:03 crc kubenswrapper[4624]: I1008 16:00:03.998606 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5295ef73-9c6e-416b-9961-90699752fad3-config-volume\") pod \"5295ef73-9c6e-416b-9961-90699752fad3\" (UID: \"5295ef73-9c6e-416b-9961-90699752fad3\") " Oct 08 16:00:04 crc kubenswrapper[4624]: I1008 16:00:04.000370 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5295ef73-9c6e-416b-9961-90699752fad3-config-volume" (OuterVolumeSpecName: "config-volume") pod "5295ef73-9c6e-416b-9961-90699752fad3" (UID: "5295ef73-9c6e-416b-9961-90699752fad3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 16:00:04 crc kubenswrapper[4624]: I1008 16:00:04.008000 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5295ef73-9c6e-416b-9961-90699752fad3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5295ef73-9c6e-416b-9961-90699752fad3" (UID: "5295ef73-9c6e-416b-9961-90699752fad3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 16:00:04 crc kubenswrapper[4624]: I1008 16:00:04.008068 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5295ef73-9c6e-416b-9961-90699752fad3-kube-api-access-z5t72" (OuterVolumeSpecName: "kube-api-access-z5t72") pod "5295ef73-9c6e-416b-9961-90699752fad3" (UID: "5295ef73-9c6e-416b-9961-90699752fad3"). InnerVolumeSpecName "kube-api-access-z5t72". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 16:00:04 crc kubenswrapper[4624]: I1008 16:00:04.102102 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5t72\" (UniqueName: \"kubernetes.io/projected/5295ef73-9c6e-416b-9961-90699752fad3-kube-api-access-z5t72\") on node \"crc\" DevicePath \"\"" Oct 08 16:00:04 crc kubenswrapper[4624]: I1008 16:00:04.102146 4624 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5295ef73-9c6e-416b-9961-90699752fad3-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 16:00:04 crc kubenswrapper[4624]: I1008 16:00:04.102161 4624 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5295ef73-9c6e-416b-9961-90699752fad3-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 16:00:04 crc kubenswrapper[4624]: I1008 16:00:04.459840 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h" event={"ID":"5295ef73-9c6e-416b-9961-90699752fad3","Type":"ContainerDied","Data":"8d32f6dc6202027ee4c19f096d77351e3a178530bece08485faaf29fe660a156"} Oct 08 16:00:04 crc kubenswrapper[4624]: I1008 16:00:04.459895 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d32f6dc6202027ee4c19f096d77351e3a178530bece08485faaf29fe660a156" Oct 08 16:00:04 crc kubenswrapper[4624]: I1008 16:00:04.459940 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h" Oct 08 16:00:04 crc kubenswrapper[4624]: I1008 16:00:04.954507 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332275-ckwsc"] Oct 08 16:00:04 crc kubenswrapper[4624]: I1008 16:00:04.962186 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332275-ckwsc"] Oct 08 16:00:05 crc kubenswrapper[4624]: I1008 16:00:05.493522 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="413d973e-42e8-4d9f-b5ca-1d093047abfa" path="/var/lib/kubelet/pods/413d973e-42e8-4d9f-b5ca-1d093047abfa/volumes" Oct 08 16:00:08 crc kubenswrapper[4624]: I1008 16:00:08.466867 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 16:00:08 crc kubenswrapper[4624]: E1008 16:00:08.467707 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:00:23 crc kubenswrapper[4624]: I1008 16:00:23.466245 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 16:00:23 crc kubenswrapper[4624]: E1008 16:00:23.467060 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:00:37 crc kubenswrapper[4624]: I1008 16:00:37.467142 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 16:00:37 crc kubenswrapper[4624]: E1008 16:00:37.467929 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:00:45 crc kubenswrapper[4624]: I1008 16:00:45.302413 4624 scope.go:117] "RemoveContainer" containerID="8f80139a941747d8b436ea10cc9892d5165eeae29cfed193df1e8d7686699546" Oct 08 16:00:49 crc kubenswrapper[4624]: I1008 16:00:49.465527 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 16:00:49 crc kubenswrapper[4624]: E1008 16:00:49.466468 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:01:00 crc kubenswrapper[4624]: I1008 16:01:00.174335 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29332321-7k55z"] Oct 08 16:01:00 crc kubenswrapper[4624]: E1008 16:01:00.175353 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5295ef73-9c6e-416b-9961-90699752fad3" containerName="collect-profiles" Oct 08 16:01:00 crc kubenswrapper[4624]: I1008 16:01:00.175373 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="5295ef73-9c6e-416b-9961-90699752fad3" containerName="collect-profiles" Oct 08 16:01:00 crc kubenswrapper[4624]: I1008 16:01:00.175610 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="5295ef73-9c6e-416b-9961-90699752fad3" containerName="collect-profiles" Oct 08 16:01:00 crc kubenswrapper[4624]: I1008 16:01:00.176283 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29332321-7k55z" Oct 08 16:01:00 crc kubenswrapper[4624]: I1008 16:01:00.202332 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29332321-7k55z"] Oct 08 16:01:00 crc kubenswrapper[4624]: I1008 16:01:00.229331 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826b0aa9-5c21-4e34-ac07-66eb07b77464-combined-ca-bundle\") pod \"keystone-cron-29332321-7k55z\" (UID: \"826b0aa9-5c21-4e34-ac07-66eb07b77464\") " pod="openstack/keystone-cron-29332321-7k55z" Oct 08 16:01:00 crc kubenswrapper[4624]: I1008 16:01:00.229440 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqc52\" (UniqueName: \"kubernetes.io/projected/826b0aa9-5c21-4e34-ac07-66eb07b77464-kube-api-access-jqc52\") pod \"keystone-cron-29332321-7k55z\" (UID: \"826b0aa9-5c21-4e34-ac07-66eb07b77464\") " pod="openstack/keystone-cron-29332321-7k55z" Oct 08 16:01:00 crc kubenswrapper[4624]: I1008 16:01:00.229496 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/826b0aa9-5c21-4e34-ac07-66eb07b77464-fernet-keys\") pod \"keystone-cron-29332321-7k55z\" (UID: \"826b0aa9-5c21-4e34-ac07-66eb07b77464\") " pod="openstack/keystone-cron-29332321-7k55z" Oct 08 16:01:00 crc kubenswrapper[4624]: I1008 16:01:00.229597 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826b0aa9-5c21-4e34-ac07-66eb07b77464-config-data\") pod \"keystone-cron-29332321-7k55z\" (UID: \"826b0aa9-5c21-4e34-ac07-66eb07b77464\") " pod="openstack/keystone-cron-29332321-7k55z" Oct 08 16:01:00 crc kubenswrapper[4624]: I1008 16:01:00.331370 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqc52\" (UniqueName: \"kubernetes.io/projected/826b0aa9-5c21-4e34-ac07-66eb07b77464-kube-api-access-jqc52\") pod \"keystone-cron-29332321-7k55z\" (UID: \"826b0aa9-5c21-4e34-ac07-66eb07b77464\") " pod="openstack/keystone-cron-29332321-7k55z" Oct 08 16:01:00 crc kubenswrapper[4624]: I1008 16:01:00.331449 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/826b0aa9-5c21-4e34-ac07-66eb07b77464-fernet-keys\") pod \"keystone-cron-29332321-7k55z\" (UID: \"826b0aa9-5c21-4e34-ac07-66eb07b77464\") " pod="openstack/keystone-cron-29332321-7k55z" Oct 08 16:01:00 crc kubenswrapper[4624]: I1008 16:01:00.331504 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826b0aa9-5c21-4e34-ac07-66eb07b77464-config-data\") pod \"keystone-cron-29332321-7k55z\" (UID: \"826b0aa9-5c21-4e34-ac07-66eb07b77464\") " pod="openstack/keystone-cron-29332321-7k55z" Oct 08 16:01:00 crc kubenswrapper[4624]: I1008 16:01:00.331570 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826b0aa9-5c21-4e34-ac07-66eb07b77464-combined-ca-bundle\") pod \"keystone-cron-29332321-7k55z\" (UID: \"826b0aa9-5c21-4e34-ac07-66eb07b77464\") " pod="openstack/keystone-cron-29332321-7k55z" Oct 08 16:01:00 crc kubenswrapper[4624]: I1008 16:01:00.338145 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/826b0aa9-5c21-4e34-ac07-66eb07b77464-fernet-keys\") pod \"keystone-cron-29332321-7k55z\" (UID: \"826b0aa9-5c21-4e34-ac07-66eb07b77464\") " pod="openstack/keystone-cron-29332321-7k55z" Oct 08 16:01:00 crc kubenswrapper[4624]: I1008 16:01:00.341802 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826b0aa9-5c21-4e34-ac07-66eb07b77464-config-data\") pod \"keystone-cron-29332321-7k55z\" (UID: \"826b0aa9-5c21-4e34-ac07-66eb07b77464\") " pod="openstack/keystone-cron-29332321-7k55z" Oct 08 16:01:00 crc kubenswrapper[4624]: I1008 16:01:00.344343 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826b0aa9-5c21-4e34-ac07-66eb07b77464-combined-ca-bundle\") pod \"keystone-cron-29332321-7k55z\" (UID: \"826b0aa9-5c21-4e34-ac07-66eb07b77464\") " pod="openstack/keystone-cron-29332321-7k55z" Oct 08 16:01:00 crc kubenswrapper[4624]: I1008 16:01:00.350486 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqc52\" (UniqueName: \"kubernetes.io/projected/826b0aa9-5c21-4e34-ac07-66eb07b77464-kube-api-access-jqc52\") pod \"keystone-cron-29332321-7k55z\" (UID: \"826b0aa9-5c21-4e34-ac07-66eb07b77464\") " pod="openstack/keystone-cron-29332321-7k55z" Oct 08 16:01:00 crc kubenswrapper[4624]: I1008 16:01:00.506072 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29332321-7k55z" Oct 08 16:01:01 crc kubenswrapper[4624]: I1008 16:01:01.000279 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29332321-7k55z"] Oct 08 16:01:02 crc kubenswrapper[4624]: I1008 16:01:02.027615 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29332321-7k55z" event={"ID":"826b0aa9-5c21-4e34-ac07-66eb07b77464","Type":"ContainerStarted","Data":"64393dc3c82f13179c6527291c4ba3199b8d95211d28ab67b019d111dc383b52"} Oct 08 16:01:02 crc kubenswrapper[4624]: I1008 16:01:02.028144 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29332321-7k55z" event={"ID":"826b0aa9-5c21-4e34-ac07-66eb07b77464","Type":"ContainerStarted","Data":"85e6d9fbd9ee1d1adb17c0da86f08c0e80d627c38474f84aba50f11154948fff"} Oct 08 16:01:02 crc kubenswrapper[4624]: I1008 16:01:02.056603 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29332321-7k55z" podStartSLOduration=2.056567555 podStartE2EDuration="2.056567555s" podCreationTimestamp="2025-10-08 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 16:01:02.043317195 +0000 UTC m=+5887.194252292" watchObservedRunningTime="2025-10-08 16:01:02.056567555 +0000 UTC m=+5887.207502632" Oct 08 16:01:04 crc kubenswrapper[4624]: I1008 16:01:04.466614 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 16:01:04 crc kubenswrapper[4624]: E1008 16:01:04.467404 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:01:08 crc kubenswrapper[4624]: I1008 16:01:08.084485 4624 generic.go:334] "Generic (PLEG): container finished" podID="826b0aa9-5c21-4e34-ac07-66eb07b77464" containerID="64393dc3c82f13179c6527291c4ba3199b8d95211d28ab67b019d111dc383b52" exitCode=0 Oct 08 16:01:08 crc kubenswrapper[4624]: I1008 16:01:08.084574 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29332321-7k55z" event={"ID":"826b0aa9-5c21-4e34-ac07-66eb07b77464","Type":"ContainerDied","Data":"64393dc3c82f13179c6527291c4ba3199b8d95211d28ab67b019d111dc383b52"} Oct 08 16:01:09 crc kubenswrapper[4624]: I1008 16:01:09.544457 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29332321-7k55z" Oct 08 16:01:09 crc kubenswrapper[4624]: I1008 16:01:09.638569 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqc52\" (UniqueName: \"kubernetes.io/projected/826b0aa9-5c21-4e34-ac07-66eb07b77464-kube-api-access-jqc52\") pod \"826b0aa9-5c21-4e34-ac07-66eb07b77464\" (UID: \"826b0aa9-5c21-4e34-ac07-66eb07b77464\") " Oct 08 16:01:09 crc kubenswrapper[4624]: I1008 16:01:09.638715 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826b0aa9-5c21-4e34-ac07-66eb07b77464-config-data\") pod \"826b0aa9-5c21-4e34-ac07-66eb07b77464\" (UID: \"826b0aa9-5c21-4e34-ac07-66eb07b77464\") " Oct 08 16:01:09 crc kubenswrapper[4624]: I1008 16:01:09.638772 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826b0aa9-5c21-4e34-ac07-66eb07b77464-combined-ca-bundle\") pod \"826b0aa9-5c21-4e34-ac07-66eb07b77464\" (UID: \"826b0aa9-5c21-4e34-ac07-66eb07b77464\") " Oct 08 16:01:09 crc kubenswrapper[4624]: I1008 16:01:09.638793 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/826b0aa9-5c21-4e34-ac07-66eb07b77464-fernet-keys\") pod \"826b0aa9-5c21-4e34-ac07-66eb07b77464\" (UID: \"826b0aa9-5c21-4e34-ac07-66eb07b77464\") " Oct 08 16:01:09 crc kubenswrapper[4624]: I1008 16:01:09.646532 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826b0aa9-5c21-4e34-ac07-66eb07b77464-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "826b0aa9-5c21-4e34-ac07-66eb07b77464" (UID: "826b0aa9-5c21-4e34-ac07-66eb07b77464"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 16:01:09 crc kubenswrapper[4624]: I1008 16:01:09.648851 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/826b0aa9-5c21-4e34-ac07-66eb07b77464-kube-api-access-jqc52" (OuterVolumeSpecName: "kube-api-access-jqc52") pod "826b0aa9-5c21-4e34-ac07-66eb07b77464" (UID: "826b0aa9-5c21-4e34-ac07-66eb07b77464"). InnerVolumeSpecName "kube-api-access-jqc52". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 16:01:09 crc kubenswrapper[4624]: I1008 16:01:09.672398 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826b0aa9-5c21-4e34-ac07-66eb07b77464-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "826b0aa9-5c21-4e34-ac07-66eb07b77464" (UID: "826b0aa9-5c21-4e34-ac07-66eb07b77464"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 16:01:09 crc kubenswrapper[4624]: I1008 16:01:09.705659 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826b0aa9-5c21-4e34-ac07-66eb07b77464-config-data" (OuterVolumeSpecName: "config-data") pod "826b0aa9-5c21-4e34-ac07-66eb07b77464" (UID: "826b0aa9-5c21-4e34-ac07-66eb07b77464"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 16:01:09 crc kubenswrapper[4624]: I1008 16:01:09.743197 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqc52\" (UniqueName: \"kubernetes.io/projected/826b0aa9-5c21-4e34-ac07-66eb07b77464-kube-api-access-jqc52\") on node \"crc\" DevicePath \"\"" Oct 08 16:01:09 crc kubenswrapper[4624]: I1008 16:01:09.743695 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826b0aa9-5c21-4e34-ac07-66eb07b77464-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 16:01:09 crc kubenswrapper[4624]: I1008 16:01:09.743782 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826b0aa9-5c21-4e34-ac07-66eb07b77464-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 16:01:09 crc kubenswrapper[4624]: I1008 16:01:09.743842 4624 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/826b0aa9-5c21-4e34-ac07-66eb07b77464-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 08 16:01:10 crc kubenswrapper[4624]: I1008 16:01:10.103733 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29332321-7k55z" event={"ID":"826b0aa9-5c21-4e34-ac07-66eb07b77464","Type":"ContainerDied","Data":"85e6d9fbd9ee1d1adb17c0da86f08c0e80d627c38474f84aba50f11154948fff"} Oct 08 16:01:10 crc kubenswrapper[4624]: I1008 16:01:10.104028 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85e6d9fbd9ee1d1adb17c0da86f08c0e80d627c38474f84aba50f11154948fff" Oct 08 16:01:10 crc kubenswrapper[4624]: I1008 16:01:10.104111 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29332321-7k55z" Oct 08 16:01:15 crc kubenswrapper[4624]: I1008 16:01:15.474330 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 16:01:15 crc kubenswrapper[4624]: E1008 16:01:15.475445 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:01:26 crc kubenswrapper[4624]: I1008 16:01:26.465683 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 16:01:26 crc kubenswrapper[4624]: E1008 16:01:26.466559 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:01:28 crc kubenswrapper[4624]: I1008 16:01:28.267347 4624 generic.go:334] "Generic (PLEG): container finished" podID="391ff9a0-631c-4520-a9f9-80fda37e32a1" containerID="e048aa89321a6e21c35dc2130d6e111e87a36e0b755e45d8501fe740b925ad39" exitCode=0 Oct 08 16:01:28 crc kubenswrapper[4624]: I1008 16:01:28.267419 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"391ff9a0-631c-4520-a9f9-80fda37e32a1","Type":"ContainerDied","Data":"e048aa89321a6e21c35dc2130d6e111e87a36e0b755e45d8501fe740b925ad39"} Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.589302 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.708269 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/391ff9a0-631c-4520-a9f9-80fda37e32a1-openstack-config-secret\") pod \"391ff9a0-631c-4520-a9f9-80fda37e32a1\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.708788 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"391ff9a0-631c-4520-a9f9-80fda37e32a1\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.708820 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xmm6\" (UniqueName: \"kubernetes.io/projected/391ff9a0-631c-4520-a9f9-80fda37e32a1-kube-api-access-7xmm6\") pod \"391ff9a0-631c-4520-a9f9-80fda37e32a1\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.708942 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/391ff9a0-631c-4520-a9f9-80fda37e32a1-config-data\") pod \"391ff9a0-631c-4520-a9f9-80fda37e32a1\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.709713 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/391ff9a0-631c-4520-a9f9-80fda37e32a1-test-operator-ephemeral-workdir\") pod \"391ff9a0-631c-4520-a9f9-80fda37e32a1\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.709740 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/391ff9a0-631c-4520-a9f9-80fda37e32a1-openstack-config\") pod \"391ff9a0-631c-4520-a9f9-80fda37e32a1\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.709836 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/391ff9a0-631c-4520-a9f9-80fda37e32a1-ssh-key\") pod \"391ff9a0-631c-4520-a9f9-80fda37e32a1\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.709930 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/391ff9a0-631c-4520-a9f9-80fda37e32a1-test-operator-ephemeral-temporary\") pod \"391ff9a0-631c-4520-a9f9-80fda37e32a1\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.709975 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/391ff9a0-631c-4520-a9f9-80fda37e32a1-ca-certs\") pod \"391ff9a0-631c-4520-a9f9-80fda37e32a1\" (UID: \"391ff9a0-631c-4520-a9f9-80fda37e32a1\") " Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.723929 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/391ff9a0-631c-4520-a9f9-80fda37e32a1-config-data" (OuterVolumeSpecName: 
"config-data") pod "391ff9a0-631c-4520-a9f9-80fda37e32a1" (UID: "391ff9a0-631c-4520-a9f9-80fda37e32a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.731855 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/391ff9a0-631c-4520-a9f9-80fda37e32a1-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "391ff9a0-631c-4520-a9f9-80fda37e32a1" (UID: "391ff9a0-631c-4520-a9f9-80fda37e32a1"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.737684 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/391ff9a0-631c-4520-a9f9-80fda37e32a1-kube-api-access-7xmm6" (OuterVolumeSpecName: "kube-api-access-7xmm6") pod "391ff9a0-631c-4520-a9f9-80fda37e32a1" (UID: "391ff9a0-631c-4520-a9f9-80fda37e32a1"). InnerVolumeSpecName "kube-api-access-7xmm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.738815 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "391ff9a0-631c-4520-a9f9-80fda37e32a1" (UID: "391ff9a0-631c-4520-a9f9-80fda37e32a1"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.770762 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/391ff9a0-631c-4520-a9f9-80fda37e32a1-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "391ff9a0-631c-4520-a9f9-80fda37e32a1" (UID: "391ff9a0-631c-4520-a9f9-80fda37e32a1"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.795695 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/391ff9a0-631c-4520-a9f9-80fda37e32a1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "391ff9a0-631c-4520-a9f9-80fda37e32a1" (UID: "391ff9a0-631c-4520-a9f9-80fda37e32a1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.801620 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/391ff9a0-631c-4520-a9f9-80fda37e32a1-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "391ff9a0-631c-4520-a9f9-80fda37e32a1" (UID: "391ff9a0-631c-4520-a9f9-80fda37e32a1"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.812518 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/391ff9a0-631c-4520-a9f9-80fda37e32a1-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.812551 4624 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/391ff9a0-631c-4520-a9f9-80fda37e32a1-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.812564 4624 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/391ff9a0-631c-4520-a9f9-80fda37e32a1-ca-certs\") on node \"crc\" DevicePath \"\"" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.812573 4624 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/391ff9a0-631c-4520-a9f9-80fda37e32a1-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.813431 4624 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.813456 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xmm6\" (UniqueName: \"kubernetes.io/projected/391ff9a0-631c-4520-a9f9-80fda37e32a1-kube-api-access-7xmm6\") on node \"crc\" DevicePath \"\"" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.813467 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/391ff9a0-631c-4520-a9f9-80fda37e32a1-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.821899 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/391ff9a0-631c-4520-a9f9-80fda37e32a1-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "391ff9a0-631c-4520-a9f9-80fda37e32a1" (UID: "391ff9a0-631c-4520-a9f9-80fda37e32a1"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.837734 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/391ff9a0-631c-4520-a9f9-80fda37e32a1-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "391ff9a0-631c-4520-a9f9-80fda37e32a1" (UID: "391ff9a0-631c-4520-a9f9-80fda37e32a1"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.845423 4624 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.867695 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Oct 08 16:01:30 crc kubenswrapper[4624]: E1008 16:01:30.868269 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="391ff9a0-631c-4520-a9f9-80fda37e32a1" containerName="tempest-tests-tempest-tests-runner" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.868293 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="391ff9a0-631c-4520-a9f9-80fda37e32a1" containerName="tempest-tests-tempest-tests-runner" Oct 08 16:01:30 crc kubenswrapper[4624]: E1008 16:01:30.868313 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="826b0aa9-5c21-4e34-ac07-66eb07b77464" containerName="keystone-cron" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.875671 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="826b0aa9-5c21-4e34-ac07-66eb07b77464" containerName="keystone-cron" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.876205 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="391ff9a0-631c-4520-a9f9-80fda37e32a1" containerName="tempest-tests-tempest-tests-runner" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.876237 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="826b0aa9-5c21-4e34-ac07-66eb07b77464" containerName="keystone-cron" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.878728 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.882068 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.882286 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s1" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.883681 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s1" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.918671 4624 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/391ff9a0-631c-4520-a9f9-80fda37e32a1-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.918709 4624 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/391ff9a0-631c-4520-a9f9-80fda37e32a1-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 08 16:01:30 crc kubenswrapper[4624]: I1008 16:01:30.918724 4624 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.020832 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fb994804-3cd4-4414-912f-a01613418132-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.021377 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.021502 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb994804-3cd4-4414-912f-a01613418132-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.021676 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fb994804-3cd4-4414-912f-a01613418132-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.021787 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb994804-3cd4-4414-912f-a01613418132-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: 
\"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.021903 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fb994804-3cd4-4414-912f-a01613418132-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.021995 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fb994804-3cd4-4414-912f-a01613418132-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.022062 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nddjc\" (UniqueName: \"kubernetes.io/projected/fb994804-3cd4-4414-912f-a01613418132-kube-api-access-nddjc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.022243 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fb994804-3cd4-4414-912f-a01613418132-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.123627 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fb994804-3cd4-4414-912f-a01613418132-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.124060 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fb994804-3cd4-4414-912f-a01613418132-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.124220 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.124364 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb994804-3cd4-4414-912f-a01613418132-config-data\") pod 
\"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.124113 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fb994804-3cd4-4414-912f-a01613418132-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.124558 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fb994804-3cd4-4414-912f-a01613418132-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.124692 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb994804-3cd4-4414-912f-a01613418132-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.124854 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fb994804-3cd4-4414-912f-a01613418132-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.125026 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fb994804-3cd4-4414-912f-a01613418132-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.125130 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nddjc\" (UniqueName: \"kubernetes.io/projected/fb994804-3cd4-4414-912f-a01613418132-kube-api-access-nddjc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.125406 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fb994804-3cd4-4414-912f-a01613418132-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.125679 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb994804-3cd4-4414-912f-a01613418132-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: 
\"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.125881 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.126694 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fb994804-3cd4-4414-912f-a01613418132-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.131818 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fb994804-3cd4-4414-912f-a01613418132-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.133354 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb994804-3cd4-4414-912f-a01613418132-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.134131 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fb994804-3cd4-4414-912f-a01613418132-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.143430 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nddjc\" (UniqueName: \"kubernetes.io/projected/fb994804-3cd4-4414-912f-a01613418132-kube-api-access-nddjc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.156264 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.232708 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.303526 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"391ff9a0-631c-4520-a9f9-80fda37e32a1","Type":"ContainerDied","Data":"f75e0444f74b3a6065a12886f962d9f85a45cc9e9f858e773a142837b2798721"} Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.303885 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f75e0444f74b3a6065a12886f962d9f85a45cc9e9f858e773a142837b2798721" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.303847 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Oct 08 16:01:31 crc kubenswrapper[4624]: I1008 16:01:31.789485 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Oct 08 16:01:32 crc kubenswrapper[4624]: I1008 16:01:32.316828 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"fb994804-3cd4-4414-912f-a01613418132","Type":"ContainerStarted","Data":"c18696c6e69001e8e22282075e5f5251caa264ea17a0b93fe742b339690e5067"} Oct 08 16:01:38 crc kubenswrapper[4624]: I1008 16:01:38.466434 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 16:01:38 crc kubenswrapper[4624]: E1008 16:01:38.467859 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:01:39 crc kubenswrapper[4624]: I1008 16:01:39.380984 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"fb994804-3cd4-4414-912f-a01613418132","Type":"ContainerStarted","Data":"821d5abcd4d9a184f0e4ff620434f8a4fae4f0dec44f78753f0464c037c60fbd"} Oct 08 16:01:39 crc kubenswrapper[4624]: I1008 16:01:39.407217 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" podStartSLOduration=9.40719666 podStartE2EDuration="9.40719666s" podCreationTimestamp="2025-10-08 16:01:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 16:01:39.397552232 +0000 UTC m=+5924.548487309" watchObservedRunningTime="2025-10-08 16:01:39.40719666 +0000 UTC m=+5924.558131737" Oct 08 16:01:44 crc kubenswrapper[4624]: I1008 16:01:44.298422 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7gb88"] Oct 08 16:01:44 crc kubenswrapper[4624]: I1008 16:01:44.300865 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7gb88" Oct 08 16:01:44 crc kubenswrapper[4624]: I1008 16:01:44.308021 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7gb88"] Oct 08 16:01:44 crc kubenswrapper[4624]: I1008 16:01:44.430391 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1eb7d25-b6cd-421c-bcfd-512714920e9c-utilities\") pod \"community-operators-7gb88\" (UID: \"d1eb7d25-b6cd-421c-bcfd-512714920e9c\") " pod="openshift-marketplace/community-operators-7gb88" Oct 08 16:01:44 crc kubenswrapper[4624]: I1008 16:01:44.430441 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1eb7d25-b6cd-421c-bcfd-512714920e9c-catalog-content\") pod \"community-operators-7gb88\" (UID: \"d1eb7d25-b6cd-421c-bcfd-512714920e9c\") " pod="openshift-marketplace/community-operators-7gb88" Oct 08 16:01:44 crc kubenswrapper[4624]: I1008 16:01:44.430502 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zq4g\" (UniqueName: \"kubernetes.io/projected/d1eb7d25-b6cd-421c-bcfd-512714920e9c-kube-api-access-8zq4g\") pod \"community-operators-7gb88\" (UID: \"d1eb7d25-b6cd-421c-bcfd-512714920e9c\") " pod="openshift-marketplace/community-operators-7gb88" Oct 08 16:01:44 crc kubenswrapper[4624]: I1008 16:01:44.532087 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1eb7d25-b6cd-421c-bcfd-512714920e9c-utilities\") pod \"community-operators-7gb88\" (UID: \"d1eb7d25-b6cd-421c-bcfd-512714920e9c\") " pod="openshift-marketplace/community-operators-7gb88" Oct 08 16:01:44 crc kubenswrapper[4624]: I1008 16:01:44.532141 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1eb7d25-b6cd-421c-bcfd-512714920e9c-catalog-content\") pod \"community-operators-7gb88\" (UID: \"d1eb7d25-b6cd-421c-bcfd-512714920e9c\") " pod="openshift-marketplace/community-operators-7gb88" Oct 08 16:01:44 crc kubenswrapper[4624]: I1008 16:01:44.532655 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1eb7d25-b6cd-421c-bcfd-512714920e9c-catalog-content\") pod \"community-operators-7gb88\" (UID: \"d1eb7d25-b6cd-421c-bcfd-512714920e9c\") " pod="openshift-marketplace/community-operators-7gb88" Oct 08 16:01:44 crc kubenswrapper[4624]: I1008 16:01:44.532698 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zq4g\" (UniqueName: \"kubernetes.io/projected/d1eb7d25-b6cd-421c-bcfd-512714920e9c-kube-api-access-8zq4g\") pod \"community-operators-7gb88\" (UID: \"d1eb7d25-b6cd-421c-bcfd-512714920e9c\") " pod="openshift-marketplace/community-operators-7gb88" Oct 08 16:01:44 crc kubenswrapper[4624]: I1008 16:01:44.532745 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1eb7d25-b6cd-421c-bcfd-512714920e9c-utilities\") pod \"community-operators-7gb88\" (UID: \"d1eb7d25-b6cd-421c-bcfd-512714920e9c\") " pod="openshift-marketplace/community-operators-7gb88" Oct 08 16:01:44 crc kubenswrapper[4624]: I1008 16:01:44.566491 4624 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8zq4g\" (UniqueName: \"kubernetes.io/projected/d1eb7d25-b6cd-421c-bcfd-512714920e9c-kube-api-access-8zq4g\") pod \"community-operators-7gb88\" (UID: \"d1eb7d25-b6cd-421c-bcfd-512714920e9c\") " pod="openshift-marketplace/community-operators-7gb88" Oct 08 16:01:44 crc kubenswrapper[4624]: I1008 16:01:44.621082 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7gb88" Oct 08 16:01:45 crc kubenswrapper[4624]: I1008 16:01:45.218805 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7gb88"] Oct 08 16:01:45 crc kubenswrapper[4624]: I1008 16:01:45.437020 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gb88" event={"ID":"d1eb7d25-b6cd-421c-bcfd-512714920e9c","Type":"ContainerStarted","Data":"cd3955290e8242a8e37e76be763e2dafda3906042ba42520a594a3fe6093471b"} Oct 08 16:01:46 crc kubenswrapper[4624]: I1008 16:01:46.448424 4624 generic.go:334] "Generic (PLEG): container finished" podID="d1eb7d25-b6cd-421c-bcfd-512714920e9c" containerID="48f2426531794c9523e971980b73af5b0a657fa9183791db342992e1a6e5e357" exitCode=0 Oct 08 16:01:46 crc kubenswrapper[4624]: I1008 16:01:46.448760 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gb88" event={"ID":"d1eb7d25-b6cd-421c-bcfd-512714920e9c","Type":"ContainerDied","Data":"48f2426531794c9523e971980b73af5b0a657fa9183791db342992e1a6e5e357"} Oct 08 16:01:48 crc kubenswrapper[4624]: I1008 16:01:48.485337 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gb88" event={"ID":"d1eb7d25-b6cd-421c-bcfd-512714920e9c","Type":"ContainerStarted","Data":"43e9228a1ca999c39bc08da0c6477912497b2c695dc64534ebf90eb7628ac183"} Oct 08 16:01:49 crc kubenswrapper[4624]: I1008 16:01:49.495452 4624 generic.go:334] "Generic (PLEG): container finished" podID="d1eb7d25-b6cd-421c-bcfd-512714920e9c" containerID="43e9228a1ca999c39bc08da0c6477912497b2c695dc64534ebf90eb7628ac183" exitCode=0 Oct 08 16:01:49 crc kubenswrapper[4624]: I1008 16:01:49.495522 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gb88" event={"ID":"d1eb7d25-b6cd-421c-bcfd-512714920e9c","Type":"ContainerDied","Data":"43e9228a1ca999c39bc08da0c6477912497b2c695dc64534ebf90eb7628ac183"} Oct 08 16:01:51 crc kubenswrapper[4624]: I1008 16:01:51.524032 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gb88" event={"ID":"d1eb7d25-b6cd-421c-bcfd-512714920e9c","Type":"ContainerStarted","Data":"cad21824350f7eabdccef04e782cf5e23a303268beee1d7e1a3c82264c60e943"} Oct 08 16:01:51 crc kubenswrapper[4624]: I1008 16:01:51.558365 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7gb88" podStartSLOduration=3.315091167 podStartE2EDuration="7.558336844s" podCreationTimestamp="2025-10-08 16:01:44 +0000 UTC" firstStartedPulling="2025-10-08 16:01:46.450938888 +0000 UTC m=+5931.601873965" lastFinishedPulling="2025-10-08 16:01:50.694184565 +0000 UTC m=+5935.845119642" observedRunningTime="2025-10-08 16:01:51.553223813 +0000 UTC m=+5936.704158890" watchObservedRunningTime="2025-10-08 16:01:51.558336844 +0000 UTC m=+5936.709271921" Oct 08 16:01:52 crc kubenswrapper[4624]: I1008 16:01:52.468063 4624 scope.go:117] "RemoveContainer" 
containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 16:01:52 crc kubenswrapper[4624]: E1008 16:01:52.468363 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:01:54 crc kubenswrapper[4624]: I1008 16:01:54.622210 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7gb88" Oct 08 16:01:54 crc kubenswrapper[4624]: I1008 16:01:54.623220 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7gb88" Oct 08 16:01:55 crc kubenswrapper[4624]: I1008 16:01:55.677735 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-7gb88" podUID="d1eb7d25-b6cd-421c-bcfd-512714920e9c" containerName="registry-server" probeResult="failure" output=< Oct 08 16:01:55 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 16:01:55 crc kubenswrapper[4624]: > Oct 08 16:02:04 crc kubenswrapper[4624]: I1008 16:02:04.675494 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7gb88" Oct 08 16:02:04 crc kubenswrapper[4624]: I1008 16:02:04.752232 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7gb88" Oct 08 16:02:04 crc kubenswrapper[4624]: I1008 16:02:04.929078 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7gb88"] Oct 08 16:02:05 crc kubenswrapper[4624]: I1008 16:02:05.472945 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 16:02:05 crc kubenswrapper[4624]: E1008 16:02:05.473378 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:02:06 crc kubenswrapper[4624]: I1008 16:02:06.699697 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7gb88" podUID="d1eb7d25-b6cd-421c-bcfd-512714920e9c" containerName="registry-server" containerID="cri-o://cad21824350f7eabdccef04e782cf5e23a303268beee1d7e1a3c82264c60e943" gracePeriod=2 Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.313242 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7gb88" Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.403794 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1eb7d25-b6cd-421c-bcfd-512714920e9c-utilities\") pod \"d1eb7d25-b6cd-421c-bcfd-512714920e9c\" (UID: \"d1eb7d25-b6cd-421c-bcfd-512714920e9c\") " Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.404294 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1eb7d25-b6cd-421c-bcfd-512714920e9c-catalog-content\") pod \"d1eb7d25-b6cd-421c-bcfd-512714920e9c\" (UID: \"d1eb7d25-b6cd-421c-bcfd-512714920e9c\") " Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.404344 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zq4g\" (UniqueName: \"kubernetes.io/projected/d1eb7d25-b6cd-421c-bcfd-512714920e9c-kube-api-access-8zq4g\") pod \"d1eb7d25-b6cd-421c-bcfd-512714920e9c\" (UID: \"d1eb7d25-b6cd-421c-bcfd-512714920e9c\") " Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.405220 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1eb7d25-b6cd-421c-bcfd-512714920e9c-utilities" (OuterVolumeSpecName: "utilities") pod "d1eb7d25-b6cd-421c-bcfd-512714920e9c" (UID: "d1eb7d25-b6cd-421c-bcfd-512714920e9c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.405924 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1eb7d25-b6cd-421c-bcfd-512714920e9c-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.421946 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1eb7d25-b6cd-421c-bcfd-512714920e9c-kube-api-access-8zq4g" (OuterVolumeSpecName: "kube-api-access-8zq4g") pod "d1eb7d25-b6cd-421c-bcfd-512714920e9c" (UID: "d1eb7d25-b6cd-421c-bcfd-512714920e9c"). InnerVolumeSpecName "kube-api-access-8zq4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.457468 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1eb7d25-b6cd-421c-bcfd-512714920e9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1eb7d25-b6cd-421c-bcfd-512714920e9c" (UID: "d1eb7d25-b6cd-421c-bcfd-512714920e9c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.508982 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1eb7d25-b6cd-421c-bcfd-512714920e9c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.509021 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zq4g\" (UniqueName: \"kubernetes.io/projected/d1eb7d25-b6cd-421c-bcfd-512714920e9c-kube-api-access-8zq4g\") on node \"crc\" DevicePath \"\"" Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.716973 4624 generic.go:334] "Generic (PLEG): container finished" podID="d1eb7d25-b6cd-421c-bcfd-512714920e9c" containerID="cad21824350f7eabdccef04e782cf5e23a303268beee1d7e1a3c82264c60e943" exitCode=0 Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.717056 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gb88" event={"ID":"d1eb7d25-b6cd-421c-bcfd-512714920e9c","Type":"ContainerDied","Data":"cad21824350f7eabdccef04e782cf5e23a303268beee1d7e1a3c82264c60e943"} Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.717109 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gb88" event={"ID":"d1eb7d25-b6cd-421c-bcfd-512714920e9c","Type":"ContainerDied","Data":"cd3955290e8242a8e37e76be763e2dafda3906042ba42520a594a3fe6093471b"} Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.717142 4624 scope.go:117] "RemoveContainer" containerID="cad21824350f7eabdccef04e782cf5e23a303268beee1d7e1a3c82264c60e943" Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.717426 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7gb88" Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.762815 4624 scope.go:117] "RemoveContainer" containerID="43e9228a1ca999c39bc08da0c6477912497b2c695dc64534ebf90eb7628ac183" Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.764039 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7gb88"] Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.774991 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7gb88"] Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.806076 4624 scope.go:117] "RemoveContainer" containerID="48f2426531794c9523e971980b73af5b0a657fa9183791db342992e1a6e5e357" Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.849800 4624 scope.go:117] "RemoveContainer" containerID="cad21824350f7eabdccef04e782cf5e23a303268beee1d7e1a3c82264c60e943" Oct 08 16:02:07 crc kubenswrapper[4624]: E1008 16:02:07.850373 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cad21824350f7eabdccef04e782cf5e23a303268beee1d7e1a3c82264c60e943\": container with ID starting with cad21824350f7eabdccef04e782cf5e23a303268beee1d7e1a3c82264c60e943 not found: ID does not exist" containerID="cad21824350f7eabdccef04e782cf5e23a303268beee1d7e1a3c82264c60e943" Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.850452 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cad21824350f7eabdccef04e782cf5e23a303268beee1d7e1a3c82264c60e943"} err="failed to get container status \"cad21824350f7eabdccef04e782cf5e23a303268beee1d7e1a3c82264c60e943\": rpc error: code = NotFound desc = could not find container \"cad21824350f7eabdccef04e782cf5e23a303268beee1d7e1a3c82264c60e943\": container with ID starting with cad21824350f7eabdccef04e782cf5e23a303268beee1d7e1a3c82264c60e943 not found: ID does not exist" Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.850501 4624 scope.go:117] "RemoveContainer" containerID="43e9228a1ca999c39bc08da0c6477912497b2c695dc64534ebf90eb7628ac183" Oct 08 16:02:07 crc kubenswrapper[4624]: E1008 16:02:07.851015 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43e9228a1ca999c39bc08da0c6477912497b2c695dc64534ebf90eb7628ac183\": container with ID starting with 43e9228a1ca999c39bc08da0c6477912497b2c695dc64534ebf90eb7628ac183 not found: ID does not exist" containerID="43e9228a1ca999c39bc08da0c6477912497b2c695dc64534ebf90eb7628ac183" Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.851056 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43e9228a1ca999c39bc08da0c6477912497b2c695dc64534ebf90eb7628ac183"} err="failed to get container status \"43e9228a1ca999c39bc08da0c6477912497b2c695dc64534ebf90eb7628ac183\": rpc error: code = NotFound desc = could not find container \"43e9228a1ca999c39bc08da0c6477912497b2c695dc64534ebf90eb7628ac183\": container with ID starting with 43e9228a1ca999c39bc08da0c6477912497b2c695dc64534ebf90eb7628ac183 not found: ID does not exist" Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.851086 4624 scope.go:117] "RemoveContainer" containerID="48f2426531794c9523e971980b73af5b0a657fa9183791db342992e1a6e5e357" Oct 08 16:02:07 crc kubenswrapper[4624]: E1008 16:02:07.851570 4624 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"48f2426531794c9523e971980b73af5b0a657fa9183791db342992e1a6e5e357\": container with ID starting with 48f2426531794c9523e971980b73af5b0a657fa9183791db342992e1a6e5e357 not found: ID does not exist" containerID="48f2426531794c9523e971980b73af5b0a657fa9183791db342992e1a6e5e357" Oct 08 16:02:07 crc kubenswrapper[4624]: I1008 16:02:07.851594 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48f2426531794c9523e971980b73af5b0a657fa9183791db342992e1a6e5e357"} err="failed to get container status \"48f2426531794c9523e971980b73af5b0a657fa9183791db342992e1a6e5e357\": rpc error: code = NotFound desc = could not find container \"48f2426531794c9523e971980b73af5b0a657fa9183791db342992e1a6e5e357\": container with ID starting with 48f2426531794c9523e971980b73af5b0a657fa9183791db342992e1a6e5e357 not found: ID does not exist" Oct 08 16:02:09 crc kubenswrapper[4624]: I1008 16:02:09.478070 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1eb7d25-b6cd-421c-bcfd-512714920e9c" path="/var/lib/kubelet/pods/d1eb7d25-b6cd-421c-bcfd-512714920e9c/volumes" Oct 08 16:02:17 crc kubenswrapper[4624]: I1008 16:02:17.466538 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 16:02:17 crc kubenswrapper[4624]: E1008 16:02:17.468669 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:02:30 crc kubenswrapper[4624]: I1008 16:02:30.466066 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 16:02:30 crc kubenswrapper[4624]: E1008 16:02:30.466778 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:02:41 crc kubenswrapper[4624]: I1008 16:02:41.466258 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 16:02:41 crc kubenswrapper[4624]: E1008 16:02:41.467374 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.564115 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d887b9d4f-whfwt"] Oct 08 16:02:43 crc kubenswrapper[4624]: E1008 16:02:43.564842 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1eb7d25-b6cd-421c-bcfd-512714920e9c" 
containerName="extract-utilities" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.564857 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1eb7d25-b6cd-421c-bcfd-512714920e9c" containerName="extract-utilities" Oct 08 16:02:43 crc kubenswrapper[4624]: E1008 16:02:43.564864 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1eb7d25-b6cd-421c-bcfd-512714920e9c" containerName="extract-content" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.564870 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1eb7d25-b6cd-421c-bcfd-512714920e9c" containerName="extract-content" Oct 08 16:02:43 crc kubenswrapper[4624]: E1008 16:02:43.564899 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1eb7d25-b6cd-421c-bcfd-512714920e9c" containerName="registry-server" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.564905 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1eb7d25-b6cd-421c-bcfd-512714920e9c" containerName="registry-server" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.565102 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1eb7d25-b6cd-421c-bcfd-512714920e9c" containerName="registry-server" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.566090 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.585161 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d887b9d4f-whfwt"] Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.645074 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-combined-ca-bundle\") pod \"neutron-d887b9d4f-whfwt\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.645145 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-config\") pod \"neutron-d887b9d4f-whfwt\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.645209 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-internal-tls-certs\") pod \"neutron-d887b9d4f-whfwt\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.645336 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-ovndb-tls-certs\") pod \"neutron-d887b9d4f-whfwt\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.645535 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-httpd-config\") pod \"neutron-d887b9d4f-whfwt\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:43 
crc kubenswrapper[4624]: I1008 16:02:43.645773 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-public-tls-certs\") pod \"neutron-d887b9d4f-whfwt\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.645808 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2mnf\" (UniqueName: \"kubernetes.io/projected/94badb37-ba00-48fa-a728-26d834b5409c-kube-api-access-m2mnf\") pod \"neutron-d887b9d4f-whfwt\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.747120 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-public-tls-certs\") pod \"neutron-d887b9d4f-whfwt\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.747161 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2mnf\" (UniqueName: \"kubernetes.io/projected/94badb37-ba00-48fa-a728-26d834b5409c-kube-api-access-m2mnf\") pod \"neutron-d887b9d4f-whfwt\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.747281 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-combined-ca-bundle\") pod \"neutron-d887b9d4f-whfwt\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.747314 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-config\") pod \"neutron-d887b9d4f-whfwt\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.747359 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-internal-tls-certs\") pod \"neutron-d887b9d4f-whfwt\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.747394 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-ovndb-tls-certs\") pod \"neutron-d887b9d4f-whfwt\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.747425 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-httpd-config\") pod \"neutron-d887b9d4f-whfwt\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.756947 4624 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-httpd-config\") pod \"neutron-d887b9d4f-whfwt\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.757018 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-combined-ca-bundle\") pod \"neutron-d887b9d4f-whfwt\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.757229 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-internal-tls-certs\") pod \"neutron-d887b9d4f-whfwt\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.761304 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-ovndb-tls-certs\") pod \"neutron-d887b9d4f-whfwt\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.764509 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-config\") pod \"neutron-d887b9d4f-whfwt\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.773627 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2mnf\" (UniqueName: \"kubernetes.io/projected/94badb37-ba00-48fa-a728-26d834b5409c-kube-api-access-m2mnf\") pod \"neutron-d887b9d4f-whfwt\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.783408 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-public-tls-certs\") pod \"neutron-d887b9d4f-whfwt\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:43 crc kubenswrapper[4624]: I1008 16:02:43.923177 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:44 crc kubenswrapper[4624]: I1008 16:02:44.730102 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d887b9d4f-whfwt"] Oct 08 16:02:45 crc kubenswrapper[4624]: I1008 16:02:45.088152 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d887b9d4f-whfwt" event={"ID":"94badb37-ba00-48fa-a728-26d834b5409c","Type":"ContainerStarted","Data":"c508d08262a4c5eff925233fbeff6b8b13fb443481f2708592247698b56afd70"} Oct 08 16:02:45 crc kubenswrapper[4624]: I1008 16:02:45.088613 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d887b9d4f-whfwt" event={"ID":"94badb37-ba00-48fa-a728-26d834b5409c","Type":"ContainerStarted","Data":"4331dba694671eb7b541c261c4f6e28127bd08b530ff4cffdf901a4707d05421"} Oct 08 16:02:46 crc kubenswrapper[4624]: I1008 16:02:46.103545 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d887b9d4f-whfwt" event={"ID":"94badb37-ba00-48fa-a728-26d834b5409c","Type":"ContainerStarted","Data":"d27619c3fec1d7e9dd07d966cc9f81b5d6f7e38ac246e8570bd520a498d1b350"} Oct 08 16:02:46 crc kubenswrapper[4624]: I1008 16:02:46.104858 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:02:46 crc kubenswrapper[4624]: I1008 16:02:46.131606 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-d887b9d4f-whfwt" podStartSLOduration=3.131577597 podStartE2EDuration="3.131577597s" podCreationTimestamp="2025-10-08 16:02:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 16:02:46.127274687 +0000 UTC m=+5991.278209804" watchObservedRunningTime="2025-10-08 16:02:46.131577597 +0000 UTC m=+5991.282512674" Oct 08 16:02:55 crc kubenswrapper[4624]: I1008 16:02:55.473836 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 16:02:55 crc kubenswrapper[4624]: E1008 16:02:55.474619 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:03:08 crc kubenswrapper[4624]: I1008 16:03:08.465482 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 16:03:08 crc kubenswrapper[4624]: E1008 16:03:08.466411 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:03:13 crc kubenswrapper[4624]: I1008 16:03:13.959340 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:03:14 crc kubenswrapper[4624]: I1008 16:03:14.064590 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-7bb5f9bf4f-nll8n"] Oct 08 16:03:14 crc kubenswrapper[4624]: I1008 16:03:14.065103 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7bb5f9bf4f-nll8n" podUID="a8614612-c6f9-452f-a9b7-a47bce32ac81" containerName="neutron-api" containerID="cri-o://578bf5df14c17d5ac2f6597c76c5f129fc5c7e23bee9b01130e478e8bfa328c3" gracePeriod=30 Oct 08 16:03:14 crc kubenswrapper[4624]: I1008 16:03:14.065609 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7bb5f9bf4f-nll8n" podUID="a8614612-c6f9-452f-a9b7-a47bce32ac81" containerName="neutron-httpd" containerID="cri-o://a8ffbbec137d44b9295950520a3d3047cea34ef6e3443a42d6f4e26ad25f8de4" gracePeriod=30 Oct 08 16:03:14 crc kubenswrapper[4624]: I1008 16:03:14.345219 4624 generic.go:334] "Generic (PLEG): container finished" podID="a8614612-c6f9-452f-a9b7-a47bce32ac81" containerID="a8ffbbec137d44b9295950520a3d3047cea34ef6e3443a42d6f4e26ad25f8de4" exitCode=0 Oct 08 16:03:14 crc kubenswrapper[4624]: I1008 16:03:14.345277 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bb5f9bf4f-nll8n" event={"ID":"a8614612-c6f9-452f-a9b7-a47bce32ac81","Type":"ContainerDied","Data":"a8ffbbec137d44b9295950520a3d3047cea34ef6e3443a42d6f4e26ad25f8de4"} Oct 08 16:03:15 crc kubenswrapper[4624]: I1008 16:03:15.837900 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7bb5f9bf4f-nll8n" podUID="a8614612-c6f9-452f-a9b7-a47bce32ac81" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.159:9696/\": dial tcp 10.217.0.159:9696: connect: connection refused" Oct 08 16:03:19 crc kubenswrapper[4624]: I1008 16:03:19.475386 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 16:03:19 crc kubenswrapper[4624]: E1008 16:03:19.476937 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:03:27 crc kubenswrapper[4624]: I1008 16:03:27.496966 4624 generic.go:334] "Generic (PLEG): container finished" podID="a8614612-c6f9-452f-a9b7-a47bce32ac81" containerID="578bf5df14c17d5ac2f6597c76c5f129fc5c7e23bee9b01130e478e8bfa328c3" exitCode=0 Oct 08 16:03:27 crc kubenswrapper[4624]: I1008 16:03:27.497534 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bb5f9bf4f-nll8n" event={"ID":"a8614612-c6f9-452f-a9b7-a47bce32ac81","Type":"ContainerDied","Data":"578bf5df14c17d5ac2f6597c76c5f129fc5c7e23bee9b01130e478e8bfa328c3"} Oct 08 16:03:29 crc kubenswrapper[4624]: I1008 16:03:29.688013 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 16:03:29 crc kubenswrapper[4624]: I1008 16:03:29.826492 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-ovndb-tls-certs\") pod \"a8614612-c6f9-452f-a9b7-a47bce32ac81\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " Oct 08 16:03:29 crc kubenswrapper[4624]: I1008 16:03:29.826581 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-public-tls-certs\") pod \"a8614612-c6f9-452f-a9b7-a47bce32ac81\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " Oct 08 16:03:29 crc kubenswrapper[4624]: I1008 16:03:29.826662 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-combined-ca-bundle\") pod \"a8614612-c6f9-452f-a9b7-a47bce32ac81\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " Oct 08 16:03:29 crc kubenswrapper[4624]: I1008 16:03:29.826718 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-internal-tls-certs\") pod \"a8614612-c6f9-452f-a9b7-a47bce32ac81\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " Oct 08 16:03:29 crc kubenswrapper[4624]: I1008 16:03:29.826788 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-httpd-config\") pod \"a8614612-c6f9-452f-a9b7-a47bce32ac81\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " Oct 08 16:03:29 crc kubenswrapper[4624]: I1008 16:03:29.826813 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-config\") pod \"a8614612-c6f9-452f-a9b7-a47bce32ac81\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " Oct 08 16:03:29 crc kubenswrapper[4624]: I1008 16:03:29.828622 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtw8c\" (UniqueName: \"kubernetes.io/projected/a8614612-c6f9-452f-a9b7-a47bce32ac81-kube-api-access-mtw8c\") pod \"a8614612-c6f9-452f-a9b7-a47bce32ac81\" (UID: \"a8614612-c6f9-452f-a9b7-a47bce32ac81\") " Oct 08 16:03:29 crc kubenswrapper[4624]: I1008 16:03:29.936796 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8614612-c6f9-452f-a9b7-a47bce32ac81-kube-api-access-mtw8c" (OuterVolumeSpecName: "kube-api-access-mtw8c") pod "a8614612-c6f9-452f-a9b7-a47bce32ac81" (UID: "a8614612-c6f9-452f-a9b7-a47bce32ac81"). InnerVolumeSpecName "kube-api-access-mtw8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 16:03:29 crc kubenswrapper[4624]: I1008 16:03:29.944998 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a8614612-c6f9-452f-a9b7-a47bce32ac81" (UID: "a8614612-c6f9-452f-a9b7-a47bce32ac81"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 16:03:30 crc kubenswrapper[4624]: I1008 16:03:30.006824 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-config" (OuterVolumeSpecName: "config") pod "a8614612-c6f9-452f-a9b7-a47bce32ac81" (UID: "a8614612-c6f9-452f-a9b7-a47bce32ac81"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 16:03:30 crc kubenswrapper[4624]: I1008 16:03:30.019586 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8614612-c6f9-452f-a9b7-a47bce32ac81" (UID: "a8614612-c6f9-452f-a9b7-a47bce32ac81"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 16:03:30 crc kubenswrapper[4624]: I1008 16:03:30.022100 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a8614612-c6f9-452f-a9b7-a47bce32ac81" (UID: "a8614612-c6f9-452f-a9b7-a47bce32ac81"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 16:03:30 crc kubenswrapper[4624]: I1008 16:03:30.035083 4624 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 16:03:30 crc kubenswrapper[4624]: I1008 16:03:30.035115 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 16:03:30 crc kubenswrapper[4624]: I1008 16:03:30.035125 4624 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 08 16:03:30 crc kubenswrapper[4624]: I1008 16:03:30.035137 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-config\") on node \"crc\" DevicePath \"\"" Oct 08 16:03:30 crc kubenswrapper[4624]: I1008 16:03:30.035148 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtw8c\" (UniqueName: \"kubernetes.io/projected/a8614612-c6f9-452f-a9b7-a47bce32ac81-kube-api-access-mtw8c\") on node \"crc\" DevicePath \"\"" Oct 08 16:03:30 crc kubenswrapper[4624]: I1008 16:03:30.046952 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a8614612-c6f9-452f-a9b7-a47bce32ac81" (UID: "a8614612-c6f9-452f-a9b7-a47bce32ac81"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 16:03:30 crc kubenswrapper[4624]: I1008 16:03:30.084450 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a8614612-c6f9-452f-a9b7-a47bce32ac81" (UID: "a8614612-c6f9-452f-a9b7-a47bce32ac81"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 16:03:30 crc kubenswrapper[4624]: I1008 16:03:30.136452 4624 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 16:03:30 crc kubenswrapper[4624]: I1008 16:03:30.136491 4624 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8614612-c6f9-452f-a9b7-a47bce32ac81-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 16:03:30 crc kubenswrapper[4624]: I1008 16:03:30.536771 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bb5f9bf4f-nll8n" event={"ID":"a8614612-c6f9-452f-a9b7-a47bce32ac81","Type":"ContainerDied","Data":"74481dbc2dc69fb1c27a81597ea1faf84dc0d0a987874115e56172cc7348077d"} Oct 08 16:03:30 crc kubenswrapper[4624]: I1008 16:03:30.536859 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7bb5f9bf4f-nll8n" Oct 08 16:03:30 crc kubenswrapper[4624]: I1008 16:03:30.537387 4624 scope.go:117] "RemoveContainer" containerID="a8ffbbec137d44b9295950520a3d3047cea34ef6e3443a42d6f4e26ad25f8de4" Oct 08 16:03:30 crc kubenswrapper[4624]: I1008 16:03:30.588354 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7bb5f9bf4f-nll8n"] Oct 08 16:03:30 crc kubenswrapper[4624]: I1008 16:03:30.599279 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7bb5f9bf4f-nll8n"] Oct 08 16:03:30 crc kubenswrapper[4624]: I1008 16:03:30.631842 4624 scope.go:117] "RemoveContainer" containerID="578bf5df14c17d5ac2f6597c76c5f129fc5c7e23bee9b01130e478e8bfa328c3" Oct 08 16:03:31 crc kubenswrapper[4624]: I1008 16:03:31.480133 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8614612-c6f9-452f-a9b7-a47bce32ac81" path="/var/lib/kubelet/pods/a8614612-c6f9-452f-a9b7-a47bce32ac81/volumes" Oct 08 16:03:34 crc kubenswrapper[4624]: I1008 16:03:34.465625 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 16:03:35 crc kubenswrapper[4624]: I1008 16:03:35.599019 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"4dd7282757c330a044cf36d0a5cacc7696b29b92fdb83bd080e2547865f11d79"} Oct 08 16:04:35 crc kubenswrapper[4624]: I1008 16:04:35.940918 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xkgl8"] Oct 08 16:04:35 crc kubenswrapper[4624]: E1008 16:04:35.942515 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8614612-c6f9-452f-a9b7-a47bce32ac81" containerName="neutron-api" Oct 08 16:04:35 crc kubenswrapper[4624]: I1008 16:04:35.942536 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8614612-c6f9-452f-a9b7-a47bce32ac81" containerName="neutron-api" Oct 08 16:04:35 crc kubenswrapper[4624]: E1008 16:04:35.942563 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8614612-c6f9-452f-a9b7-a47bce32ac81" containerName="neutron-httpd" Oct 08 16:04:35 crc kubenswrapper[4624]: I1008 16:04:35.942572 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8614612-c6f9-452f-a9b7-a47bce32ac81" containerName="neutron-httpd" Oct 08 16:04:35 crc kubenswrapper[4624]: I1008 16:04:35.942803 4624 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a8614612-c6f9-452f-a9b7-a47bce32ac81" containerName="neutron-api" Oct 08 16:04:35 crc kubenswrapper[4624]: I1008 16:04:35.942839 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8614612-c6f9-452f-a9b7-a47bce32ac81" containerName="neutron-httpd" Oct 08 16:04:35 crc kubenswrapper[4624]: I1008 16:04:35.945661 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xkgl8" Oct 08 16:04:35 crc kubenswrapper[4624]: I1008 16:04:35.956187 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xkgl8"] Oct 08 16:04:35 crc kubenswrapper[4624]: I1008 16:04:35.998154 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5sx6\" (UniqueName: \"kubernetes.io/projected/4ca30cf8-9906-46b4-90e7-32a8624c26b1-kube-api-access-j5sx6\") pod \"redhat-operators-xkgl8\" (UID: \"4ca30cf8-9906-46b4-90e7-32a8624c26b1\") " pod="openshift-marketplace/redhat-operators-xkgl8" Oct 08 16:04:35 crc kubenswrapper[4624]: I1008 16:04:35.998318 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ca30cf8-9906-46b4-90e7-32a8624c26b1-catalog-content\") pod \"redhat-operators-xkgl8\" (UID: \"4ca30cf8-9906-46b4-90e7-32a8624c26b1\") " pod="openshift-marketplace/redhat-operators-xkgl8" Oct 08 16:04:35 crc kubenswrapper[4624]: I1008 16:04:35.998375 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ca30cf8-9906-46b4-90e7-32a8624c26b1-utilities\") pod \"redhat-operators-xkgl8\" (UID: \"4ca30cf8-9906-46b4-90e7-32a8624c26b1\") " pod="openshift-marketplace/redhat-operators-xkgl8" Oct 08 16:04:36 crc kubenswrapper[4624]: I1008 16:04:36.100296 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5sx6\" (UniqueName: \"kubernetes.io/projected/4ca30cf8-9906-46b4-90e7-32a8624c26b1-kube-api-access-j5sx6\") pod \"redhat-operators-xkgl8\" (UID: \"4ca30cf8-9906-46b4-90e7-32a8624c26b1\") " pod="openshift-marketplace/redhat-operators-xkgl8" Oct 08 16:04:36 crc kubenswrapper[4624]: I1008 16:04:36.100450 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ca30cf8-9906-46b4-90e7-32a8624c26b1-catalog-content\") pod \"redhat-operators-xkgl8\" (UID: \"4ca30cf8-9906-46b4-90e7-32a8624c26b1\") " pod="openshift-marketplace/redhat-operators-xkgl8" Oct 08 16:04:36 crc kubenswrapper[4624]: I1008 16:04:36.100522 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ca30cf8-9906-46b4-90e7-32a8624c26b1-utilities\") pod \"redhat-operators-xkgl8\" (UID: \"4ca30cf8-9906-46b4-90e7-32a8624c26b1\") " pod="openshift-marketplace/redhat-operators-xkgl8" Oct 08 16:04:36 crc kubenswrapper[4624]: I1008 16:04:36.103588 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ca30cf8-9906-46b4-90e7-32a8624c26b1-utilities\") pod \"redhat-operators-xkgl8\" (UID: \"4ca30cf8-9906-46b4-90e7-32a8624c26b1\") " pod="openshift-marketplace/redhat-operators-xkgl8" Oct 08 16:04:36 crc kubenswrapper[4624]: I1008 16:04:36.103828 4624 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ca30cf8-9906-46b4-90e7-32a8624c26b1-catalog-content\") pod \"redhat-operators-xkgl8\" (UID: \"4ca30cf8-9906-46b4-90e7-32a8624c26b1\") " pod="openshift-marketplace/redhat-operators-xkgl8" Oct 08 16:04:36 crc kubenswrapper[4624]: I1008 16:04:36.128250 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5sx6\" (UniqueName: \"kubernetes.io/projected/4ca30cf8-9906-46b4-90e7-32a8624c26b1-kube-api-access-j5sx6\") pod \"redhat-operators-xkgl8\" (UID: \"4ca30cf8-9906-46b4-90e7-32a8624c26b1\") " pod="openshift-marketplace/redhat-operators-xkgl8" Oct 08 16:04:36 crc kubenswrapper[4624]: I1008 16:04:36.270204 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xkgl8" Oct 08 16:04:37 crc kubenswrapper[4624]: I1008 16:04:37.034103 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xkgl8"] Oct 08 16:04:37 crc kubenswrapper[4624]: I1008 16:04:37.322193 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkgl8" event={"ID":"4ca30cf8-9906-46b4-90e7-32a8624c26b1","Type":"ContainerStarted","Data":"6054ffa4176b98b18988481def35115ac7e58a1f2032b7af673bb78e48064036"} Oct 08 16:04:38 crc kubenswrapper[4624]: I1008 16:04:38.336750 4624 generic.go:334] "Generic (PLEG): container finished" podID="4ca30cf8-9906-46b4-90e7-32a8624c26b1" containerID="cfff15c696c260f8a2cb9b952bbfe005af7452ce94a3fb2e7b6a49603cfcaa6e" exitCode=0 Oct 08 16:04:38 crc kubenswrapper[4624]: I1008 16:04:38.336926 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkgl8" event={"ID":"4ca30cf8-9906-46b4-90e7-32a8624c26b1","Type":"ContainerDied","Data":"cfff15c696c260f8a2cb9b952bbfe005af7452ce94a3fb2e7b6a49603cfcaa6e"} Oct 08 16:04:38 crc kubenswrapper[4624]: I1008 16:04:38.344697 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 16:04:40 crc kubenswrapper[4624]: I1008 16:04:40.362863 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkgl8" event={"ID":"4ca30cf8-9906-46b4-90e7-32a8624c26b1","Type":"ContainerStarted","Data":"ad0fa0dab182160f96cf2de4279cff0d883ebe7850b326b095bdd07df82611e5"} Oct 08 16:04:44 crc kubenswrapper[4624]: I1008 16:04:44.435835 4624 generic.go:334] "Generic (PLEG): container finished" podID="4ca30cf8-9906-46b4-90e7-32a8624c26b1" containerID="ad0fa0dab182160f96cf2de4279cff0d883ebe7850b326b095bdd07df82611e5" exitCode=0 Oct 08 16:04:44 crc kubenswrapper[4624]: I1008 16:04:44.435934 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkgl8" event={"ID":"4ca30cf8-9906-46b4-90e7-32a8624c26b1","Type":"ContainerDied","Data":"ad0fa0dab182160f96cf2de4279cff0d883ebe7850b326b095bdd07df82611e5"} Oct 08 16:04:45 crc kubenswrapper[4624]: I1008 16:04:45.454539 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkgl8" event={"ID":"4ca30cf8-9906-46b4-90e7-32a8624c26b1","Type":"ContainerStarted","Data":"ce8a2b0588c2861e677f0d1af74cf3c5ac514c9c09311d71e5abe80c6bfd940b"} Oct 08 16:04:45 crc kubenswrapper[4624]: I1008 16:04:45.483128 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xkgl8" 
Oct 08 16:04:45 crc kubenswrapper[4624]: I1008 16:04:45.483128 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xkgl8" podStartSLOduration=3.8685249649999998 podStartE2EDuration="10.483095617s" podCreationTimestamp="2025-10-08 16:04:35 +0000 UTC" firstStartedPulling="2025-10-08 16:04:38.344387137 +0000 UTC m=+6103.495322214" lastFinishedPulling="2025-10-08 16:04:44.958957789 +0000 UTC m=+6110.109892866" observedRunningTime="2025-10-08 16:04:45.475903114 +0000 UTC m=+6110.626838191" watchObservedRunningTime="2025-10-08 16:04:45.483095617 +0000 UTC m=+6110.634030694"
Oct 08 16:04:46 crc kubenswrapper[4624]: I1008 16:04:46.270718 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xkgl8"
Oct 08 16:04:46 crc kubenswrapper[4624]: I1008 16:04:46.271277 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xkgl8"
Oct 08 16:04:47 crc kubenswrapper[4624]: I1008 16:04:47.319709 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xkgl8" podUID="4ca30cf8-9906-46b4-90e7-32a8624c26b1" containerName="registry-server" probeResult="failure" output=<
Oct 08 16:04:47 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 16:04:47 crc kubenswrapper[4624]: >
Oct 08 16:04:57 crc kubenswrapper[4624]: I1008 16:04:57.320882 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xkgl8" podUID="4ca30cf8-9906-46b4-90e7-32a8624c26b1" containerName="registry-server" probeResult="failure" output=<
Oct 08 16:04:57 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 16:04:57 crc kubenswrapper[4624]: >
Oct 08 16:05:07 crc kubenswrapper[4624]: I1008 16:05:07.315930 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xkgl8" podUID="4ca30cf8-9906-46b4-90e7-32a8624c26b1" containerName="registry-server" probeResult="failure" output=<
Oct 08 16:05:07 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 16:05:07 crc kubenswrapper[4624]: >
Oct 08 16:05:12 crc kubenswrapper[4624]: I1008 16:05:12.534860 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nxvm9"]
Oct 08 16:05:12 crc kubenswrapper[4624]: I1008 16:05:12.538785 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nxvm9"
Oct 08 16:05:12 crc kubenswrapper[4624]: I1008 16:05:12.554301 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nxvm9"]
Oct 08 16:05:12 crc kubenswrapper[4624]: I1008 16:05:12.693612 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba-catalog-content\") pod \"certified-operators-nxvm9\" (UID: \"7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba\") " pod="openshift-marketplace/certified-operators-nxvm9"
Oct 08 16:05:12 crc kubenswrapper[4624]: I1008 16:05:12.694132 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65jhv\" (UniqueName: \"kubernetes.io/projected/7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba-kube-api-access-65jhv\") pod \"certified-operators-nxvm9\" (UID: \"7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba\") " pod="openshift-marketplace/certified-operators-nxvm9"
Oct 08 16:05:12 crc kubenswrapper[4624]: I1008 16:05:12.694213 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba-utilities\") pod \"certified-operators-nxvm9\" (UID: \"7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba\") " pod="openshift-marketplace/certified-operators-nxvm9"
Oct 08 16:05:12 crc kubenswrapper[4624]: I1008 16:05:12.796592 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65jhv\" (UniqueName: \"kubernetes.io/projected/7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba-kube-api-access-65jhv\") pod \"certified-operators-nxvm9\" (UID: \"7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba\") " pod="openshift-marketplace/certified-operators-nxvm9"
Oct 08 16:05:12 crc kubenswrapper[4624]: I1008 16:05:12.796790 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba-utilities\") pod \"certified-operators-nxvm9\" (UID: \"7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba\") " pod="openshift-marketplace/certified-operators-nxvm9"
Oct 08 16:05:12 crc kubenswrapper[4624]: I1008 16:05:12.796833 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba-catalog-content\") pod \"certified-operators-nxvm9\" (UID: \"7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba\") " pod="openshift-marketplace/certified-operators-nxvm9"
Oct 08 16:05:12 crc kubenswrapper[4624]: I1008 16:05:12.797509 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba-utilities\") pod \"certified-operators-nxvm9\" (UID: \"7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba\") " pod="openshift-marketplace/certified-operators-nxvm9"
Oct 08 16:05:12 crc kubenswrapper[4624]: I1008 16:05:12.797541 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba-catalog-content\") pod \"certified-operators-nxvm9\" (UID: \"7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba\") " pod="openshift-marketplace/certified-operators-nxvm9"
Oct 08 16:05:12 crc kubenswrapper[4624]: I1008 16:05:12.829311 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65jhv\" (UniqueName: \"kubernetes.io/projected/7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba-kube-api-access-65jhv\") pod \"certified-operators-nxvm9\" (UID: \"7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba\") " pod="openshift-marketplace/certified-operators-nxvm9"
Oct 08 16:05:12 crc kubenswrapper[4624]: I1008 16:05:12.861617 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nxvm9"
Oct 08 16:05:13 crc kubenswrapper[4624]: I1008 16:05:13.614421 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nxvm9"]
Oct 08 16:05:13 crc kubenswrapper[4624]: I1008 16:05:13.763941 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxvm9" event={"ID":"7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba","Type":"ContainerStarted","Data":"8742a63866faf33b7857e10c0718cdcaa2758b5ec71d0ae8d260430cf1a8c40d"}
Oct 08 16:05:14 crc kubenswrapper[4624]: I1008 16:05:14.789300 4624 generic.go:334] "Generic (PLEG): container finished" podID="7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba" containerID="5c7af1ca32dce2f4005e2f3cd4f90ff7f0c18a2022cb96edbe6dc5195ebbd3b3" exitCode=0
Oct 08 16:05:14 crc kubenswrapper[4624]: I1008 16:05:14.789372 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxvm9" event={"ID":"7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba","Type":"ContainerDied","Data":"5c7af1ca32dce2f4005e2f3cd4f90ff7f0c18a2022cb96edbe6dc5195ebbd3b3"}
Oct 08 16:05:15 crc kubenswrapper[4624]: I1008 16:05:15.801266 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxvm9" event={"ID":"7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba","Type":"ContainerStarted","Data":"4df986242c177ec26773df46c8c15ac2c059b9eac4e7db26c0cbf8f95d6e3bc1"}
Oct 08 16:05:16 crc kubenswrapper[4624]: I1008 16:05:16.328765 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xkgl8"
Oct 08 16:05:16 crc kubenswrapper[4624]: I1008 16:05:16.391415 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xkgl8"
Oct 08 16:05:17 crc kubenswrapper[4624]: I1008 16:05:17.833792 4624 generic.go:334] "Generic (PLEG): container finished" podID="7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba" containerID="4df986242c177ec26773df46c8c15ac2c059b9eac4e7db26c0cbf8f95d6e3bc1" exitCode=0
Oct 08 16:05:17 crc kubenswrapper[4624]: I1008 16:05:17.833902 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxvm9" event={"ID":"7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba","Type":"ContainerDied","Data":"4df986242c177ec26773df46c8c15ac2c059b9eac4e7db26c0cbf8f95d6e3bc1"}
Oct 08 16:05:17 crc kubenswrapper[4624]: I1008 16:05:17.907406 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xkgl8"]
Oct 08 16:05:17 crc kubenswrapper[4624]: I1008 16:05:17.907685 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xkgl8" podUID="4ca30cf8-9906-46b4-90e7-32a8624c26b1" containerName="registry-server" containerID="cri-o://ce8a2b0588c2861e677f0d1af74cf3c5ac514c9c09311d71e5abe80c6bfd940b" gracePeriod=2
Need to start a new one" pod="openshift-marketplace/redhat-operators-xkgl8" Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.656995 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ca30cf8-9906-46b4-90e7-32a8624c26b1-utilities" (OuterVolumeSpecName: "utilities") pod "4ca30cf8-9906-46b4-90e7-32a8624c26b1" (UID: "4ca30cf8-9906-46b4-90e7-32a8624c26b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.657293 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ca30cf8-9906-46b4-90e7-32a8624c26b1-utilities\") pod \"4ca30cf8-9906-46b4-90e7-32a8624c26b1\" (UID: \"4ca30cf8-9906-46b4-90e7-32a8624c26b1\") " Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.657849 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5sx6\" (UniqueName: \"kubernetes.io/projected/4ca30cf8-9906-46b4-90e7-32a8624c26b1-kube-api-access-j5sx6\") pod \"4ca30cf8-9906-46b4-90e7-32a8624c26b1\" (UID: \"4ca30cf8-9906-46b4-90e7-32a8624c26b1\") " Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.657947 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ca30cf8-9906-46b4-90e7-32a8624c26b1-catalog-content\") pod \"4ca30cf8-9906-46b4-90e7-32a8624c26b1\" (UID: \"4ca30cf8-9906-46b4-90e7-32a8624c26b1\") " Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.668125 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ca30cf8-9906-46b4-90e7-32a8624c26b1-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.669184 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ca30cf8-9906-46b4-90e7-32a8624c26b1-kube-api-access-j5sx6" (OuterVolumeSpecName: "kube-api-access-j5sx6") pod "4ca30cf8-9906-46b4-90e7-32a8624c26b1" (UID: "4ca30cf8-9906-46b4-90e7-32a8624c26b1"). InnerVolumeSpecName "kube-api-access-j5sx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.756404 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ca30cf8-9906-46b4-90e7-32a8624c26b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ca30cf8-9906-46b4-90e7-32a8624c26b1" (UID: "4ca30cf8-9906-46b4-90e7-32a8624c26b1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.771553 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5sx6\" (UniqueName: \"kubernetes.io/projected/4ca30cf8-9906-46b4-90e7-32a8624c26b1-kube-api-access-j5sx6\") on node \"crc\" DevicePath \"\"" Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.771629 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ca30cf8-9906-46b4-90e7-32a8624c26b1-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.861162 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxvm9" event={"ID":"7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba","Type":"ContainerStarted","Data":"08f2548f8540afb5fb889236ed9e38d4b1c3cde082ca18a09ced39f4b221c0af"} Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.870265 4624 generic.go:334] "Generic (PLEG): container finished" podID="4ca30cf8-9906-46b4-90e7-32a8624c26b1" containerID="ce8a2b0588c2861e677f0d1af74cf3c5ac514c9c09311d71e5abe80c6bfd940b" exitCode=0 Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.870581 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkgl8" event={"ID":"4ca30cf8-9906-46b4-90e7-32a8624c26b1","Type":"ContainerDied","Data":"ce8a2b0588c2861e677f0d1af74cf3c5ac514c9c09311d71e5abe80c6bfd940b"} Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.870688 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkgl8" event={"ID":"4ca30cf8-9906-46b4-90e7-32a8624c26b1","Type":"ContainerDied","Data":"6054ffa4176b98b18988481def35115ac7e58a1f2032b7af673bb78e48064036"} Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.870719 4624 scope.go:117] "RemoveContainer" containerID="ce8a2b0588c2861e677f0d1af74cf3c5ac514c9c09311d71e5abe80c6bfd940b" Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.870965 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xkgl8" Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.891966 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nxvm9" podStartSLOduration=3.386112894 podStartE2EDuration="6.891929256s" podCreationTimestamp="2025-10-08 16:05:12 +0000 UTC" firstStartedPulling="2025-10-08 16:05:14.791864542 +0000 UTC m=+6139.942799619" lastFinishedPulling="2025-10-08 16:05:18.297680894 +0000 UTC m=+6143.448615981" observedRunningTime="2025-10-08 16:05:18.882585159 +0000 UTC m=+6144.033520236" watchObservedRunningTime="2025-10-08 16:05:18.891929256 +0000 UTC m=+6144.042864333" Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.907757 4624 scope.go:117] "RemoveContainer" containerID="ad0fa0dab182160f96cf2de4279cff0d883ebe7850b326b095bdd07df82611e5" Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.919861 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xkgl8"] Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.928488 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xkgl8"] Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.940105 4624 scope.go:117] "RemoveContainer" containerID="cfff15c696c260f8a2cb9b952bbfe005af7452ce94a3fb2e7b6a49603cfcaa6e" Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.982481 4624 scope.go:117] "RemoveContainer" containerID="ce8a2b0588c2861e677f0d1af74cf3c5ac514c9c09311d71e5abe80c6bfd940b" Oct 08 16:05:18 crc kubenswrapper[4624]: E1008 16:05:18.983513 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce8a2b0588c2861e677f0d1af74cf3c5ac514c9c09311d71e5abe80c6bfd940b\": container with ID starting with ce8a2b0588c2861e677f0d1af74cf3c5ac514c9c09311d71e5abe80c6bfd940b not found: ID does not exist" containerID="ce8a2b0588c2861e677f0d1af74cf3c5ac514c9c09311d71e5abe80c6bfd940b" Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.983556 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce8a2b0588c2861e677f0d1af74cf3c5ac514c9c09311d71e5abe80c6bfd940b"} err="failed to get container status \"ce8a2b0588c2861e677f0d1af74cf3c5ac514c9c09311d71e5abe80c6bfd940b\": rpc error: code = NotFound desc = could not find container \"ce8a2b0588c2861e677f0d1af74cf3c5ac514c9c09311d71e5abe80c6bfd940b\": container with ID starting with ce8a2b0588c2861e677f0d1af74cf3c5ac514c9c09311d71e5abe80c6bfd940b not found: ID does not exist" Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.983585 4624 scope.go:117] "RemoveContainer" containerID="ad0fa0dab182160f96cf2de4279cff0d883ebe7850b326b095bdd07df82611e5" Oct 08 16:05:18 crc kubenswrapper[4624]: E1008 16:05:18.984019 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad0fa0dab182160f96cf2de4279cff0d883ebe7850b326b095bdd07df82611e5\": container with ID starting with ad0fa0dab182160f96cf2de4279cff0d883ebe7850b326b095bdd07df82611e5 not found: ID does not exist" containerID="ad0fa0dab182160f96cf2de4279cff0d883ebe7850b326b095bdd07df82611e5" Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.984061 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad0fa0dab182160f96cf2de4279cff0d883ebe7850b326b095bdd07df82611e5"} err="failed to get container 
status \"ad0fa0dab182160f96cf2de4279cff0d883ebe7850b326b095bdd07df82611e5\": rpc error: code = NotFound desc = could not find container \"ad0fa0dab182160f96cf2de4279cff0d883ebe7850b326b095bdd07df82611e5\": container with ID starting with ad0fa0dab182160f96cf2de4279cff0d883ebe7850b326b095bdd07df82611e5 not found: ID does not exist" Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.984095 4624 scope.go:117] "RemoveContainer" containerID="cfff15c696c260f8a2cb9b952bbfe005af7452ce94a3fb2e7b6a49603cfcaa6e" Oct 08 16:05:18 crc kubenswrapper[4624]: E1008 16:05:18.984381 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfff15c696c260f8a2cb9b952bbfe005af7452ce94a3fb2e7b6a49603cfcaa6e\": container with ID starting with cfff15c696c260f8a2cb9b952bbfe005af7452ce94a3fb2e7b6a49603cfcaa6e not found: ID does not exist" containerID="cfff15c696c260f8a2cb9b952bbfe005af7452ce94a3fb2e7b6a49603cfcaa6e" Oct 08 16:05:18 crc kubenswrapper[4624]: I1008 16:05:18.984406 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfff15c696c260f8a2cb9b952bbfe005af7452ce94a3fb2e7b6a49603cfcaa6e"} err="failed to get container status \"cfff15c696c260f8a2cb9b952bbfe005af7452ce94a3fb2e7b6a49603cfcaa6e\": rpc error: code = NotFound desc = could not find container \"cfff15c696c260f8a2cb9b952bbfe005af7452ce94a3fb2e7b6a49603cfcaa6e\": container with ID starting with cfff15c696c260f8a2cb9b952bbfe005af7452ce94a3fb2e7b6a49603cfcaa6e not found: ID does not exist" Oct 08 16:05:19 crc kubenswrapper[4624]: I1008 16:05:19.481518 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ca30cf8-9906-46b4-90e7-32a8624c26b1" path="/var/lib/kubelet/pods/4ca30cf8-9906-46b4-90e7-32a8624c26b1/volumes" Oct 08 16:05:22 crc kubenswrapper[4624]: I1008 16:05:22.862068 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nxvm9" Oct 08 16:05:22 crc kubenswrapper[4624]: I1008 16:05:22.862752 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nxvm9" Oct 08 16:05:22 crc kubenswrapper[4624]: I1008 16:05:22.952079 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nxvm9" Oct 08 16:05:32 crc kubenswrapper[4624]: I1008 16:05:32.916540 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nxvm9" Oct 08 16:05:32 crc kubenswrapper[4624]: I1008 16:05:32.985001 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nxvm9"] Oct 08 16:05:33 crc kubenswrapper[4624]: I1008 16:05:33.062272 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nxvm9" podUID="7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba" containerName="registry-server" containerID="cri-o://08f2548f8540afb5fb889236ed9e38d4b1c3cde082ca18a09ced39f4b221c0af" gracePeriod=2 Oct 08 16:05:33 crc kubenswrapper[4624]: I1008 16:05:33.732390 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nxvm9" Oct 08 16:05:33 crc kubenswrapper[4624]: I1008 16:05:33.848705 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba-catalog-content\") pod \"7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba\" (UID: \"7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba\") " Oct 08 16:05:33 crc kubenswrapper[4624]: I1008 16:05:33.848780 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65jhv\" (UniqueName: \"kubernetes.io/projected/7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba-kube-api-access-65jhv\") pod \"7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba\" (UID: \"7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba\") " Oct 08 16:05:33 crc kubenswrapper[4624]: I1008 16:05:33.849263 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba-utilities\") pod \"7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba\" (UID: \"7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba\") " Oct 08 16:05:33 crc kubenswrapper[4624]: I1008 16:05:33.851442 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba-utilities" (OuterVolumeSpecName: "utilities") pod "7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba" (UID: "7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:05:33 crc kubenswrapper[4624]: I1008 16:05:33.865250 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba-kube-api-access-65jhv" (OuterVolumeSpecName: "kube-api-access-65jhv") pod "7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba" (UID: "7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba"). InnerVolumeSpecName "kube-api-access-65jhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 16:05:33 crc kubenswrapper[4624]: I1008 16:05:33.917570 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba" (UID: "7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:05:33 crc kubenswrapper[4624]: I1008 16:05:33.951980 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 16:05:33 crc kubenswrapper[4624]: I1008 16:05:33.952028 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 16:05:33 crc kubenswrapper[4624]: I1008 16:05:33.952045 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65jhv\" (UniqueName: \"kubernetes.io/projected/7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba-kube-api-access-65jhv\") on node \"crc\" DevicePath \"\"" Oct 08 16:05:34 crc kubenswrapper[4624]: I1008 16:05:34.075748 4624 generic.go:334] "Generic (PLEG): container finished" podID="7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba" containerID="08f2548f8540afb5fb889236ed9e38d4b1c3cde082ca18a09ced39f4b221c0af" exitCode=0 Oct 08 16:05:34 crc kubenswrapper[4624]: I1008 16:05:34.075814 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxvm9" event={"ID":"7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba","Type":"ContainerDied","Data":"08f2548f8540afb5fb889236ed9e38d4b1c3cde082ca18a09ced39f4b221c0af"} Oct 08 16:05:34 crc kubenswrapper[4624]: I1008 16:05:34.075844 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nxvm9" Oct 08 16:05:34 crc kubenswrapper[4624]: I1008 16:05:34.075860 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxvm9" event={"ID":"7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba","Type":"ContainerDied","Data":"8742a63866faf33b7857e10c0718cdcaa2758b5ec71d0ae8d260430cf1a8c40d"} Oct 08 16:05:34 crc kubenswrapper[4624]: I1008 16:05:34.075888 4624 scope.go:117] "RemoveContainer" containerID="08f2548f8540afb5fb889236ed9e38d4b1c3cde082ca18a09ced39f4b221c0af" Oct 08 16:05:34 crc kubenswrapper[4624]: I1008 16:05:34.114334 4624 scope.go:117] "RemoveContainer" containerID="4df986242c177ec26773df46c8c15ac2c059b9eac4e7db26c0cbf8f95d6e3bc1" Oct 08 16:05:34 crc kubenswrapper[4624]: I1008 16:05:34.144409 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nxvm9"] Oct 08 16:05:34 crc kubenswrapper[4624]: I1008 16:05:34.168756 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nxvm9"] Oct 08 16:05:34 crc kubenswrapper[4624]: I1008 16:05:34.197995 4624 scope.go:117] "RemoveContainer" containerID="5c7af1ca32dce2f4005e2f3cd4f90ff7f0c18a2022cb96edbe6dc5195ebbd3b3" Oct 08 16:05:34 crc kubenswrapper[4624]: I1008 16:05:34.240861 4624 scope.go:117] "RemoveContainer" containerID="08f2548f8540afb5fb889236ed9e38d4b1c3cde082ca18a09ced39f4b221c0af" Oct 08 16:05:34 crc kubenswrapper[4624]: E1008 16:05:34.244341 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08f2548f8540afb5fb889236ed9e38d4b1c3cde082ca18a09ced39f4b221c0af\": container with ID starting with 08f2548f8540afb5fb889236ed9e38d4b1c3cde082ca18a09ced39f4b221c0af not found: ID does not exist" containerID="08f2548f8540afb5fb889236ed9e38d4b1c3cde082ca18a09ced39f4b221c0af" Oct 08 16:05:34 crc kubenswrapper[4624]: I1008 16:05:34.244413 
4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08f2548f8540afb5fb889236ed9e38d4b1c3cde082ca18a09ced39f4b221c0af"} err="failed to get container status \"08f2548f8540afb5fb889236ed9e38d4b1c3cde082ca18a09ced39f4b221c0af\": rpc error: code = NotFound desc = could not find container \"08f2548f8540afb5fb889236ed9e38d4b1c3cde082ca18a09ced39f4b221c0af\": container with ID starting with 08f2548f8540afb5fb889236ed9e38d4b1c3cde082ca18a09ced39f4b221c0af not found: ID does not exist" Oct 08 16:05:34 crc kubenswrapper[4624]: I1008 16:05:34.244674 4624 scope.go:117] "RemoveContainer" containerID="4df986242c177ec26773df46c8c15ac2c059b9eac4e7db26c0cbf8f95d6e3bc1" Oct 08 16:05:34 crc kubenswrapper[4624]: E1008 16:05:34.245431 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4df986242c177ec26773df46c8c15ac2c059b9eac4e7db26c0cbf8f95d6e3bc1\": container with ID starting with 4df986242c177ec26773df46c8c15ac2c059b9eac4e7db26c0cbf8f95d6e3bc1 not found: ID does not exist" containerID="4df986242c177ec26773df46c8c15ac2c059b9eac4e7db26c0cbf8f95d6e3bc1" Oct 08 16:05:34 crc kubenswrapper[4624]: I1008 16:05:34.245470 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4df986242c177ec26773df46c8c15ac2c059b9eac4e7db26c0cbf8f95d6e3bc1"} err="failed to get container status \"4df986242c177ec26773df46c8c15ac2c059b9eac4e7db26c0cbf8f95d6e3bc1\": rpc error: code = NotFound desc = could not find container \"4df986242c177ec26773df46c8c15ac2c059b9eac4e7db26c0cbf8f95d6e3bc1\": container with ID starting with 4df986242c177ec26773df46c8c15ac2c059b9eac4e7db26c0cbf8f95d6e3bc1 not found: ID does not exist" Oct 08 16:05:34 crc kubenswrapper[4624]: I1008 16:05:34.245486 4624 scope.go:117] "RemoveContainer" containerID="5c7af1ca32dce2f4005e2f3cd4f90ff7f0c18a2022cb96edbe6dc5195ebbd3b3" Oct 08 16:05:34 crc kubenswrapper[4624]: E1008 16:05:34.246235 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c7af1ca32dce2f4005e2f3cd4f90ff7f0c18a2022cb96edbe6dc5195ebbd3b3\": container with ID starting with 5c7af1ca32dce2f4005e2f3cd4f90ff7f0c18a2022cb96edbe6dc5195ebbd3b3 not found: ID does not exist" containerID="5c7af1ca32dce2f4005e2f3cd4f90ff7f0c18a2022cb96edbe6dc5195ebbd3b3" Oct 08 16:05:34 crc kubenswrapper[4624]: I1008 16:05:34.246295 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c7af1ca32dce2f4005e2f3cd4f90ff7f0c18a2022cb96edbe6dc5195ebbd3b3"} err="failed to get container status \"5c7af1ca32dce2f4005e2f3cd4f90ff7f0c18a2022cb96edbe6dc5195ebbd3b3\": rpc error: code = NotFound desc = could not find container \"5c7af1ca32dce2f4005e2f3cd4f90ff7f0c18a2022cb96edbe6dc5195ebbd3b3\": container with ID starting with 5c7af1ca32dce2f4005e2f3cd4f90ff7f0c18a2022cb96edbe6dc5195ebbd3b3 not found: ID does not exist" Oct 08 16:05:35 crc kubenswrapper[4624]: I1008 16:05:35.482117 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba" path="/var/lib/kubelet/pods/7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba/volumes" Oct 08 16:06:00 crc kubenswrapper[4624]: I1008 16:06:00.076082 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 16:06:00 crc kubenswrapper[4624]: I1008 16:06:00.076840 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 16:06:30 crc kubenswrapper[4624]: I1008 16:06:30.076732 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 16:06:30 crc kubenswrapper[4624]: I1008 16:06:30.077686 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 16:07:00 crc kubenswrapper[4624]: I1008 16:07:00.076156 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 16:07:00 crc kubenswrapper[4624]: I1008 16:07:00.077092 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 16:07:00 crc kubenswrapper[4624]: I1008 16:07:00.077148 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 16:07:00 crc kubenswrapper[4624]: I1008 16:07:00.078022 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4dd7282757c330a044cf36d0a5cacc7696b29b92fdb83bd080e2547865f11d79"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 16:07:00 crc kubenswrapper[4624]: I1008 16:07:00.078082 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://4dd7282757c330a044cf36d0a5cacc7696b29b92fdb83bd080e2547865f11d79" gracePeriod=600 Oct 08 16:07:01 crc kubenswrapper[4624]: I1008 16:07:01.132340 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="4dd7282757c330a044cf36d0a5cacc7696b29b92fdb83bd080e2547865f11d79" exitCode=0 Oct 08 16:07:01 crc kubenswrapper[4624]: I1008 16:07:01.132427 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" 
event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"4dd7282757c330a044cf36d0a5cacc7696b29b92fdb83bd080e2547865f11d79"} Oct 08 16:07:01 crc kubenswrapper[4624]: I1008 16:07:01.133131 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589"} Oct 08 16:07:01 crc kubenswrapper[4624]: I1008 16:07:01.133166 4624 scope.go:117] "RemoveContainer" containerID="2432e4ff2361a7e55345c153683a5bdecd876893235091690ce8f89bfd5147d8" Oct 08 16:09:00 crc kubenswrapper[4624]: I1008 16:09:00.076755 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 16:09:00 crc kubenswrapper[4624]: I1008 16:09:00.077334 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 16:09:03 crc kubenswrapper[4624]: I1008 16:09:03.960499 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nqtn5"] Oct 08 16:09:03 crc kubenswrapper[4624]: E1008 16:09:03.961794 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ca30cf8-9906-46b4-90e7-32a8624c26b1" containerName="extract-content" Oct 08 16:09:03 crc kubenswrapper[4624]: I1008 16:09:03.961818 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ca30cf8-9906-46b4-90e7-32a8624c26b1" containerName="extract-content" Oct 08 16:09:03 crc kubenswrapper[4624]: E1008 16:09:03.961885 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ca30cf8-9906-46b4-90e7-32a8624c26b1" containerName="registry-server" Oct 08 16:09:03 crc kubenswrapper[4624]: I1008 16:09:03.961897 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ca30cf8-9906-46b4-90e7-32a8624c26b1" containerName="registry-server" Oct 08 16:09:03 crc kubenswrapper[4624]: E1008 16:09:03.961911 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba" containerName="extract-content" Oct 08 16:09:03 crc kubenswrapper[4624]: I1008 16:09:03.961921 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba" containerName="extract-content" Oct 08 16:09:03 crc kubenswrapper[4624]: E1008 16:09:03.961970 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba" containerName="extract-utilities" Oct 08 16:09:03 crc kubenswrapper[4624]: I1008 16:09:03.961981 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba" containerName="extract-utilities" Oct 08 16:09:03 crc kubenswrapper[4624]: E1008 16:09:03.962003 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ca30cf8-9906-46b4-90e7-32a8624c26b1" containerName="extract-utilities" Oct 08 16:09:03 crc kubenswrapper[4624]: I1008 16:09:03.962013 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ca30cf8-9906-46b4-90e7-32a8624c26b1" 
containerName="extract-utilities" Oct 08 16:09:03 crc kubenswrapper[4624]: E1008 16:09:03.962046 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba" containerName="registry-server" Oct 08 16:09:03 crc kubenswrapper[4624]: I1008 16:09:03.962057 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba" containerName="registry-server" Oct 08 16:09:03 crc kubenswrapper[4624]: I1008 16:09:03.962378 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ca30cf8-9906-46b4-90e7-32a8624c26b1" containerName="registry-server" Oct 08 16:09:03 crc kubenswrapper[4624]: I1008 16:09:03.962418 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d0aa4b7-8968-40f6-84e9-0d9caf6c22ba" containerName="registry-server" Oct 08 16:09:03 crc kubenswrapper[4624]: I1008 16:09:03.964575 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nqtn5" Oct 08 16:09:03 crc kubenswrapper[4624]: I1008 16:09:03.971853 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nqtn5"] Oct 08 16:09:04 crc kubenswrapper[4624]: I1008 16:09:04.051444 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5e55b90-6418-4fc6-b0f2-8ba568d3fad2-catalog-content\") pod \"redhat-marketplace-nqtn5\" (UID: \"e5e55b90-6418-4fc6-b0f2-8ba568d3fad2\") " pod="openshift-marketplace/redhat-marketplace-nqtn5" Oct 08 16:09:04 crc kubenswrapper[4624]: I1008 16:09:04.051541 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf56k\" (UniqueName: \"kubernetes.io/projected/e5e55b90-6418-4fc6-b0f2-8ba568d3fad2-kube-api-access-kf56k\") pod \"redhat-marketplace-nqtn5\" (UID: \"e5e55b90-6418-4fc6-b0f2-8ba568d3fad2\") " pod="openshift-marketplace/redhat-marketplace-nqtn5" Oct 08 16:09:04 crc kubenswrapper[4624]: I1008 16:09:04.051669 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5e55b90-6418-4fc6-b0f2-8ba568d3fad2-utilities\") pod \"redhat-marketplace-nqtn5\" (UID: \"e5e55b90-6418-4fc6-b0f2-8ba568d3fad2\") " pod="openshift-marketplace/redhat-marketplace-nqtn5" Oct 08 16:09:04 crc kubenswrapper[4624]: I1008 16:09:04.155095 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5e55b90-6418-4fc6-b0f2-8ba568d3fad2-catalog-content\") pod \"redhat-marketplace-nqtn5\" (UID: \"e5e55b90-6418-4fc6-b0f2-8ba568d3fad2\") " pod="openshift-marketplace/redhat-marketplace-nqtn5" Oct 08 16:09:04 crc kubenswrapper[4624]: I1008 16:09:04.155219 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf56k\" (UniqueName: \"kubernetes.io/projected/e5e55b90-6418-4fc6-b0f2-8ba568d3fad2-kube-api-access-kf56k\") pod \"redhat-marketplace-nqtn5\" (UID: \"e5e55b90-6418-4fc6-b0f2-8ba568d3fad2\") " pod="openshift-marketplace/redhat-marketplace-nqtn5" Oct 08 16:09:04 crc kubenswrapper[4624]: I1008 16:09:04.155324 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5e55b90-6418-4fc6-b0f2-8ba568d3fad2-utilities\") pod \"redhat-marketplace-nqtn5\" (UID: 
\"e5e55b90-6418-4fc6-b0f2-8ba568d3fad2\") " pod="openshift-marketplace/redhat-marketplace-nqtn5" Oct 08 16:09:04 crc kubenswrapper[4624]: I1008 16:09:04.155776 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5e55b90-6418-4fc6-b0f2-8ba568d3fad2-catalog-content\") pod \"redhat-marketplace-nqtn5\" (UID: \"e5e55b90-6418-4fc6-b0f2-8ba568d3fad2\") " pod="openshift-marketplace/redhat-marketplace-nqtn5" Oct 08 16:09:04 crc kubenswrapper[4624]: I1008 16:09:04.156060 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5e55b90-6418-4fc6-b0f2-8ba568d3fad2-utilities\") pod \"redhat-marketplace-nqtn5\" (UID: \"e5e55b90-6418-4fc6-b0f2-8ba568d3fad2\") " pod="openshift-marketplace/redhat-marketplace-nqtn5" Oct 08 16:09:04 crc kubenswrapper[4624]: I1008 16:09:04.182853 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf56k\" (UniqueName: \"kubernetes.io/projected/e5e55b90-6418-4fc6-b0f2-8ba568d3fad2-kube-api-access-kf56k\") pod \"redhat-marketplace-nqtn5\" (UID: \"e5e55b90-6418-4fc6-b0f2-8ba568d3fad2\") " pod="openshift-marketplace/redhat-marketplace-nqtn5" Oct 08 16:09:04 crc kubenswrapper[4624]: I1008 16:09:04.324068 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nqtn5" Oct 08 16:09:04 crc kubenswrapper[4624]: I1008 16:09:04.849338 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nqtn5"] Oct 08 16:09:05 crc kubenswrapper[4624]: I1008 16:09:05.561016 4624 generic.go:334] "Generic (PLEG): container finished" podID="e5e55b90-6418-4fc6-b0f2-8ba568d3fad2" containerID="31bb1d6fdd4c58f4c30e2d34985fe3a63911af3bc5085b99c60f892de087c0cd" exitCode=0 Oct 08 16:09:05 crc kubenswrapper[4624]: I1008 16:09:05.561409 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqtn5" event={"ID":"e5e55b90-6418-4fc6-b0f2-8ba568d3fad2","Type":"ContainerDied","Data":"31bb1d6fdd4c58f4c30e2d34985fe3a63911af3bc5085b99c60f892de087c0cd"} Oct 08 16:09:05 crc kubenswrapper[4624]: I1008 16:09:05.561448 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqtn5" event={"ID":"e5e55b90-6418-4fc6-b0f2-8ba568d3fad2","Type":"ContainerStarted","Data":"608304f5fcd6b5fb746a9c843c65ce6f991c6af05f9817812259aba79c61ce22"} Oct 08 16:09:06 crc kubenswrapper[4624]: I1008 16:09:06.604556 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqtn5" event={"ID":"e5e55b90-6418-4fc6-b0f2-8ba568d3fad2","Type":"ContainerStarted","Data":"efd08dfdb7db5b7d727a8e7c54313b2f30c888756300ce19c96bd96c4dac5abb"} Oct 08 16:09:07 crc kubenswrapper[4624]: I1008 16:09:07.618280 4624 generic.go:334] "Generic (PLEG): container finished" podID="e5e55b90-6418-4fc6-b0f2-8ba568d3fad2" containerID="efd08dfdb7db5b7d727a8e7c54313b2f30c888756300ce19c96bd96c4dac5abb" exitCode=0 Oct 08 16:09:07 crc kubenswrapper[4624]: I1008 16:09:07.618361 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqtn5" event={"ID":"e5e55b90-6418-4fc6-b0f2-8ba568d3fad2","Type":"ContainerDied","Data":"efd08dfdb7db5b7d727a8e7c54313b2f30c888756300ce19c96bd96c4dac5abb"} Oct 08 16:09:09 crc kubenswrapper[4624]: I1008 16:09:09.643668 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-nqtn5" event={"ID":"e5e55b90-6418-4fc6-b0f2-8ba568d3fad2","Type":"ContainerStarted","Data":"c158a4ac97a6fa532f57e53c09eae32fbaa10e6088110b93ee2a5e3db0a6bfe7"} Oct 08 16:09:14 crc kubenswrapper[4624]: I1008 16:09:14.324195 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nqtn5" Oct 08 16:09:14 crc kubenswrapper[4624]: I1008 16:09:14.324809 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nqtn5" Oct 08 16:09:14 crc kubenswrapper[4624]: I1008 16:09:14.376939 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nqtn5" Oct 08 16:09:14 crc kubenswrapper[4624]: I1008 16:09:14.415202 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nqtn5" podStartSLOduration=8.513971326 podStartE2EDuration="11.415162412s" podCreationTimestamp="2025-10-08 16:09:03 +0000 UTC" firstStartedPulling="2025-10-08 16:09:05.563707829 +0000 UTC m=+6370.714642906" lastFinishedPulling="2025-10-08 16:09:08.464898915 +0000 UTC m=+6373.615833992" observedRunningTime="2025-10-08 16:09:09.6767779 +0000 UTC m=+6374.827712977" watchObservedRunningTime="2025-10-08 16:09:14.415162412 +0000 UTC m=+6379.566097479" Oct 08 16:09:14 crc kubenswrapper[4624]: I1008 16:09:14.772580 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nqtn5" Oct 08 16:09:14 crc kubenswrapper[4624]: I1008 16:09:14.834933 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nqtn5"] Oct 08 16:09:16 crc kubenswrapper[4624]: I1008 16:09:16.757315 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nqtn5" podUID="e5e55b90-6418-4fc6-b0f2-8ba568d3fad2" containerName="registry-server" containerID="cri-o://c158a4ac97a6fa532f57e53c09eae32fbaa10e6088110b93ee2a5e3db0a6bfe7" gracePeriod=2 Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.342933 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nqtn5" Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.464992 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5e55b90-6418-4fc6-b0f2-8ba568d3fad2-utilities\") pod \"e5e55b90-6418-4fc6-b0f2-8ba568d3fad2\" (UID: \"e5e55b90-6418-4fc6-b0f2-8ba568d3fad2\") " Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.466465 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5e55b90-6418-4fc6-b0f2-8ba568d3fad2-utilities" (OuterVolumeSpecName: "utilities") pod "e5e55b90-6418-4fc6-b0f2-8ba568d3fad2" (UID: "e5e55b90-6418-4fc6-b0f2-8ba568d3fad2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.466756 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5e55b90-6418-4fc6-b0f2-8ba568d3fad2-catalog-content\") pod \"e5e55b90-6418-4fc6-b0f2-8ba568d3fad2\" (UID: \"e5e55b90-6418-4fc6-b0f2-8ba568d3fad2\") " Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.472952 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf56k\" (UniqueName: \"kubernetes.io/projected/e5e55b90-6418-4fc6-b0f2-8ba568d3fad2-kube-api-access-kf56k\") pod \"e5e55b90-6418-4fc6-b0f2-8ba568d3fad2\" (UID: \"e5e55b90-6418-4fc6-b0f2-8ba568d3fad2\") " Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.473885 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5e55b90-6418-4fc6-b0f2-8ba568d3fad2-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.483793 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5e55b90-6418-4fc6-b0f2-8ba568d3fad2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5e55b90-6418-4fc6-b0f2-8ba568d3fad2" (UID: "e5e55b90-6418-4fc6-b0f2-8ba568d3fad2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.484956 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5e55b90-6418-4fc6-b0f2-8ba568d3fad2-kube-api-access-kf56k" (OuterVolumeSpecName: "kube-api-access-kf56k") pod "e5e55b90-6418-4fc6-b0f2-8ba568d3fad2" (UID: "e5e55b90-6418-4fc6-b0f2-8ba568d3fad2"). InnerVolumeSpecName "kube-api-access-kf56k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.576061 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5e55b90-6418-4fc6-b0f2-8ba568d3fad2-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.576108 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf56k\" (UniqueName: \"kubernetes.io/projected/e5e55b90-6418-4fc6-b0f2-8ba568d3fad2-kube-api-access-kf56k\") on node \"crc\" DevicePath \"\"" Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.769632 4624 generic.go:334] "Generic (PLEG): container finished" podID="e5e55b90-6418-4fc6-b0f2-8ba568d3fad2" containerID="c158a4ac97a6fa532f57e53c09eae32fbaa10e6088110b93ee2a5e3db0a6bfe7" exitCode=0 Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.769697 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqtn5" event={"ID":"e5e55b90-6418-4fc6-b0f2-8ba568d3fad2","Type":"ContainerDied","Data":"c158a4ac97a6fa532f57e53c09eae32fbaa10e6088110b93ee2a5e3db0a6bfe7"} Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.769730 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqtn5" event={"ID":"e5e55b90-6418-4fc6-b0f2-8ba568d3fad2","Type":"ContainerDied","Data":"608304f5fcd6b5fb746a9c843c65ce6f991c6af05f9817812259aba79c61ce22"} Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.769749 4624 scope.go:117] "RemoveContainer" containerID="c158a4ac97a6fa532f57e53c09eae32fbaa10e6088110b93ee2a5e3db0a6bfe7" Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.769944 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nqtn5" Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.811309 4624 scope.go:117] "RemoveContainer" containerID="efd08dfdb7db5b7d727a8e7c54313b2f30c888756300ce19c96bd96c4dac5abb" Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.822440 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nqtn5"] Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.838089 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nqtn5"] Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.847688 4624 scope.go:117] "RemoveContainer" containerID="31bb1d6fdd4c58f4c30e2d34985fe3a63911af3bc5085b99c60f892de087c0cd" Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.899179 4624 scope.go:117] "RemoveContainer" containerID="c158a4ac97a6fa532f57e53c09eae32fbaa10e6088110b93ee2a5e3db0a6bfe7" Oct 08 16:09:17 crc kubenswrapper[4624]: E1008 16:09:17.900682 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c158a4ac97a6fa532f57e53c09eae32fbaa10e6088110b93ee2a5e3db0a6bfe7\": container with ID starting with c158a4ac97a6fa532f57e53c09eae32fbaa10e6088110b93ee2a5e3db0a6bfe7 not found: ID does not exist" containerID="c158a4ac97a6fa532f57e53c09eae32fbaa10e6088110b93ee2a5e3db0a6bfe7" Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.900768 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c158a4ac97a6fa532f57e53c09eae32fbaa10e6088110b93ee2a5e3db0a6bfe7"} err="failed to get container status \"c158a4ac97a6fa532f57e53c09eae32fbaa10e6088110b93ee2a5e3db0a6bfe7\": rpc error: code = NotFound desc = could not find container \"c158a4ac97a6fa532f57e53c09eae32fbaa10e6088110b93ee2a5e3db0a6bfe7\": container with ID starting with c158a4ac97a6fa532f57e53c09eae32fbaa10e6088110b93ee2a5e3db0a6bfe7 not found: ID does not exist" Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.900811 4624 scope.go:117] "RemoveContainer" containerID="efd08dfdb7db5b7d727a8e7c54313b2f30c888756300ce19c96bd96c4dac5abb" Oct 08 16:09:17 crc kubenswrapper[4624]: E1008 16:09:17.901302 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efd08dfdb7db5b7d727a8e7c54313b2f30c888756300ce19c96bd96c4dac5abb\": container with ID starting with efd08dfdb7db5b7d727a8e7c54313b2f30c888756300ce19c96bd96c4dac5abb not found: ID does not exist" containerID="efd08dfdb7db5b7d727a8e7c54313b2f30c888756300ce19c96bd96c4dac5abb" Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.901357 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd08dfdb7db5b7d727a8e7c54313b2f30c888756300ce19c96bd96c4dac5abb"} err="failed to get container status \"efd08dfdb7db5b7d727a8e7c54313b2f30c888756300ce19c96bd96c4dac5abb\": rpc error: code = NotFound desc = could not find container \"efd08dfdb7db5b7d727a8e7c54313b2f30c888756300ce19c96bd96c4dac5abb\": container with ID starting with efd08dfdb7db5b7d727a8e7c54313b2f30c888756300ce19c96bd96c4dac5abb not found: ID does not exist" Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.901399 4624 scope.go:117] "RemoveContainer" containerID="31bb1d6fdd4c58f4c30e2d34985fe3a63911af3bc5085b99c60f892de087c0cd" Oct 08 16:09:17 crc kubenswrapper[4624]: E1008 16:09:17.901790 4624 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"31bb1d6fdd4c58f4c30e2d34985fe3a63911af3bc5085b99c60f892de087c0cd\": container with ID starting with 31bb1d6fdd4c58f4c30e2d34985fe3a63911af3bc5085b99c60f892de087c0cd not found: ID does not exist" containerID="31bb1d6fdd4c58f4c30e2d34985fe3a63911af3bc5085b99c60f892de087c0cd" Oct 08 16:09:17 crc kubenswrapper[4624]: I1008 16:09:17.901847 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31bb1d6fdd4c58f4c30e2d34985fe3a63911af3bc5085b99c60f892de087c0cd"} err="failed to get container status \"31bb1d6fdd4c58f4c30e2d34985fe3a63911af3bc5085b99c60f892de087c0cd\": rpc error: code = NotFound desc = could not find container \"31bb1d6fdd4c58f4c30e2d34985fe3a63911af3bc5085b99c60f892de087c0cd\": container with ID starting with 31bb1d6fdd4c58f4c30e2d34985fe3a63911af3bc5085b99c60f892de087c0cd not found: ID does not exist" Oct 08 16:09:19 crc kubenswrapper[4624]: I1008 16:09:19.485573 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5e55b90-6418-4fc6-b0f2-8ba568d3fad2" path="/var/lib/kubelet/pods/e5e55b90-6418-4fc6-b0f2-8ba568d3fad2/volumes" Oct 08 16:09:30 crc kubenswrapper[4624]: I1008 16:09:30.076534 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 16:09:30 crc kubenswrapper[4624]: I1008 16:09:30.077461 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 16:10:00 crc kubenswrapper[4624]: I1008 16:10:00.077098 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 16:10:00 crc kubenswrapper[4624]: I1008 16:10:00.078103 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 16:10:00 crc kubenswrapper[4624]: I1008 16:10:00.078211 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 16:10:00 crc kubenswrapper[4624]: I1008 16:10:00.079954 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 16:10:00 crc kubenswrapper[4624]: I1008 16:10:00.080078 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" 
podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" gracePeriod=600 Oct 08 16:10:00 crc kubenswrapper[4624]: E1008 16:10:00.203265 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:10:00 crc kubenswrapper[4624]: I1008 16:10:00.258329 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" exitCode=0 Oct 08 16:10:00 crc kubenswrapper[4624]: I1008 16:10:00.258390 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589"} Oct 08 16:10:00 crc kubenswrapper[4624]: I1008 16:10:00.258438 4624 scope.go:117] "RemoveContainer" containerID="4dd7282757c330a044cf36d0a5cacc7696b29b92fdb83bd080e2547865f11d79" Oct 08 16:10:00 crc kubenswrapper[4624]: I1008 16:10:00.260535 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:10:00 crc kubenswrapper[4624]: E1008 16:10:00.261198 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:10:15 crc kubenswrapper[4624]: I1008 16:10:15.472819 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:10:15 crc kubenswrapper[4624]: E1008 16:10:15.474272 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:10:26 crc kubenswrapper[4624]: I1008 16:10:26.466691 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:10:26 crc kubenswrapper[4624]: E1008 16:10:26.467657 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:10:40 crc kubenswrapper[4624]: I1008 16:10:40.466937 4624 scope.go:117] 
"RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:10:40 crc kubenswrapper[4624]: E1008 16:10:40.467820 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:10:55 crc kubenswrapper[4624]: I1008 16:10:55.484256 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:10:55 crc kubenswrapper[4624]: E1008 16:10:55.485635 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:11:10 crc kubenswrapper[4624]: I1008 16:11:10.466718 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:11:10 crc kubenswrapper[4624]: E1008 16:11:10.467674 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:11:24 crc kubenswrapper[4624]: I1008 16:11:24.466462 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:11:24 crc kubenswrapper[4624]: E1008 16:11:24.467286 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:11:38 crc kubenswrapper[4624]: I1008 16:11:38.466271 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:11:38 crc kubenswrapper[4624]: E1008 16:11:38.467169 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:11:51 crc kubenswrapper[4624]: I1008 16:11:51.466288 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:11:51 crc kubenswrapper[4624]: E1008 16:11:51.467222 4624 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:12:06 crc kubenswrapper[4624]: I1008 16:12:06.466471 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:12:06 crc kubenswrapper[4624]: E1008 16:12:06.467475 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:12:21 crc kubenswrapper[4624]: I1008 16:12:21.466925 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:12:21 crc kubenswrapper[4624]: E1008 16:12:21.468076 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:12:35 crc kubenswrapper[4624]: I1008 16:12:35.480946 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:12:35 crc kubenswrapper[4624]: E1008 16:12:35.481760 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:12:47 crc kubenswrapper[4624]: I1008 16:12:47.466430 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:12:47 crc kubenswrapper[4624]: E1008 16:12:47.468190 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:12:49 crc kubenswrapper[4624]: I1008 16:12:49.085006 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rmxbk"] Oct 08 16:12:49 crc kubenswrapper[4624]: E1008 16:12:49.085694 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e55b90-6418-4fc6-b0f2-8ba568d3fad2" containerName="extract-utilities" Oct 08 16:12:49 crc kubenswrapper[4624]: I1008 16:12:49.085723 4624 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="e5e55b90-6418-4fc6-b0f2-8ba568d3fad2" containerName="extract-utilities" Oct 08 16:12:49 crc kubenswrapper[4624]: E1008 16:12:49.085750 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e55b90-6418-4fc6-b0f2-8ba568d3fad2" containerName="registry-server" Oct 08 16:12:49 crc kubenswrapper[4624]: I1008 16:12:49.085759 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e55b90-6418-4fc6-b0f2-8ba568d3fad2" containerName="registry-server" Oct 08 16:12:49 crc kubenswrapper[4624]: E1008 16:12:49.085788 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e55b90-6418-4fc6-b0f2-8ba568d3fad2" containerName="extract-content" Oct 08 16:12:49 crc kubenswrapper[4624]: I1008 16:12:49.085820 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e55b90-6418-4fc6-b0f2-8ba568d3fad2" containerName="extract-content" Oct 08 16:12:49 crc kubenswrapper[4624]: I1008 16:12:49.086080 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e55b90-6418-4fc6-b0f2-8ba568d3fad2" containerName="registry-server" Oct 08 16:12:49 crc kubenswrapper[4624]: I1008 16:12:49.088322 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rmxbk" Oct 08 16:12:49 crc kubenswrapper[4624]: I1008 16:12:49.101034 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rmxbk"] Oct 08 16:12:49 crc kubenswrapper[4624]: I1008 16:12:49.202558 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134b018c-3967-4dc8-a3cc-c93968a2e0a6-catalog-content\") pod \"community-operators-rmxbk\" (UID: \"134b018c-3967-4dc8-a3cc-c93968a2e0a6\") " pod="openshift-marketplace/community-operators-rmxbk" Oct 08 16:12:49 crc kubenswrapper[4624]: I1008 16:12:49.202868 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134b018c-3967-4dc8-a3cc-c93968a2e0a6-utilities\") pod \"community-operators-rmxbk\" (UID: \"134b018c-3967-4dc8-a3cc-c93968a2e0a6\") " pod="openshift-marketplace/community-operators-rmxbk" Oct 08 16:12:49 crc kubenswrapper[4624]: I1008 16:12:49.203289 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb7tc\" (UniqueName: \"kubernetes.io/projected/134b018c-3967-4dc8-a3cc-c93968a2e0a6-kube-api-access-cb7tc\") pod \"community-operators-rmxbk\" (UID: \"134b018c-3967-4dc8-a3cc-c93968a2e0a6\") " pod="openshift-marketplace/community-operators-rmxbk" Oct 08 16:12:49 crc kubenswrapper[4624]: I1008 16:12:49.305655 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134b018c-3967-4dc8-a3cc-c93968a2e0a6-utilities\") pod \"community-operators-rmxbk\" (UID: \"134b018c-3967-4dc8-a3cc-c93968a2e0a6\") " pod="openshift-marketplace/community-operators-rmxbk" Oct 08 16:12:49 crc kubenswrapper[4624]: I1008 16:12:49.305816 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb7tc\" (UniqueName: \"kubernetes.io/projected/134b018c-3967-4dc8-a3cc-c93968a2e0a6-kube-api-access-cb7tc\") pod \"community-operators-rmxbk\" (UID: \"134b018c-3967-4dc8-a3cc-c93968a2e0a6\") " pod="openshift-marketplace/community-operators-rmxbk" Oct 08 16:12:49 crc kubenswrapper[4624]: 
I1008 16:12:49.306264 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134b018c-3967-4dc8-a3cc-c93968a2e0a6-catalog-content\") pod \"community-operators-rmxbk\" (UID: \"134b018c-3967-4dc8-a3cc-c93968a2e0a6\") " pod="openshift-marketplace/community-operators-rmxbk" Oct 08 16:12:49 crc kubenswrapper[4624]: I1008 16:12:49.306303 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134b018c-3967-4dc8-a3cc-c93968a2e0a6-utilities\") pod \"community-operators-rmxbk\" (UID: \"134b018c-3967-4dc8-a3cc-c93968a2e0a6\") " pod="openshift-marketplace/community-operators-rmxbk" Oct 08 16:12:49 crc kubenswrapper[4624]: I1008 16:12:49.306863 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134b018c-3967-4dc8-a3cc-c93968a2e0a6-catalog-content\") pod \"community-operators-rmxbk\" (UID: \"134b018c-3967-4dc8-a3cc-c93968a2e0a6\") " pod="openshift-marketplace/community-operators-rmxbk" Oct 08 16:12:49 crc kubenswrapper[4624]: I1008 16:12:49.333215 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb7tc\" (UniqueName: \"kubernetes.io/projected/134b018c-3967-4dc8-a3cc-c93968a2e0a6-kube-api-access-cb7tc\") pod \"community-operators-rmxbk\" (UID: \"134b018c-3967-4dc8-a3cc-c93968a2e0a6\") " pod="openshift-marketplace/community-operators-rmxbk" Oct 08 16:12:49 crc kubenswrapper[4624]: I1008 16:12:49.414996 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rmxbk" Oct 08 16:12:50 crc kubenswrapper[4624]: I1008 16:12:50.149341 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rmxbk"] Oct 08 16:12:51 crc kubenswrapper[4624]: I1008 16:12:51.164994 4624 generic.go:334] "Generic (PLEG): container finished" podID="134b018c-3967-4dc8-a3cc-c93968a2e0a6" containerID="11275b8480a0159fee4d14d86b378699703b0e11afd27e50505012ffb4b68813" exitCode=0 Oct 08 16:12:51 crc kubenswrapper[4624]: I1008 16:12:51.165028 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmxbk" event={"ID":"134b018c-3967-4dc8-a3cc-c93968a2e0a6","Type":"ContainerDied","Data":"11275b8480a0159fee4d14d86b378699703b0e11afd27e50505012ffb4b68813"} Oct 08 16:12:51 crc kubenswrapper[4624]: I1008 16:12:51.165406 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmxbk" event={"ID":"134b018c-3967-4dc8-a3cc-c93968a2e0a6","Type":"ContainerStarted","Data":"750ed73c06b8c434f5bfd80fc0a141a0b0df2dc0a64facb06f7e749cf5bacbd0"} Oct 08 16:12:51 crc kubenswrapper[4624]: I1008 16:12:51.167620 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 16:12:53 crc kubenswrapper[4624]: I1008 16:12:53.190212 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmxbk" event={"ID":"134b018c-3967-4dc8-a3cc-c93968a2e0a6","Type":"ContainerStarted","Data":"d29353e18e2fef548f0e961c56fa2e6a0728f98a4c780d8ecd56e95d4caaf8e5"} Oct 08 16:12:54 crc kubenswrapper[4624]: I1008 16:12:54.207433 4624 generic.go:334] "Generic (PLEG): container finished" podID="134b018c-3967-4dc8-a3cc-c93968a2e0a6" containerID="d29353e18e2fef548f0e961c56fa2e6a0728f98a4c780d8ecd56e95d4caaf8e5" exitCode=0 Oct 08 16:12:54 crc 
kubenswrapper[4624]: I1008 16:12:54.207523 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmxbk" event={"ID":"134b018c-3967-4dc8-a3cc-c93968a2e0a6","Type":"ContainerDied","Data":"d29353e18e2fef548f0e961c56fa2e6a0728f98a4c780d8ecd56e95d4caaf8e5"} Oct 08 16:12:56 crc kubenswrapper[4624]: I1008 16:12:56.237305 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmxbk" event={"ID":"134b018c-3967-4dc8-a3cc-c93968a2e0a6","Type":"ContainerStarted","Data":"f93faab2720fff4a7c3d3373311de4facb9bd6d53ba119dbdf722f8db762540d"} Oct 08 16:12:56 crc kubenswrapper[4624]: I1008 16:12:56.279953 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rmxbk" podStartSLOduration=3.465071328 podStartE2EDuration="7.279917157s" podCreationTimestamp="2025-10-08 16:12:49 +0000 UTC" firstStartedPulling="2025-10-08 16:12:51.167319324 +0000 UTC m=+6596.318254411" lastFinishedPulling="2025-10-08 16:12:54.982165143 +0000 UTC m=+6600.133100240" observedRunningTime="2025-10-08 16:12:56.260753003 +0000 UTC m=+6601.411688090" watchObservedRunningTime="2025-10-08 16:12:56.279917157 +0000 UTC m=+6601.430852234" Oct 08 16:12:59 crc kubenswrapper[4624]: I1008 16:12:59.415124 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rmxbk" Oct 08 16:12:59 crc kubenswrapper[4624]: I1008 16:12:59.415484 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rmxbk" Oct 08 16:13:00 crc kubenswrapper[4624]: I1008 16:13:00.471291 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-rmxbk" podUID="134b018c-3967-4dc8-a3cc-c93968a2e0a6" containerName="registry-server" probeResult="failure" output=< Oct 08 16:13:00 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 16:13:00 crc kubenswrapper[4624]: > Oct 08 16:13:02 crc kubenswrapper[4624]: I1008 16:13:02.466265 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:13:02 crc kubenswrapper[4624]: E1008 16:13:02.466810 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:13:09 crc kubenswrapper[4624]: I1008 16:13:09.485195 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rmxbk" Oct 08 16:13:09 crc kubenswrapper[4624]: I1008 16:13:09.557868 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rmxbk" Oct 08 16:13:09 crc kubenswrapper[4624]: I1008 16:13:09.727672 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rmxbk"] Oct 08 16:13:11 crc kubenswrapper[4624]: I1008 16:13:11.413429 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rmxbk" podUID="134b018c-3967-4dc8-a3cc-c93968a2e0a6" containerName="registry-server" 
containerID="cri-o://f93faab2720fff4a7c3d3373311de4facb9bd6d53ba119dbdf722f8db762540d" gracePeriod=2 Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.059966 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rmxbk" Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.114417 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134b018c-3967-4dc8-a3cc-c93968a2e0a6-utilities\") pod \"134b018c-3967-4dc8-a3cc-c93968a2e0a6\" (UID: \"134b018c-3967-4dc8-a3cc-c93968a2e0a6\") " Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.114618 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134b018c-3967-4dc8-a3cc-c93968a2e0a6-catalog-content\") pod \"134b018c-3967-4dc8-a3cc-c93968a2e0a6\" (UID: \"134b018c-3967-4dc8-a3cc-c93968a2e0a6\") " Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.116330 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/134b018c-3967-4dc8-a3cc-c93968a2e0a6-utilities" (OuterVolumeSpecName: "utilities") pod "134b018c-3967-4dc8-a3cc-c93968a2e0a6" (UID: "134b018c-3967-4dc8-a3cc-c93968a2e0a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.188813 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/134b018c-3967-4dc8-a3cc-c93968a2e0a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "134b018c-3967-4dc8-a3cc-c93968a2e0a6" (UID: "134b018c-3967-4dc8-a3cc-c93968a2e0a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.216127 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb7tc\" (UniqueName: \"kubernetes.io/projected/134b018c-3967-4dc8-a3cc-c93968a2e0a6-kube-api-access-cb7tc\") pod \"134b018c-3967-4dc8-a3cc-c93968a2e0a6\" (UID: \"134b018c-3967-4dc8-a3cc-c93968a2e0a6\") " Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.216764 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134b018c-3967-4dc8-a3cc-c93968a2e0a6-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.216789 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134b018c-3967-4dc8-a3cc-c93968a2e0a6-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.229701 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/134b018c-3967-4dc8-a3cc-c93968a2e0a6-kube-api-access-cb7tc" (OuterVolumeSpecName: "kube-api-access-cb7tc") pod "134b018c-3967-4dc8-a3cc-c93968a2e0a6" (UID: "134b018c-3967-4dc8-a3cc-c93968a2e0a6"). InnerVolumeSpecName "kube-api-access-cb7tc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.318870 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb7tc\" (UniqueName: \"kubernetes.io/projected/134b018c-3967-4dc8-a3cc-c93968a2e0a6-kube-api-access-cb7tc\") on node \"crc\" DevicePath \"\"" Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.428528 4624 generic.go:334] "Generic (PLEG): container finished" podID="134b018c-3967-4dc8-a3cc-c93968a2e0a6" containerID="f93faab2720fff4a7c3d3373311de4facb9bd6d53ba119dbdf722f8db762540d" exitCode=0 Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.428628 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmxbk" event={"ID":"134b018c-3967-4dc8-a3cc-c93968a2e0a6","Type":"ContainerDied","Data":"f93faab2720fff4a7c3d3373311de4facb9bd6d53ba119dbdf722f8db762540d"} Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.428693 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rmxbk" Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.428735 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmxbk" event={"ID":"134b018c-3967-4dc8-a3cc-c93968a2e0a6","Type":"ContainerDied","Data":"750ed73c06b8c434f5bfd80fc0a141a0b0df2dc0a64facb06f7e749cf5bacbd0"} Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.428768 4624 scope.go:117] "RemoveContainer" containerID="f93faab2720fff4a7c3d3373311de4facb9bd6d53ba119dbdf722f8db762540d" Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.459006 4624 scope.go:117] "RemoveContainer" containerID="d29353e18e2fef548f0e961c56fa2e6a0728f98a4c780d8ecd56e95d4caaf8e5" Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.489312 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rmxbk"] Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.491564 4624 scope.go:117] "RemoveContainer" containerID="11275b8480a0159fee4d14d86b378699703b0e11afd27e50505012ffb4b68813" Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.505800 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rmxbk"] Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.557503 4624 scope.go:117] "RemoveContainer" containerID="f93faab2720fff4a7c3d3373311de4facb9bd6d53ba119dbdf722f8db762540d" Oct 08 16:13:12 crc kubenswrapper[4624]: E1008 16:13:12.558083 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f93faab2720fff4a7c3d3373311de4facb9bd6d53ba119dbdf722f8db762540d\": container with ID starting with f93faab2720fff4a7c3d3373311de4facb9bd6d53ba119dbdf722f8db762540d not found: ID does not exist" containerID="f93faab2720fff4a7c3d3373311de4facb9bd6d53ba119dbdf722f8db762540d" Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.558116 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f93faab2720fff4a7c3d3373311de4facb9bd6d53ba119dbdf722f8db762540d"} err="failed to get container status \"f93faab2720fff4a7c3d3373311de4facb9bd6d53ba119dbdf722f8db762540d\": rpc error: code = NotFound desc = could not find container \"f93faab2720fff4a7c3d3373311de4facb9bd6d53ba119dbdf722f8db762540d\": container with ID starting with f93faab2720fff4a7c3d3373311de4facb9bd6d53ba119dbdf722f8db762540d not found: ID does not exist" 
Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.558140 4624 scope.go:117] "RemoveContainer" containerID="d29353e18e2fef548f0e961c56fa2e6a0728f98a4c780d8ecd56e95d4caaf8e5" Oct 08 16:13:12 crc kubenswrapper[4624]: E1008 16:13:12.559300 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d29353e18e2fef548f0e961c56fa2e6a0728f98a4c780d8ecd56e95d4caaf8e5\": container with ID starting with d29353e18e2fef548f0e961c56fa2e6a0728f98a4c780d8ecd56e95d4caaf8e5 not found: ID does not exist" containerID="d29353e18e2fef548f0e961c56fa2e6a0728f98a4c780d8ecd56e95d4caaf8e5" Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.559325 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d29353e18e2fef548f0e961c56fa2e6a0728f98a4c780d8ecd56e95d4caaf8e5"} err="failed to get container status \"d29353e18e2fef548f0e961c56fa2e6a0728f98a4c780d8ecd56e95d4caaf8e5\": rpc error: code = NotFound desc = could not find container \"d29353e18e2fef548f0e961c56fa2e6a0728f98a4c780d8ecd56e95d4caaf8e5\": container with ID starting with d29353e18e2fef548f0e961c56fa2e6a0728f98a4c780d8ecd56e95d4caaf8e5 not found: ID does not exist" Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.559342 4624 scope.go:117] "RemoveContainer" containerID="11275b8480a0159fee4d14d86b378699703b0e11afd27e50505012ffb4b68813" Oct 08 16:13:12 crc kubenswrapper[4624]: E1008 16:13:12.559692 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11275b8480a0159fee4d14d86b378699703b0e11afd27e50505012ffb4b68813\": container with ID starting with 11275b8480a0159fee4d14d86b378699703b0e11afd27e50505012ffb4b68813 not found: ID does not exist" containerID="11275b8480a0159fee4d14d86b378699703b0e11afd27e50505012ffb4b68813" Oct 08 16:13:12 crc kubenswrapper[4624]: I1008 16:13:12.559719 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11275b8480a0159fee4d14d86b378699703b0e11afd27e50505012ffb4b68813"} err="failed to get container status \"11275b8480a0159fee4d14d86b378699703b0e11afd27e50505012ffb4b68813\": rpc error: code = NotFound desc = could not find container \"11275b8480a0159fee4d14d86b378699703b0e11afd27e50505012ffb4b68813\": container with ID starting with 11275b8480a0159fee4d14d86b378699703b0e11afd27e50505012ffb4b68813 not found: ID does not exist" Oct 08 16:13:13 crc kubenswrapper[4624]: I1008 16:13:13.482971 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="134b018c-3967-4dc8-a3cc-c93968a2e0a6" path="/var/lib/kubelet/pods/134b018c-3967-4dc8-a3cc-c93968a2e0a6/volumes" Oct 08 16:13:15 crc kubenswrapper[4624]: I1008 16:13:15.476013 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:13:15 crc kubenswrapper[4624]: E1008 16:13:15.476938 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:13:27 crc kubenswrapper[4624]: I1008 16:13:27.466912 4624 scope.go:117] "RemoveContainer" 
containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:13:27 crc kubenswrapper[4624]: E1008 16:13:27.467848 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:13:40 crc kubenswrapper[4624]: I1008 16:13:40.466974 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:13:40 crc kubenswrapper[4624]: E1008 16:13:40.468036 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:13:54 crc kubenswrapper[4624]: I1008 16:13:54.466220 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:13:54 crc kubenswrapper[4624]: E1008 16:13:54.467065 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:14:05 crc kubenswrapper[4624]: I1008 16:14:05.474352 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:14:05 crc kubenswrapper[4624]: E1008 16:14:05.476138 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:14:18 crc kubenswrapper[4624]: I1008 16:14:18.466753 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:14:18 crc kubenswrapper[4624]: E1008 16:14:18.467541 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:14:33 crc kubenswrapper[4624]: I1008 16:14:33.466889 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:14:33 crc kubenswrapper[4624]: E1008 16:14:33.467841 4624 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:14:46 crc kubenswrapper[4624]: I1008 16:14:46.466368 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:14:46 crc kubenswrapper[4624]: E1008 16:14:46.468960 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:14:57 crc kubenswrapper[4624]: I1008 16:14:57.466476 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:14:57 crc kubenswrapper[4624]: E1008 16:14:57.467242 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:15:00 crc kubenswrapper[4624]: I1008 16:15:00.258820 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz"] Oct 08 16:15:00 crc kubenswrapper[4624]: E1008 16:15:00.262200 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134b018c-3967-4dc8-a3cc-c93968a2e0a6" containerName="extract-utilities" Oct 08 16:15:00 crc kubenswrapper[4624]: I1008 16:15:00.262441 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="134b018c-3967-4dc8-a3cc-c93968a2e0a6" containerName="extract-utilities" Oct 08 16:15:00 crc kubenswrapper[4624]: E1008 16:15:00.262537 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134b018c-3967-4dc8-a3cc-c93968a2e0a6" containerName="extract-content" Oct 08 16:15:00 crc kubenswrapper[4624]: I1008 16:15:00.262595 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="134b018c-3967-4dc8-a3cc-c93968a2e0a6" containerName="extract-content" Oct 08 16:15:00 crc kubenswrapper[4624]: E1008 16:15:00.262711 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134b018c-3967-4dc8-a3cc-c93968a2e0a6" containerName="registry-server" Oct 08 16:15:00 crc kubenswrapper[4624]: I1008 16:15:00.262798 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="134b018c-3967-4dc8-a3cc-c93968a2e0a6" containerName="registry-server" Oct 08 16:15:00 crc kubenswrapper[4624]: I1008 16:15:00.263112 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="134b018c-3967-4dc8-a3cc-c93968a2e0a6" containerName="registry-server" Oct 08 16:15:00 crc kubenswrapper[4624]: I1008 16:15:00.264291 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz" Oct 08 16:15:00 crc kubenswrapper[4624]: I1008 16:15:00.274163 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz"] Oct 08 16:15:00 crc kubenswrapper[4624]: I1008 16:15:00.312665 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 16:15:00 crc kubenswrapper[4624]: I1008 16:15:00.312767 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 16:15:00 crc kubenswrapper[4624]: I1008 16:15:00.361336 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccntx\" (UniqueName: \"kubernetes.io/projected/c3d66eab-4886-4f59-aacb-c126f01f0f05-kube-api-access-ccntx\") pod \"collect-profiles-29332335-k8hxz\" (UID: \"c3d66eab-4886-4f59-aacb-c126f01f0f05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz" Oct 08 16:15:00 crc kubenswrapper[4624]: I1008 16:15:00.362727 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3d66eab-4886-4f59-aacb-c126f01f0f05-config-volume\") pod \"collect-profiles-29332335-k8hxz\" (UID: \"c3d66eab-4886-4f59-aacb-c126f01f0f05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz" Oct 08 16:15:00 crc kubenswrapper[4624]: I1008 16:15:00.362911 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3d66eab-4886-4f59-aacb-c126f01f0f05-secret-volume\") pod \"collect-profiles-29332335-k8hxz\" (UID: \"c3d66eab-4886-4f59-aacb-c126f01f0f05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz" Oct 08 16:15:00 crc kubenswrapper[4624]: I1008 16:15:00.465501 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3d66eab-4886-4f59-aacb-c126f01f0f05-config-volume\") pod \"collect-profiles-29332335-k8hxz\" (UID: \"c3d66eab-4886-4f59-aacb-c126f01f0f05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz" Oct 08 16:15:00 crc kubenswrapper[4624]: I1008 16:15:00.465574 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3d66eab-4886-4f59-aacb-c126f01f0f05-secret-volume\") pod \"collect-profiles-29332335-k8hxz\" (UID: \"c3d66eab-4886-4f59-aacb-c126f01f0f05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz" Oct 08 16:15:00 crc kubenswrapper[4624]: I1008 16:15:00.465680 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccntx\" (UniqueName: \"kubernetes.io/projected/c3d66eab-4886-4f59-aacb-c126f01f0f05-kube-api-access-ccntx\") pod \"collect-profiles-29332335-k8hxz\" (UID: \"c3d66eab-4886-4f59-aacb-c126f01f0f05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz" Oct 08 16:15:00 crc kubenswrapper[4624]: I1008 16:15:00.466747 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3d66eab-4886-4f59-aacb-c126f01f0f05-config-volume\") pod 
\"collect-profiles-29332335-k8hxz\" (UID: \"c3d66eab-4886-4f59-aacb-c126f01f0f05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz" Oct 08 16:15:00 crc kubenswrapper[4624]: I1008 16:15:00.475140 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3d66eab-4886-4f59-aacb-c126f01f0f05-secret-volume\") pod \"collect-profiles-29332335-k8hxz\" (UID: \"c3d66eab-4886-4f59-aacb-c126f01f0f05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz" Oct 08 16:15:00 crc kubenswrapper[4624]: I1008 16:15:00.486098 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccntx\" (UniqueName: \"kubernetes.io/projected/c3d66eab-4886-4f59-aacb-c126f01f0f05-kube-api-access-ccntx\") pod \"collect-profiles-29332335-k8hxz\" (UID: \"c3d66eab-4886-4f59-aacb-c126f01f0f05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz" Oct 08 16:15:00 crc kubenswrapper[4624]: I1008 16:15:00.615183 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz" Oct 08 16:15:01 crc kubenswrapper[4624]: I1008 16:15:01.185107 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz"] Oct 08 16:15:01 crc kubenswrapper[4624]: I1008 16:15:01.643613 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz" event={"ID":"c3d66eab-4886-4f59-aacb-c126f01f0f05","Type":"ContainerStarted","Data":"3879c7cdb413a19a7e85cdda953794ce55db87e863b96b79aead34aba9174f2b"} Oct 08 16:15:01 crc kubenswrapper[4624]: I1008 16:15:01.644056 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz" event={"ID":"c3d66eab-4886-4f59-aacb-c126f01f0f05","Type":"ContainerStarted","Data":"3c46ebdfc8e1f03d270208f68c3e5e80dcc070e540f31ff4d9822db565be3c12"} Oct 08 16:15:01 crc kubenswrapper[4624]: I1008 16:15:01.661777 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz" podStartSLOduration=1.6617478239999999 podStartE2EDuration="1.661747824s" podCreationTimestamp="2025-10-08 16:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 16:15:01.658560084 +0000 UTC m=+6726.809495161" watchObservedRunningTime="2025-10-08 16:15:01.661747824 +0000 UTC m=+6726.812682911" Oct 08 16:15:03 crc kubenswrapper[4624]: I1008 16:15:03.667903 4624 generic.go:334] "Generic (PLEG): container finished" podID="c3d66eab-4886-4f59-aacb-c126f01f0f05" containerID="3879c7cdb413a19a7e85cdda953794ce55db87e863b96b79aead34aba9174f2b" exitCode=0 Oct 08 16:15:03 crc kubenswrapper[4624]: I1008 16:15:03.668432 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz" event={"ID":"c3d66eab-4886-4f59-aacb-c126f01f0f05","Type":"ContainerDied","Data":"3879c7cdb413a19a7e85cdda953794ce55db87e863b96b79aead34aba9174f2b"} Oct 08 16:15:05 crc kubenswrapper[4624]: I1008 16:15:05.174390 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz" Oct 08 16:15:05 crc kubenswrapper[4624]: I1008 16:15:05.286266 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3d66eab-4886-4f59-aacb-c126f01f0f05-config-volume\") pod \"c3d66eab-4886-4f59-aacb-c126f01f0f05\" (UID: \"c3d66eab-4886-4f59-aacb-c126f01f0f05\") " Oct 08 16:15:05 crc kubenswrapper[4624]: I1008 16:15:05.286334 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccntx\" (UniqueName: \"kubernetes.io/projected/c3d66eab-4886-4f59-aacb-c126f01f0f05-kube-api-access-ccntx\") pod \"c3d66eab-4886-4f59-aacb-c126f01f0f05\" (UID: \"c3d66eab-4886-4f59-aacb-c126f01f0f05\") " Oct 08 16:15:05 crc kubenswrapper[4624]: I1008 16:15:05.286441 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3d66eab-4886-4f59-aacb-c126f01f0f05-secret-volume\") pod \"c3d66eab-4886-4f59-aacb-c126f01f0f05\" (UID: \"c3d66eab-4886-4f59-aacb-c126f01f0f05\") " Oct 08 16:15:05 crc kubenswrapper[4624]: I1008 16:15:05.288372 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3d66eab-4886-4f59-aacb-c126f01f0f05-config-volume" (OuterVolumeSpecName: "config-volume") pod "c3d66eab-4886-4f59-aacb-c126f01f0f05" (UID: "c3d66eab-4886-4f59-aacb-c126f01f0f05"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 16:15:05 crc kubenswrapper[4624]: I1008 16:15:05.295154 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3d66eab-4886-4f59-aacb-c126f01f0f05-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c3d66eab-4886-4f59-aacb-c126f01f0f05" (UID: "c3d66eab-4886-4f59-aacb-c126f01f0f05"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 16:15:05 crc kubenswrapper[4624]: I1008 16:15:05.297353 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3d66eab-4886-4f59-aacb-c126f01f0f05-kube-api-access-ccntx" (OuterVolumeSpecName: "kube-api-access-ccntx") pod "c3d66eab-4886-4f59-aacb-c126f01f0f05" (UID: "c3d66eab-4886-4f59-aacb-c126f01f0f05"). InnerVolumeSpecName "kube-api-access-ccntx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 16:15:05 crc kubenswrapper[4624]: I1008 16:15:05.389165 4624 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3d66eab-4886-4f59-aacb-c126f01f0f05-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 16:15:05 crc kubenswrapper[4624]: I1008 16:15:05.389236 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccntx\" (UniqueName: \"kubernetes.io/projected/c3d66eab-4886-4f59-aacb-c126f01f0f05-kube-api-access-ccntx\") on node \"crc\" DevicePath \"\"" Oct 08 16:15:05 crc kubenswrapper[4624]: I1008 16:15:05.389254 4624 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3d66eab-4886-4f59-aacb-c126f01f0f05-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 16:15:05 crc kubenswrapper[4624]: I1008 16:15:05.707928 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz" event={"ID":"c3d66eab-4886-4f59-aacb-c126f01f0f05","Type":"ContainerDied","Data":"3c46ebdfc8e1f03d270208f68c3e5e80dcc070e540f31ff4d9822db565be3c12"} Oct 08 16:15:05 crc kubenswrapper[4624]: I1008 16:15:05.708129 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz" Oct 08 16:15:05 crc kubenswrapper[4624]: I1008 16:15:05.708359 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c46ebdfc8e1f03d270208f68c3e5e80dcc070e540f31ff4d9822db565be3c12" Oct 08 16:15:05 crc kubenswrapper[4624]: I1008 16:15:05.784993 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7"] Oct 08 16:15:05 crc kubenswrapper[4624]: I1008 16:15:05.795209 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332290-dr8n7"] Oct 08 16:15:07 crc kubenswrapper[4624]: I1008 16:15:07.484998 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90596db7-9869-43ee-bebd-750ae0727cc8" path="/var/lib/kubelet/pods/90596db7-9869-43ee-bebd-750ae0727cc8/volumes" Oct 08 16:15:12 crc kubenswrapper[4624]: I1008 16:15:12.466460 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:15:12 crc kubenswrapper[4624]: I1008 16:15:12.787988 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"52c18357016acbbd6b1e1648f48871179024590dde11b0c085b4a2ee195c6bad"} Oct 08 16:15:45 crc kubenswrapper[4624]: I1008 16:15:45.844692 4624 scope.go:117] "RemoveContainer" containerID="ed1fd668b44334640a6f5a3f0f901fdb4031fcb6efa44287095edf5cd64d8237" Oct 08 16:15:58 crc kubenswrapper[4624]: I1008 16:15:58.662669 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t5m8w"] Oct 08 16:15:58 crc kubenswrapper[4624]: E1008 16:15:58.663683 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3d66eab-4886-4f59-aacb-c126f01f0f05" containerName="collect-profiles" Oct 08 16:15:58 crc kubenswrapper[4624]: I1008 16:15:58.663697 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3d66eab-4886-4f59-aacb-c126f01f0f05" containerName="collect-profiles" Oct 08 
16:15:58 crc kubenswrapper[4624]: I1008 16:15:58.663886 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3d66eab-4886-4f59-aacb-c126f01f0f05" containerName="collect-profiles" Oct 08 16:15:58 crc kubenswrapper[4624]: I1008 16:15:58.665430 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t5m8w" Oct 08 16:15:58 crc kubenswrapper[4624]: I1008 16:15:58.675041 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t5m8w"] Oct 08 16:15:58 crc kubenswrapper[4624]: I1008 16:15:58.732806 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0690dc2d-8922-4071-a597-b6353ec5c928-catalog-content\") pod \"redhat-operators-t5m8w\" (UID: \"0690dc2d-8922-4071-a597-b6353ec5c928\") " pod="openshift-marketplace/redhat-operators-t5m8w" Oct 08 16:15:58 crc kubenswrapper[4624]: I1008 16:15:58.733798 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0690dc2d-8922-4071-a597-b6353ec5c928-utilities\") pod \"redhat-operators-t5m8w\" (UID: \"0690dc2d-8922-4071-a597-b6353ec5c928\") " pod="openshift-marketplace/redhat-operators-t5m8w" Oct 08 16:15:58 crc kubenswrapper[4624]: I1008 16:15:58.734006 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxqlw\" (UniqueName: \"kubernetes.io/projected/0690dc2d-8922-4071-a597-b6353ec5c928-kube-api-access-rxqlw\") pod \"redhat-operators-t5m8w\" (UID: \"0690dc2d-8922-4071-a597-b6353ec5c928\") " pod="openshift-marketplace/redhat-operators-t5m8w" Oct 08 16:15:58 crc kubenswrapper[4624]: I1008 16:15:58.837164 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0690dc2d-8922-4071-a597-b6353ec5c928-utilities\") pod \"redhat-operators-t5m8w\" (UID: \"0690dc2d-8922-4071-a597-b6353ec5c928\") " pod="openshift-marketplace/redhat-operators-t5m8w" Oct 08 16:15:58 crc kubenswrapper[4624]: I1008 16:15:58.837241 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxqlw\" (UniqueName: \"kubernetes.io/projected/0690dc2d-8922-4071-a597-b6353ec5c928-kube-api-access-rxqlw\") pod \"redhat-operators-t5m8w\" (UID: \"0690dc2d-8922-4071-a597-b6353ec5c928\") " pod="openshift-marketplace/redhat-operators-t5m8w" Oct 08 16:15:58 crc kubenswrapper[4624]: I1008 16:15:58.837325 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0690dc2d-8922-4071-a597-b6353ec5c928-catalog-content\") pod \"redhat-operators-t5m8w\" (UID: \"0690dc2d-8922-4071-a597-b6353ec5c928\") " pod="openshift-marketplace/redhat-operators-t5m8w" Oct 08 16:15:58 crc kubenswrapper[4624]: I1008 16:15:58.838454 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0690dc2d-8922-4071-a597-b6353ec5c928-utilities\") pod \"redhat-operators-t5m8w\" (UID: \"0690dc2d-8922-4071-a597-b6353ec5c928\") " pod="openshift-marketplace/redhat-operators-t5m8w" Oct 08 16:15:58 crc kubenswrapper[4624]: I1008 16:15:58.841282 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0690dc2d-8922-4071-a597-b6353ec5c928-catalog-content\") pod \"redhat-operators-t5m8w\" (UID: \"0690dc2d-8922-4071-a597-b6353ec5c928\") " pod="openshift-marketplace/redhat-operators-t5m8w" Oct 08 16:15:58 crc kubenswrapper[4624]: I1008 16:15:58.865347 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxqlw\" (UniqueName: \"kubernetes.io/projected/0690dc2d-8922-4071-a597-b6353ec5c928-kube-api-access-rxqlw\") pod \"redhat-operators-t5m8w\" (UID: \"0690dc2d-8922-4071-a597-b6353ec5c928\") " pod="openshift-marketplace/redhat-operators-t5m8w" Oct 08 16:15:59 crc kubenswrapper[4624]: I1008 16:15:59.005051 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t5m8w" Oct 08 16:15:59 crc kubenswrapper[4624]: I1008 16:15:59.707786 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t5m8w"] Oct 08 16:16:00 crc kubenswrapper[4624]: I1008 16:16:00.347060 4624 generic.go:334] "Generic (PLEG): container finished" podID="0690dc2d-8922-4071-a597-b6353ec5c928" containerID="725034e7515fd8197279d93fa90b337b9dc47f9c9ce02b5ddb326309cc20a4cc" exitCode=0 Oct 08 16:16:00 crc kubenswrapper[4624]: I1008 16:16:00.347160 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5m8w" event={"ID":"0690dc2d-8922-4071-a597-b6353ec5c928","Type":"ContainerDied","Data":"725034e7515fd8197279d93fa90b337b9dc47f9c9ce02b5ddb326309cc20a4cc"} Oct 08 16:16:00 crc kubenswrapper[4624]: I1008 16:16:00.347439 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5m8w" event={"ID":"0690dc2d-8922-4071-a597-b6353ec5c928","Type":"ContainerStarted","Data":"eca61810fcce62ffcedbd05a4c7a87dfced868f47a48160fe081ec38b7a0feeb"} Oct 08 16:16:02 crc kubenswrapper[4624]: I1008 16:16:02.369421 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5m8w" event={"ID":"0690dc2d-8922-4071-a597-b6353ec5c928","Type":"ContainerStarted","Data":"541c841cefa0c476f57e1af90984d2acae94a88e3ea220fea1ae51595838d37b"} Oct 08 16:16:06 crc kubenswrapper[4624]: I1008 16:16:06.414167 4624 generic.go:334] "Generic (PLEG): container finished" podID="0690dc2d-8922-4071-a597-b6353ec5c928" containerID="541c841cefa0c476f57e1af90984d2acae94a88e3ea220fea1ae51595838d37b" exitCode=0 Oct 08 16:16:06 crc kubenswrapper[4624]: I1008 16:16:06.414934 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5m8w" event={"ID":"0690dc2d-8922-4071-a597-b6353ec5c928","Type":"ContainerDied","Data":"541c841cefa0c476f57e1af90984d2acae94a88e3ea220fea1ae51595838d37b"} Oct 08 16:16:07 crc kubenswrapper[4624]: I1008 16:16:07.427443 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5m8w" event={"ID":"0690dc2d-8922-4071-a597-b6353ec5c928","Type":"ContainerStarted","Data":"eccabecc21c95da59b910829440df6ae30d9577e7c2096c8490355447e7897fb"} Oct 08 16:16:07 crc kubenswrapper[4624]: I1008 16:16:07.455857 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t5m8w" podStartSLOduration=2.964592754 podStartE2EDuration="9.455827968s" podCreationTimestamp="2025-10-08 16:15:58 +0000 UTC" firstStartedPulling="2025-10-08 16:16:00.349469793 +0000 UTC m=+6785.500404870" lastFinishedPulling="2025-10-08 16:16:06.840704997 +0000 UTC m=+6791.991640084" 
observedRunningTime="2025-10-08 16:16:07.447557975 +0000 UTC m=+6792.598493052" watchObservedRunningTime="2025-10-08 16:16:07.455827968 +0000 UTC m=+6792.606763045" Oct 08 16:16:09 crc kubenswrapper[4624]: I1008 16:16:09.005335 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t5m8w" Oct 08 16:16:09 crc kubenswrapper[4624]: I1008 16:16:09.005753 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t5m8w" Oct 08 16:16:10 crc kubenswrapper[4624]: I1008 16:16:10.062890 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t5m8w" podUID="0690dc2d-8922-4071-a597-b6353ec5c928" containerName="registry-server" probeResult="failure" output=< Oct 08 16:16:10 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 16:16:10 crc kubenswrapper[4624]: > Oct 08 16:16:20 crc kubenswrapper[4624]: I1008 16:16:20.057876 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t5m8w" podUID="0690dc2d-8922-4071-a597-b6353ec5c928" containerName="registry-server" probeResult="failure" output=< Oct 08 16:16:20 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 16:16:20 crc kubenswrapper[4624]: > Oct 08 16:16:29 crc kubenswrapper[4624]: I1008 16:16:29.070036 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t5m8w" Oct 08 16:16:29 crc kubenswrapper[4624]: I1008 16:16:29.133747 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t5m8w" Oct 08 16:16:29 crc kubenswrapper[4624]: I1008 16:16:29.861485 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t5m8w"] Oct 08 16:16:30 crc kubenswrapper[4624]: I1008 16:16:30.689808 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t5m8w" podUID="0690dc2d-8922-4071-a597-b6353ec5c928" containerName="registry-server" containerID="cri-o://eccabecc21c95da59b910829440df6ae30d9577e7c2096c8490355447e7897fb" gracePeriod=2 Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.497804 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t5m8w" Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.610494 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0690dc2d-8922-4071-a597-b6353ec5c928-catalog-content\") pod \"0690dc2d-8922-4071-a597-b6353ec5c928\" (UID: \"0690dc2d-8922-4071-a597-b6353ec5c928\") " Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.610717 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxqlw\" (UniqueName: \"kubernetes.io/projected/0690dc2d-8922-4071-a597-b6353ec5c928-kube-api-access-rxqlw\") pod \"0690dc2d-8922-4071-a597-b6353ec5c928\" (UID: \"0690dc2d-8922-4071-a597-b6353ec5c928\") " Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.611487 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0690dc2d-8922-4071-a597-b6353ec5c928-utilities\") pod \"0690dc2d-8922-4071-a597-b6353ec5c928\" (UID: \"0690dc2d-8922-4071-a597-b6353ec5c928\") " Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.612538 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0690dc2d-8922-4071-a597-b6353ec5c928-utilities" (OuterVolumeSpecName: "utilities") pod "0690dc2d-8922-4071-a597-b6353ec5c928" (UID: "0690dc2d-8922-4071-a597-b6353ec5c928"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.612804 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0690dc2d-8922-4071-a597-b6353ec5c928-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.621143 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0690dc2d-8922-4071-a597-b6353ec5c928-kube-api-access-rxqlw" (OuterVolumeSpecName: "kube-api-access-rxqlw") pod "0690dc2d-8922-4071-a597-b6353ec5c928" (UID: "0690dc2d-8922-4071-a597-b6353ec5c928"). InnerVolumeSpecName "kube-api-access-rxqlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.709009 4624 generic.go:334] "Generic (PLEG): container finished" podID="0690dc2d-8922-4071-a597-b6353ec5c928" containerID="eccabecc21c95da59b910829440df6ae30d9577e7c2096c8490355447e7897fb" exitCode=0 Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.709079 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5m8w" event={"ID":"0690dc2d-8922-4071-a597-b6353ec5c928","Type":"ContainerDied","Data":"eccabecc21c95da59b910829440df6ae30d9577e7c2096c8490355447e7897fb"} Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.709149 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5m8w" event={"ID":"0690dc2d-8922-4071-a597-b6353ec5c928","Type":"ContainerDied","Data":"eca61810fcce62ffcedbd05a4c7a87dfced868f47a48160fe081ec38b7a0feeb"} Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.709200 4624 scope.go:117] "RemoveContainer" containerID="eccabecc21c95da59b910829440df6ae30d9577e7c2096c8490355447e7897fb" Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.709509 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t5m8w" Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.716422 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxqlw\" (UniqueName: \"kubernetes.io/projected/0690dc2d-8922-4071-a597-b6353ec5c928-kube-api-access-rxqlw\") on node \"crc\" DevicePath \"\"" Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.728941 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0690dc2d-8922-4071-a597-b6353ec5c928-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0690dc2d-8922-4071-a597-b6353ec5c928" (UID: "0690dc2d-8922-4071-a597-b6353ec5c928"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.743599 4624 scope.go:117] "RemoveContainer" containerID="541c841cefa0c476f57e1af90984d2acae94a88e3ea220fea1ae51595838d37b" Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.766521 4624 scope.go:117] "RemoveContainer" containerID="725034e7515fd8197279d93fa90b337b9dc47f9c9ce02b5ddb326309cc20a4cc" Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.822599 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0690dc2d-8922-4071-a597-b6353ec5c928-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.833245 4624 scope.go:117] "RemoveContainer" containerID="eccabecc21c95da59b910829440df6ae30d9577e7c2096c8490355447e7897fb" Oct 08 16:16:31 crc kubenswrapper[4624]: E1008 16:16:31.834313 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eccabecc21c95da59b910829440df6ae30d9577e7c2096c8490355447e7897fb\": container with ID starting with eccabecc21c95da59b910829440df6ae30d9577e7c2096c8490355447e7897fb not found: ID does not exist" containerID="eccabecc21c95da59b910829440df6ae30d9577e7c2096c8490355447e7897fb" Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.834356 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eccabecc21c95da59b910829440df6ae30d9577e7c2096c8490355447e7897fb"} err="failed to get container status \"eccabecc21c95da59b910829440df6ae30d9577e7c2096c8490355447e7897fb\": rpc error: code = NotFound desc = could not find container \"eccabecc21c95da59b910829440df6ae30d9577e7c2096c8490355447e7897fb\": container with ID starting with eccabecc21c95da59b910829440df6ae30d9577e7c2096c8490355447e7897fb not found: ID does not exist" Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.834391 4624 scope.go:117] "RemoveContainer" containerID="541c841cefa0c476f57e1af90984d2acae94a88e3ea220fea1ae51595838d37b" Oct 08 16:16:31 crc kubenswrapper[4624]: E1008 16:16:31.834723 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"541c841cefa0c476f57e1af90984d2acae94a88e3ea220fea1ae51595838d37b\": container with ID starting with 541c841cefa0c476f57e1af90984d2acae94a88e3ea220fea1ae51595838d37b not found: ID does not exist" containerID="541c841cefa0c476f57e1af90984d2acae94a88e3ea220fea1ae51595838d37b" Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.834760 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"541c841cefa0c476f57e1af90984d2acae94a88e3ea220fea1ae51595838d37b"} err="failed 
to get container status \"541c841cefa0c476f57e1af90984d2acae94a88e3ea220fea1ae51595838d37b\": rpc error: code = NotFound desc = could not find container \"541c841cefa0c476f57e1af90984d2acae94a88e3ea220fea1ae51595838d37b\": container with ID starting with 541c841cefa0c476f57e1af90984d2acae94a88e3ea220fea1ae51595838d37b not found: ID does not exist" Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.834791 4624 scope.go:117] "RemoveContainer" containerID="725034e7515fd8197279d93fa90b337b9dc47f9c9ce02b5ddb326309cc20a4cc" Oct 08 16:16:31 crc kubenswrapper[4624]: E1008 16:16:31.835216 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"725034e7515fd8197279d93fa90b337b9dc47f9c9ce02b5ddb326309cc20a4cc\": container with ID starting with 725034e7515fd8197279d93fa90b337b9dc47f9c9ce02b5ddb326309cc20a4cc not found: ID does not exist" containerID="725034e7515fd8197279d93fa90b337b9dc47f9c9ce02b5ddb326309cc20a4cc" Oct 08 16:16:31 crc kubenswrapper[4624]: I1008 16:16:31.835295 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"725034e7515fd8197279d93fa90b337b9dc47f9c9ce02b5ddb326309cc20a4cc"} err="failed to get container status \"725034e7515fd8197279d93fa90b337b9dc47f9c9ce02b5ddb326309cc20a4cc\": rpc error: code = NotFound desc = could not find container \"725034e7515fd8197279d93fa90b337b9dc47f9c9ce02b5ddb326309cc20a4cc\": container with ID starting with 725034e7515fd8197279d93fa90b337b9dc47f9c9ce02b5ddb326309cc20a4cc not found: ID does not exist" Oct 08 16:16:32 crc kubenswrapper[4624]: I1008 16:16:32.053239 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t5m8w"] Oct 08 16:16:32 crc kubenswrapper[4624]: I1008 16:16:32.064299 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t5m8w"] Oct 08 16:16:33 crc kubenswrapper[4624]: I1008 16:16:33.479320 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0690dc2d-8922-4071-a597-b6353ec5c928" path="/var/lib/kubelet/pods/0690dc2d-8922-4071-a597-b6353ec5c928/volumes" Oct 08 16:17:30 crc kubenswrapper[4624]: I1008 16:17:30.076260 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 16:17:30 crc kubenswrapper[4624]: I1008 16:17:30.078674 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 16:18:00 crc kubenswrapper[4624]: I1008 16:18:00.076199 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 16:18:00 crc kubenswrapper[4624]: I1008 16:18:00.076819 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 16:18:30 crc kubenswrapper[4624]: I1008 16:18:30.076807 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 16:18:30 crc kubenswrapper[4624]: I1008 16:18:30.077532 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 16:18:30 crc kubenswrapper[4624]: I1008 16:18:30.077601 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 16:18:30 crc kubenswrapper[4624]: I1008 16:18:30.078916 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"52c18357016acbbd6b1e1648f48871179024590dde11b0c085b4a2ee195c6bad"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 16:18:30 crc kubenswrapper[4624]: I1008 16:18:30.079002 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://52c18357016acbbd6b1e1648f48871179024590dde11b0c085b4a2ee195c6bad" gracePeriod=600 Oct 08 16:18:31 crc kubenswrapper[4624]: I1008 16:18:31.016465 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="52c18357016acbbd6b1e1648f48871179024590dde11b0c085b4a2ee195c6bad" exitCode=0 Oct 08 16:18:31 crc kubenswrapper[4624]: I1008 16:18:31.016566 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"52c18357016acbbd6b1e1648f48871179024590dde11b0c085b4a2ee195c6bad"} Oct 08 16:18:31 crc kubenswrapper[4624]: I1008 16:18:31.017386 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071"} Oct 08 16:18:31 crc kubenswrapper[4624]: I1008 16:18:31.017475 4624 scope.go:117] "RemoveContainer" containerID="282d73447a5507cd59c29b40e2cc4a40a06e41a6cfd4d16c73f2123081c89589" Oct 08 16:20:16 crc kubenswrapper[4624]: I1008 16:20:16.362230 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6dqgv"] Oct 08 16:20:16 crc kubenswrapper[4624]: E1008 16:20:16.363543 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0690dc2d-8922-4071-a597-b6353ec5c928" containerName="extract-utilities" Oct 08 16:20:16 crc kubenswrapper[4624]: I1008 16:20:16.363573 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="0690dc2d-8922-4071-a597-b6353ec5c928" 
containerName="extract-utilities" Oct 08 16:20:16 crc kubenswrapper[4624]: E1008 16:20:16.363590 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0690dc2d-8922-4071-a597-b6353ec5c928" containerName="extract-content" Oct 08 16:20:16 crc kubenswrapper[4624]: I1008 16:20:16.363602 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="0690dc2d-8922-4071-a597-b6353ec5c928" containerName="extract-content" Oct 08 16:20:16 crc kubenswrapper[4624]: E1008 16:20:16.363664 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0690dc2d-8922-4071-a597-b6353ec5c928" containerName="registry-server" Oct 08 16:20:16 crc kubenswrapper[4624]: I1008 16:20:16.363677 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="0690dc2d-8922-4071-a597-b6353ec5c928" containerName="registry-server" Oct 08 16:20:16 crc kubenswrapper[4624]: I1008 16:20:16.364072 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="0690dc2d-8922-4071-a597-b6353ec5c928" containerName="registry-server" Oct 08 16:20:16 crc kubenswrapper[4624]: I1008 16:20:16.366606 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dqgv" Oct 08 16:20:16 crc kubenswrapper[4624]: I1008 16:20:16.376116 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dqgv"] Oct 08 16:20:16 crc kubenswrapper[4624]: I1008 16:20:16.485483 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddc8c61c-a417-4d93-8612-17f40a0bf0f4-catalog-content\") pod \"redhat-marketplace-6dqgv\" (UID: \"ddc8c61c-a417-4d93-8612-17f40a0bf0f4\") " pod="openshift-marketplace/redhat-marketplace-6dqgv" Oct 08 16:20:16 crc kubenswrapper[4624]: I1008 16:20:16.485851 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddc8c61c-a417-4d93-8612-17f40a0bf0f4-utilities\") pod \"redhat-marketplace-6dqgv\" (UID: \"ddc8c61c-a417-4d93-8612-17f40a0bf0f4\") " pod="openshift-marketplace/redhat-marketplace-6dqgv" Oct 08 16:20:16 crc kubenswrapper[4624]: I1008 16:20:16.485896 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z9k5\" (UniqueName: \"kubernetes.io/projected/ddc8c61c-a417-4d93-8612-17f40a0bf0f4-kube-api-access-9z9k5\") pod \"redhat-marketplace-6dqgv\" (UID: \"ddc8c61c-a417-4d93-8612-17f40a0bf0f4\") " pod="openshift-marketplace/redhat-marketplace-6dqgv" Oct 08 16:20:16 crc kubenswrapper[4624]: I1008 16:20:16.589315 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddc8c61c-a417-4d93-8612-17f40a0bf0f4-utilities\") pod \"redhat-marketplace-6dqgv\" (UID: \"ddc8c61c-a417-4d93-8612-17f40a0bf0f4\") " pod="openshift-marketplace/redhat-marketplace-6dqgv" Oct 08 16:20:16 crc kubenswrapper[4624]: I1008 16:20:16.589402 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z9k5\" (UniqueName: \"kubernetes.io/projected/ddc8c61c-a417-4d93-8612-17f40a0bf0f4-kube-api-access-9z9k5\") pod \"redhat-marketplace-6dqgv\" (UID: \"ddc8c61c-a417-4d93-8612-17f40a0bf0f4\") " pod="openshift-marketplace/redhat-marketplace-6dqgv" Oct 08 16:20:16 crc kubenswrapper[4624]: I1008 16:20:16.589666 4624 reconciler_common.go:218] "operationExecutor.MountVolume 
Oct 08 16:20:16 crc kubenswrapper[4624]: I1008 16:20:16.693969 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dqgv"
Oct 08 16:20:17 crc kubenswrapper[4624]: I1008 16:20:17.205440 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dqgv"]
Oct 08 16:20:18 crc kubenswrapper[4624]: I1008 16:20:18.170835 4624 generic.go:334] "Generic (PLEG): container finished" podID="ddc8c61c-a417-4d93-8612-17f40a0bf0f4" containerID="2c3452b66c53f4c1fd5d38a695cfa680c7a61f54ab35177cbd95d274a43a6cc2" exitCode=0
Oct 08 16:20:18 crc kubenswrapper[4624]: I1008 16:20:18.170942 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dqgv" event={"ID":"ddc8c61c-a417-4d93-8612-17f40a0bf0f4","Type":"ContainerDied","Data":"2c3452b66c53f4c1fd5d38a695cfa680c7a61f54ab35177cbd95d274a43a6cc2"}
Oct 08 16:20:18 crc kubenswrapper[4624]: I1008 16:20:18.171227 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dqgv" event={"ID":"ddc8c61c-a417-4d93-8612-17f40a0bf0f4","Type":"ContainerStarted","Data":"09e0d032751a895c9353eb56ef7c3b96934327ca7c63ccbeeba9dd1e64c558f5"}
Oct 08 16:20:18 crc kubenswrapper[4624]: I1008 16:20:18.173661 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 08 16:20:20 crc kubenswrapper[4624]: I1008 16:20:20.208015 4624 generic.go:334] "Generic (PLEG): container finished" podID="ddc8c61c-a417-4d93-8612-17f40a0bf0f4" containerID="d734339d8d2f91af87e393598fffc3efeee9fb3cdaed2818203cc94051fcd0ed" exitCode=0
Oct 08 16:20:20 crc kubenswrapper[4624]: I1008 16:20:20.209105 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dqgv" event={"ID":"ddc8c61c-a417-4d93-8612-17f40a0bf0f4","Type":"ContainerDied","Data":"d734339d8d2f91af87e393598fffc3efeee9fb3cdaed2818203cc94051fcd0ed"}
Oct 08 16:20:21 crc kubenswrapper[4624]: I1008 16:20:21.131285 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bsmps"]
Oct 08 16:20:21 crc kubenswrapper[4624]: I1008 16:20:21.134709 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsmps"
Oct 08 16:20:21 crc kubenswrapper[4624]: I1008 16:20:21.150428 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bsmps"]
Oct 08 16:20:21 crc kubenswrapper[4624]: I1008 16:20:21.225821 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dqgv" event={"ID":"ddc8c61c-a417-4d93-8612-17f40a0bf0f4","Type":"ContainerStarted","Data":"83eee12626d2922ce64ab0b30cb2ab372cf727e9041964eec594bde3b2f05536"}
Oct 08 16:20:21 crc kubenswrapper[4624]: I1008 16:20:21.255862 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6dqgv" podStartSLOduration=2.820304636 podStartE2EDuration="5.255828416s" podCreationTimestamp="2025-10-08 16:20:16 +0000 UTC" firstStartedPulling="2025-10-08 16:20:18.173147426 +0000 UTC m=+7043.324082513" lastFinishedPulling="2025-10-08 16:20:20.608671226 +0000 UTC m=+7045.759606293" observedRunningTime="2025-10-08 16:20:21.244900555 +0000 UTC m=+7046.395835642" watchObservedRunningTime="2025-10-08 16:20:21.255828416 +0000 UTC m=+7046.406763513"
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1590c57-8518-4ce4-a823-9f5ab80fa16e-utilities\") pod \"certified-operators-bsmps\" (UID: \"a1590c57-8518-4ce4-a823-9f5ab80fa16e\") " pod="openshift-marketplace/certified-operators-bsmps" Oct 08 16:20:21 crc kubenswrapper[4624]: I1008 16:20:21.405594 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1590c57-8518-4ce4-a823-9f5ab80fa16e-catalog-content\") pod \"certified-operators-bsmps\" (UID: \"a1590c57-8518-4ce4-a823-9f5ab80fa16e\") " pod="openshift-marketplace/certified-operators-bsmps" Oct 08 16:20:21 crc kubenswrapper[4624]: I1008 16:20:21.405669 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1590c57-8518-4ce4-a823-9f5ab80fa16e-utilities\") pod \"certified-operators-bsmps\" (UID: \"a1590c57-8518-4ce4-a823-9f5ab80fa16e\") " pod="openshift-marketplace/certified-operators-bsmps" Oct 08 16:20:21 crc kubenswrapper[4624]: I1008 16:20:21.428255 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf2jq\" (UniqueName: \"kubernetes.io/projected/a1590c57-8518-4ce4-a823-9f5ab80fa16e-kube-api-access-kf2jq\") pod \"certified-operators-bsmps\" (UID: \"a1590c57-8518-4ce4-a823-9f5ab80fa16e\") " pod="openshift-marketplace/certified-operators-bsmps" Oct 08 16:20:21 crc kubenswrapper[4624]: I1008 16:20:21.462442 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsmps" Oct 08 16:20:22 crc kubenswrapper[4624]: I1008 16:20:22.237282 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bsmps"] Oct 08 16:20:23 crc kubenswrapper[4624]: I1008 16:20:23.254814 4624 generic.go:334] "Generic (PLEG): container finished" podID="a1590c57-8518-4ce4-a823-9f5ab80fa16e" containerID="bcf4ce05fac68b1244972022266f9d81e1464f03060f94f24b1626eb7572728c" exitCode=0 Oct 08 16:20:23 crc kubenswrapper[4624]: I1008 16:20:23.254986 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsmps" event={"ID":"a1590c57-8518-4ce4-a823-9f5ab80fa16e","Type":"ContainerDied","Data":"bcf4ce05fac68b1244972022266f9d81e1464f03060f94f24b1626eb7572728c"} Oct 08 16:20:23 crc kubenswrapper[4624]: I1008 16:20:23.255261 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsmps" event={"ID":"a1590c57-8518-4ce4-a823-9f5ab80fa16e","Type":"ContainerStarted","Data":"196830f7199ee8b0d4fc62ac76db69e2c30ba2fec38d1b66cdee6a1c672fcfba"} Oct 08 16:20:25 crc kubenswrapper[4624]: I1008 16:20:25.301622 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsmps" event={"ID":"a1590c57-8518-4ce4-a823-9f5ab80fa16e","Type":"ContainerStarted","Data":"f1aa328a635d2f628d3c110671261416a102a2f5b1d99949e4302f31306a5058"} Oct 08 16:20:26 crc kubenswrapper[4624]: I1008 16:20:26.313512 4624 generic.go:334] "Generic (PLEG): container finished" podID="a1590c57-8518-4ce4-a823-9f5ab80fa16e" containerID="f1aa328a635d2f628d3c110671261416a102a2f5b1d99949e4302f31306a5058" exitCode=0 Oct 08 16:20:26 crc kubenswrapper[4624]: I1008 16:20:26.313697 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsmps" 
event={"ID":"a1590c57-8518-4ce4-a823-9f5ab80fa16e","Type":"ContainerDied","Data":"f1aa328a635d2f628d3c110671261416a102a2f5b1d99949e4302f31306a5058"} Oct 08 16:20:26 crc kubenswrapper[4624]: I1008 16:20:26.695309 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6dqgv" Oct 08 16:20:26 crc kubenswrapper[4624]: I1008 16:20:26.695379 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6dqgv" Oct 08 16:20:26 crc kubenswrapper[4624]: I1008 16:20:26.748574 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6dqgv" Oct 08 16:20:27 crc kubenswrapper[4624]: I1008 16:20:27.328928 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsmps" event={"ID":"a1590c57-8518-4ce4-a823-9f5ab80fa16e","Type":"ContainerStarted","Data":"890ac07db055e4c194deca7a8c04c9c0f5ca6bbf35596c1217322eb1bef2e258"} Oct 08 16:20:27 crc kubenswrapper[4624]: I1008 16:20:27.361156 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bsmps" podStartSLOduration=2.778085794 podStartE2EDuration="6.361132083s" podCreationTimestamp="2025-10-08 16:20:21 +0000 UTC" firstStartedPulling="2025-10-08 16:20:23.257108566 +0000 UTC m=+7048.408043643" lastFinishedPulling="2025-10-08 16:20:26.840154855 +0000 UTC m=+7051.991089932" observedRunningTime="2025-10-08 16:20:27.357030788 +0000 UTC m=+7052.507965875" watchObservedRunningTime="2025-10-08 16:20:27.361132083 +0000 UTC m=+7052.512067160" Oct 08 16:20:27 crc kubenswrapper[4624]: I1008 16:20:27.391924 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6dqgv" Oct 08 16:20:29 crc kubenswrapper[4624]: I1008 16:20:29.123700 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dqgv"] Oct 08 16:20:29 crc kubenswrapper[4624]: I1008 16:20:29.345453 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6dqgv" podUID="ddc8c61c-a417-4d93-8612-17f40a0bf0f4" containerName="registry-server" containerID="cri-o://83eee12626d2922ce64ab0b30cb2ab372cf727e9041964eec594bde3b2f05536" gracePeriod=2 Oct 08 16:20:29 crc kubenswrapper[4624]: I1008 16:20:29.969556 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dqgv" Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.076305 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.076622 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.111211 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z9k5\" (UniqueName: \"kubernetes.io/projected/ddc8c61c-a417-4d93-8612-17f40a0bf0f4-kube-api-access-9z9k5\") pod \"ddc8c61c-a417-4d93-8612-17f40a0bf0f4\" (UID: \"ddc8c61c-a417-4d93-8612-17f40a0bf0f4\") " Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.111323 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddc8c61c-a417-4d93-8612-17f40a0bf0f4-catalog-content\") pod \"ddc8c61c-a417-4d93-8612-17f40a0bf0f4\" (UID: \"ddc8c61c-a417-4d93-8612-17f40a0bf0f4\") " Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.111410 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddc8c61c-a417-4d93-8612-17f40a0bf0f4-utilities\") pod \"ddc8c61c-a417-4d93-8612-17f40a0bf0f4\" (UID: \"ddc8c61c-a417-4d93-8612-17f40a0bf0f4\") " Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.112309 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddc8c61c-a417-4d93-8612-17f40a0bf0f4-utilities" (OuterVolumeSpecName: "utilities") pod "ddc8c61c-a417-4d93-8612-17f40a0bf0f4" (UID: "ddc8c61c-a417-4d93-8612-17f40a0bf0f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.127487 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddc8c61c-a417-4d93-8612-17f40a0bf0f4-kube-api-access-9z9k5" (OuterVolumeSpecName: "kube-api-access-9z9k5") pod "ddc8c61c-a417-4d93-8612-17f40a0bf0f4" (UID: "ddc8c61c-a417-4d93-8612-17f40a0bf0f4"). InnerVolumeSpecName "kube-api-access-9z9k5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.128043 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddc8c61c-a417-4d93-8612-17f40a0bf0f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ddc8c61c-a417-4d93-8612-17f40a0bf0f4" (UID: "ddc8c61c-a417-4d93-8612-17f40a0bf0f4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.214180 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddc8c61c-a417-4d93-8612-17f40a0bf0f4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.214218 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddc8c61c-a417-4d93-8612-17f40a0bf0f4-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.214230 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z9k5\" (UniqueName: \"kubernetes.io/projected/ddc8c61c-a417-4d93-8612-17f40a0bf0f4-kube-api-access-9z9k5\") on node \"crc\" DevicePath \"\"" Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.357436 4624 generic.go:334] "Generic (PLEG): container finished" podID="ddc8c61c-a417-4d93-8612-17f40a0bf0f4" containerID="83eee12626d2922ce64ab0b30cb2ab372cf727e9041964eec594bde3b2f05536" exitCode=0 Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.357500 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dqgv" event={"ID":"ddc8c61c-a417-4d93-8612-17f40a0bf0f4","Type":"ContainerDied","Data":"83eee12626d2922ce64ab0b30cb2ab372cf727e9041964eec594bde3b2f05536"} Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.357536 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dqgv" event={"ID":"ddc8c61c-a417-4d93-8612-17f40a0bf0f4","Type":"ContainerDied","Data":"09e0d032751a895c9353eb56ef7c3b96934327ca7c63ccbeeba9dd1e64c558f5"} Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.357560 4624 scope.go:117] "RemoveContainer" containerID="83eee12626d2922ce64ab0b30cb2ab372cf727e9041964eec594bde3b2f05536" Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.357761 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dqgv" Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.396363 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dqgv"] Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.400799 4624 scope.go:117] "RemoveContainer" containerID="d734339d8d2f91af87e393598fffc3efeee9fb3cdaed2818203cc94051fcd0ed" Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.405822 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dqgv"] Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.425048 4624 scope.go:117] "RemoveContainer" containerID="2c3452b66c53f4c1fd5d38a695cfa680c7a61f54ab35177cbd95d274a43a6cc2" Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.480160 4624 scope.go:117] "RemoveContainer" containerID="83eee12626d2922ce64ab0b30cb2ab372cf727e9041964eec594bde3b2f05536" Oct 08 16:20:30 crc kubenswrapper[4624]: E1008 16:20:30.480977 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83eee12626d2922ce64ab0b30cb2ab372cf727e9041964eec594bde3b2f05536\": container with ID starting with 83eee12626d2922ce64ab0b30cb2ab372cf727e9041964eec594bde3b2f05536 not found: ID does not exist" containerID="83eee12626d2922ce64ab0b30cb2ab372cf727e9041964eec594bde3b2f05536" Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.481063 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83eee12626d2922ce64ab0b30cb2ab372cf727e9041964eec594bde3b2f05536"} err="failed to get container status \"83eee12626d2922ce64ab0b30cb2ab372cf727e9041964eec594bde3b2f05536\": rpc error: code = NotFound desc = could not find container \"83eee12626d2922ce64ab0b30cb2ab372cf727e9041964eec594bde3b2f05536\": container with ID starting with 83eee12626d2922ce64ab0b30cb2ab372cf727e9041964eec594bde3b2f05536 not found: ID does not exist" Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.481109 4624 scope.go:117] "RemoveContainer" containerID="d734339d8d2f91af87e393598fffc3efeee9fb3cdaed2818203cc94051fcd0ed" Oct 08 16:20:30 crc kubenswrapper[4624]: E1008 16:20:30.481787 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d734339d8d2f91af87e393598fffc3efeee9fb3cdaed2818203cc94051fcd0ed\": container with ID starting with d734339d8d2f91af87e393598fffc3efeee9fb3cdaed2818203cc94051fcd0ed not found: ID does not exist" containerID="d734339d8d2f91af87e393598fffc3efeee9fb3cdaed2818203cc94051fcd0ed" Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.481812 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d734339d8d2f91af87e393598fffc3efeee9fb3cdaed2818203cc94051fcd0ed"} err="failed to get container status \"d734339d8d2f91af87e393598fffc3efeee9fb3cdaed2818203cc94051fcd0ed\": rpc error: code = NotFound desc = could not find container \"d734339d8d2f91af87e393598fffc3efeee9fb3cdaed2818203cc94051fcd0ed\": container with ID starting with d734339d8d2f91af87e393598fffc3efeee9fb3cdaed2818203cc94051fcd0ed not found: ID does not exist" Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.481826 4624 scope.go:117] "RemoveContainer" containerID="2c3452b66c53f4c1fd5d38a695cfa680c7a61f54ab35177cbd95d274a43a6cc2" Oct 08 16:20:30 crc kubenswrapper[4624]: E1008 16:20:30.482325 4624 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2c3452b66c53f4c1fd5d38a695cfa680c7a61f54ab35177cbd95d274a43a6cc2\": container with ID starting with 2c3452b66c53f4c1fd5d38a695cfa680c7a61f54ab35177cbd95d274a43a6cc2 not found: ID does not exist" containerID="2c3452b66c53f4c1fd5d38a695cfa680c7a61f54ab35177cbd95d274a43a6cc2" Oct 08 16:20:30 crc kubenswrapper[4624]: I1008 16:20:30.482365 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c3452b66c53f4c1fd5d38a695cfa680c7a61f54ab35177cbd95d274a43a6cc2"} err="failed to get container status \"2c3452b66c53f4c1fd5d38a695cfa680c7a61f54ab35177cbd95d274a43a6cc2\": rpc error: code = NotFound desc = could not find container \"2c3452b66c53f4c1fd5d38a695cfa680c7a61f54ab35177cbd95d274a43a6cc2\": container with ID starting with 2c3452b66c53f4c1fd5d38a695cfa680c7a61f54ab35177cbd95d274a43a6cc2 not found: ID does not exist" Oct 08 16:20:31 crc kubenswrapper[4624]: I1008 16:20:31.462824 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bsmps" Oct 08 16:20:31 crc kubenswrapper[4624]: I1008 16:20:31.463170 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bsmps" Oct 08 16:20:31 crc kubenswrapper[4624]: I1008 16:20:31.481770 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddc8c61c-a417-4d93-8612-17f40a0bf0f4" path="/var/lib/kubelet/pods/ddc8c61c-a417-4d93-8612-17f40a0bf0f4/volumes" Oct 08 16:20:31 crc kubenswrapper[4624]: I1008 16:20:31.526074 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bsmps" Oct 08 16:20:32 crc kubenswrapper[4624]: I1008 16:20:32.429888 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bsmps" Oct 08 16:20:33 crc kubenswrapper[4624]: I1008 16:20:33.518873 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bsmps"] Oct 08 16:20:34 crc kubenswrapper[4624]: I1008 16:20:34.399196 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bsmps" podUID="a1590c57-8518-4ce4-a823-9f5ab80fa16e" containerName="registry-server" containerID="cri-o://890ac07db055e4c194deca7a8c04c9c0f5ca6bbf35596c1217322eb1bef2e258" gracePeriod=2 Oct 08 16:20:34 crc kubenswrapper[4624]: E1008 16:20:34.674946 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1590c57_8518_4ce4_a823_9f5ab80fa16e.slice/crio-890ac07db055e4c194deca7a8c04c9c0f5ca6bbf35596c1217322eb1bef2e258.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1590c57_8518_4ce4_a823_9f5ab80fa16e.slice/crio-conmon-890ac07db055e4c194deca7a8c04c9c0f5ca6bbf35596c1217322eb1bef2e258.scope\": RecentStats: unable to find data in memory cache]" Oct 08 16:20:34 crc kubenswrapper[4624]: I1008 16:20:34.973077 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bsmps" Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.125190 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1590c57-8518-4ce4-a823-9f5ab80fa16e-utilities\") pod \"a1590c57-8518-4ce4-a823-9f5ab80fa16e\" (UID: \"a1590c57-8518-4ce4-a823-9f5ab80fa16e\") " Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.125738 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1590c57-8518-4ce4-a823-9f5ab80fa16e-catalog-content\") pod \"a1590c57-8518-4ce4-a823-9f5ab80fa16e\" (UID: \"a1590c57-8518-4ce4-a823-9f5ab80fa16e\") " Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.125843 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf2jq\" (UniqueName: \"kubernetes.io/projected/a1590c57-8518-4ce4-a823-9f5ab80fa16e-kube-api-access-kf2jq\") pod \"a1590c57-8518-4ce4-a823-9f5ab80fa16e\" (UID: \"a1590c57-8518-4ce4-a823-9f5ab80fa16e\") " Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.126469 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1590c57-8518-4ce4-a823-9f5ab80fa16e-utilities" (OuterVolumeSpecName: "utilities") pod "a1590c57-8518-4ce4-a823-9f5ab80fa16e" (UID: "a1590c57-8518-4ce4-a823-9f5ab80fa16e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.127243 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1590c57-8518-4ce4-a823-9f5ab80fa16e-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.135155 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1590c57-8518-4ce4-a823-9f5ab80fa16e-kube-api-access-kf2jq" (OuterVolumeSpecName: "kube-api-access-kf2jq") pod "a1590c57-8518-4ce4-a823-9f5ab80fa16e" (UID: "a1590c57-8518-4ce4-a823-9f5ab80fa16e"). InnerVolumeSpecName "kube-api-access-kf2jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.189926 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1590c57-8518-4ce4-a823-9f5ab80fa16e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1590c57-8518-4ce4-a823-9f5ab80fa16e" (UID: "a1590c57-8518-4ce4-a823-9f5ab80fa16e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.229001 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1590c57-8518-4ce4-a823-9f5ab80fa16e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.229035 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf2jq\" (UniqueName: \"kubernetes.io/projected/a1590c57-8518-4ce4-a823-9f5ab80fa16e-kube-api-access-kf2jq\") on node \"crc\" DevicePath \"\"" Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.411315 4624 generic.go:334] "Generic (PLEG): container finished" podID="a1590c57-8518-4ce4-a823-9f5ab80fa16e" containerID="890ac07db055e4c194deca7a8c04c9c0f5ca6bbf35596c1217322eb1bef2e258" exitCode=0 Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.411370 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsmps" event={"ID":"a1590c57-8518-4ce4-a823-9f5ab80fa16e","Type":"ContainerDied","Data":"890ac07db055e4c194deca7a8c04c9c0f5ca6bbf35596c1217322eb1bef2e258"} Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.411408 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsmps" event={"ID":"a1590c57-8518-4ce4-a823-9f5ab80fa16e","Type":"ContainerDied","Data":"196830f7199ee8b0d4fc62ac76db69e2c30ba2fec38d1b66cdee6a1c672fcfba"} Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.411398 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsmps" Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.411430 4624 scope.go:117] "RemoveContainer" containerID="890ac07db055e4c194deca7a8c04c9c0f5ca6bbf35596c1217322eb1bef2e258" Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.454988 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bsmps"] Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.460562 4624 scope.go:117] "RemoveContainer" containerID="f1aa328a635d2f628d3c110671261416a102a2f5b1d99949e4302f31306a5058" Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.464420 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bsmps"] Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.485225 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1590c57-8518-4ce4-a823-9f5ab80fa16e" path="/var/lib/kubelet/pods/a1590c57-8518-4ce4-a823-9f5ab80fa16e/volumes" Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.489368 4624 scope.go:117] "RemoveContainer" containerID="bcf4ce05fac68b1244972022266f9d81e1464f03060f94f24b1626eb7572728c" Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.530518 4624 scope.go:117] "RemoveContainer" containerID="890ac07db055e4c194deca7a8c04c9c0f5ca6bbf35596c1217322eb1bef2e258" Oct 08 16:20:35 crc kubenswrapper[4624]: E1008 16:20:35.531291 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"890ac07db055e4c194deca7a8c04c9c0f5ca6bbf35596c1217322eb1bef2e258\": container with ID starting with 890ac07db055e4c194deca7a8c04c9c0f5ca6bbf35596c1217322eb1bef2e258 not found: ID does not exist" containerID="890ac07db055e4c194deca7a8c04c9c0f5ca6bbf35596c1217322eb1bef2e258" Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.531346 4624 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"890ac07db055e4c194deca7a8c04c9c0f5ca6bbf35596c1217322eb1bef2e258"} err="failed to get container status \"890ac07db055e4c194deca7a8c04c9c0f5ca6bbf35596c1217322eb1bef2e258\": rpc error: code = NotFound desc = could not find container \"890ac07db055e4c194deca7a8c04c9c0f5ca6bbf35596c1217322eb1bef2e258\": container with ID starting with 890ac07db055e4c194deca7a8c04c9c0f5ca6bbf35596c1217322eb1bef2e258 not found: ID does not exist" Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.531385 4624 scope.go:117] "RemoveContainer" containerID="f1aa328a635d2f628d3c110671261416a102a2f5b1d99949e4302f31306a5058" Oct 08 16:20:35 crc kubenswrapper[4624]: E1008 16:20:35.532118 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1aa328a635d2f628d3c110671261416a102a2f5b1d99949e4302f31306a5058\": container with ID starting with f1aa328a635d2f628d3c110671261416a102a2f5b1d99949e4302f31306a5058 not found: ID does not exist" containerID="f1aa328a635d2f628d3c110671261416a102a2f5b1d99949e4302f31306a5058" Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.532188 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1aa328a635d2f628d3c110671261416a102a2f5b1d99949e4302f31306a5058"} err="failed to get container status \"f1aa328a635d2f628d3c110671261416a102a2f5b1d99949e4302f31306a5058\": rpc error: code = NotFound desc = could not find container \"f1aa328a635d2f628d3c110671261416a102a2f5b1d99949e4302f31306a5058\": container with ID starting with f1aa328a635d2f628d3c110671261416a102a2f5b1d99949e4302f31306a5058 not found: ID does not exist" Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.532236 4624 scope.go:117] "RemoveContainer" containerID="bcf4ce05fac68b1244972022266f9d81e1464f03060f94f24b1626eb7572728c" Oct 08 16:20:35 crc kubenswrapper[4624]: E1008 16:20:35.532743 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcf4ce05fac68b1244972022266f9d81e1464f03060f94f24b1626eb7572728c\": container with ID starting with bcf4ce05fac68b1244972022266f9d81e1464f03060f94f24b1626eb7572728c not found: ID does not exist" containerID="bcf4ce05fac68b1244972022266f9d81e1464f03060f94f24b1626eb7572728c" Oct 08 16:20:35 crc kubenswrapper[4624]: I1008 16:20:35.532778 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcf4ce05fac68b1244972022266f9d81e1464f03060f94f24b1626eb7572728c"} err="failed to get container status \"bcf4ce05fac68b1244972022266f9d81e1464f03060f94f24b1626eb7572728c\": rpc error: code = NotFound desc = could not find container \"bcf4ce05fac68b1244972022266f9d81e1464f03060f94f24b1626eb7572728c\": container with ID starting with bcf4ce05fac68b1244972022266f9d81e1464f03060f94f24b1626eb7572728c not found: ID does not exist" Oct 08 16:21:00 crc kubenswrapper[4624]: I1008 16:21:00.076571 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 16:21:00 crc kubenswrapper[4624]: I1008 16:21:00.077895 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" 
podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 16:21:30 crc kubenswrapper[4624]: I1008 16:21:30.076903 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 16:21:30 crc kubenswrapper[4624]: I1008 16:21:30.077520 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 16:21:30 crc kubenswrapper[4624]: I1008 16:21:30.077588 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 16:21:30 crc kubenswrapper[4624]: I1008 16:21:30.078604 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 16:21:30 crc kubenswrapper[4624]: I1008 16:21:30.078691 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071" gracePeriod=600 Oct 08 16:21:30 crc kubenswrapper[4624]: E1008 16:21:30.208258 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:21:30 crc kubenswrapper[4624]: I1008 16:21:30.996121 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071" exitCode=0 Oct 08 16:21:30 crc kubenswrapper[4624]: I1008 16:21:30.996205 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071"} Oct 08 16:21:30 crc kubenswrapper[4624]: I1008 16:21:30.996705 4624 scope.go:117] "RemoveContainer" containerID="52c18357016acbbd6b1e1648f48871179024590dde11b0c085b4a2ee195c6bad" Oct 08 16:21:30 crc kubenswrapper[4624]: I1008 16:21:30.997579 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071" Oct 08 16:21:30 crc kubenswrapper[4624]: E1008 16:21:30.997928 4624 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:21:43 crc kubenswrapper[4624]: I1008 16:21:43.466591 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071" Oct 08 16:21:43 crc kubenswrapper[4624]: E1008 16:21:43.467514 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:21:54 crc kubenswrapper[4624]: I1008 16:21:54.466546 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071" Oct 08 16:21:54 crc kubenswrapper[4624]: E1008 16:21:54.467719 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:22:06 crc kubenswrapper[4624]: I1008 16:22:06.465894 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071" Oct 08 16:22:06 crc kubenswrapper[4624]: E1008 16:22:06.466760 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:22:17 crc kubenswrapper[4624]: I1008 16:22:17.466446 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071" Oct 08 16:22:17 crc kubenswrapper[4624]: E1008 16:22:17.467071 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:22:28 crc kubenswrapper[4624]: I1008 16:22:28.465595 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071" Oct 08 16:22:28 crc kubenswrapper[4624]: E1008 16:22:28.466417 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Oct 08 16:22:40 crc kubenswrapper[4624]: I1008 16:22:40.467074 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071"
Oct 08 16:22:40 crc kubenswrapper[4624]: E1008 16:22:40.468079 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:22:52 crc kubenswrapper[4624]: I1008 16:22:52.466777 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071"
Oct 08 16:22:52 crc kubenswrapper[4624]: E1008 16:22:52.467833 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:23:07 crc kubenswrapper[4624]: I1008 16:23:07.466376 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071"
Oct 08 16:23:07 crc kubenswrapper[4624]: E1008 16:23:07.467244 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:23:20 crc kubenswrapper[4624]: I1008 16:23:20.465897 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071"
Oct 08 16:23:20 crc kubenswrapper[4624]: E1008 16:23:20.466670 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:23:31 crc kubenswrapper[4624]: I1008 16:23:31.466823 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071"
Oct 08 16:23:31 crc kubenswrapper[4624]: E1008 16:23:31.467856 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:23:44 crc kubenswrapper[4624]: I1008 16:23:44.466331 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071"
Oct 08 16:23:44 crc kubenswrapper[4624]: E1008 16:23:44.468453 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:23:47 crc kubenswrapper[4624]: I1008 16:23:47.345340 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lqcgf"]
Oct 08 16:23:47 crc kubenswrapper[4624]: E1008 16:23:47.346220 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1590c57-8518-4ce4-a823-9f5ab80fa16e" containerName="extract-content"
Oct 08 16:23:47 crc kubenswrapper[4624]: I1008 16:23:47.346240 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1590c57-8518-4ce4-a823-9f5ab80fa16e" containerName="extract-content"
Oct 08 16:23:47 crc kubenswrapper[4624]: E1008 16:23:47.346263 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1590c57-8518-4ce4-a823-9f5ab80fa16e" containerName="registry-server"
Oct 08 16:23:47 crc kubenswrapper[4624]: I1008 16:23:47.346270 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1590c57-8518-4ce4-a823-9f5ab80fa16e" containerName="registry-server"
Oct 08 16:23:47 crc kubenswrapper[4624]: E1008 16:23:47.346287 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddc8c61c-a417-4d93-8612-17f40a0bf0f4" containerName="extract-utilities"
Oct 08 16:23:47 crc kubenswrapper[4624]: I1008 16:23:47.346296 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddc8c61c-a417-4d93-8612-17f40a0bf0f4" containerName="extract-utilities"
Oct 08 16:23:47 crc kubenswrapper[4624]: E1008 16:23:47.346323 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddc8c61c-a417-4d93-8612-17f40a0bf0f4" containerName="registry-server"
Oct 08 16:23:47 crc kubenswrapper[4624]: I1008 16:23:47.346331 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddc8c61c-a417-4d93-8612-17f40a0bf0f4" containerName="registry-server"
Oct 08 16:23:47 crc kubenswrapper[4624]: E1008 16:23:47.346347 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddc8c61c-a417-4d93-8612-17f40a0bf0f4" containerName="extract-content"
Oct 08 16:23:47 crc kubenswrapper[4624]: I1008 16:23:47.346354 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddc8c61c-a417-4d93-8612-17f40a0bf0f4" containerName="extract-content"
Oct 08 16:23:47 crc kubenswrapper[4624]: E1008 16:23:47.346378 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1590c57-8518-4ce4-a823-9f5ab80fa16e" containerName="extract-utilities"
Oct 08 16:23:47 crc kubenswrapper[4624]: I1008 16:23:47.346386 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1590c57-8518-4ce4-a823-9f5ab80fa16e" containerName="extract-utilities"
Oct 08 16:23:47 crc kubenswrapper[4624]: I1008 16:23:47.346656 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddc8c61c-a417-4d93-8612-17f40a0bf0f4" containerName="registry-server"
Oct 08 16:23:47 crc kubenswrapper[4624]: I1008 16:23:47.352758 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1590c57-8518-4ce4-a823-9f5ab80fa16e" containerName="registry-server"
Oct 08 16:23:47 crc kubenswrapper[4624]: I1008 16:23:47.355243 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lqcgf"
Oct 08 16:23:47 crc kubenswrapper[4624]: I1008 16:23:47.362737 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lqcgf"]
Oct 08 16:23:47 crc kubenswrapper[4624]: I1008 16:23:47.410573 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9-utilities\") pod \"community-operators-lqcgf\" (UID: \"1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9\") " pod="openshift-marketplace/community-operators-lqcgf"
Oct 08 16:23:47 crc kubenswrapper[4624]: I1008 16:23:47.410639 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9-catalog-content\") pod \"community-operators-lqcgf\" (UID: \"1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9\") " pod="openshift-marketplace/community-operators-lqcgf"
Oct 08 16:23:47 crc kubenswrapper[4624]: I1008 16:23:47.410804 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sgdj\" (UniqueName: \"kubernetes.io/projected/1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9-kube-api-access-7sgdj\") pod \"community-operators-lqcgf\" (UID: \"1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9\") " pod="openshift-marketplace/community-operators-lqcgf"
Oct 08 16:23:47 crc kubenswrapper[4624]: I1008 16:23:47.513006 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9-utilities\") pod \"community-operators-lqcgf\" (UID: \"1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9\") " pod="openshift-marketplace/community-operators-lqcgf"
Oct 08 16:23:47 crc kubenswrapper[4624]: I1008 16:23:47.513084 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9-catalog-content\") pod \"community-operators-lqcgf\" (UID: \"1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9\") " pod="openshift-marketplace/community-operators-lqcgf"
Oct 08 16:23:47 crc kubenswrapper[4624]: I1008 16:23:47.513171 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sgdj\" (UniqueName: \"kubernetes.io/projected/1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9-kube-api-access-7sgdj\") pod \"community-operators-lqcgf\" (UID: \"1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9\") " pod="openshift-marketplace/community-operators-lqcgf"
Oct 08 16:23:47 crc kubenswrapper[4624]: I1008 16:23:47.513554 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9-utilities\") pod \"community-operators-lqcgf\" (UID: \"1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9\") " pod="openshift-marketplace/community-operators-lqcgf"
Oct 08 16:23:47 crc kubenswrapper[4624]: I1008 16:23:47.513849 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9-catalog-content\") pod \"community-operators-lqcgf\" (UID: \"1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9\") " pod="openshift-marketplace/community-operators-lqcgf"
Oct 08 16:23:47 crc kubenswrapper[4624]: I1008 16:23:47.537907 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sgdj\" (UniqueName: \"kubernetes.io/projected/1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9-kube-api-access-7sgdj\") pod \"community-operators-lqcgf\" (UID: \"1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9\") " pod="openshift-marketplace/community-operators-lqcgf"
Oct 08 16:23:47 crc kubenswrapper[4624]: I1008 16:23:47.685385 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lqcgf"
Oct 08 16:23:48 crc kubenswrapper[4624]: I1008 16:23:48.510947 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lqcgf"]
Oct 08 16:23:49 crc kubenswrapper[4624]: I1008 16:23:49.508474 4624 generic.go:334] "Generic (PLEG): container finished" podID="1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9" containerID="b3d0cf3d6491c0c29623d8047a5aa560642c31637b9b8d4f954eb089224d880a" exitCode=0
Oct 08 16:23:49 crc kubenswrapper[4624]: I1008 16:23:49.508884 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqcgf" event={"ID":"1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9","Type":"ContainerDied","Data":"b3d0cf3d6491c0c29623d8047a5aa560642c31637b9b8d4f954eb089224d880a"}
Oct 08 16:23:49 crc kubenswrapper[4624]: I1008 16:23:49.508930 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqcgf" event={"ID":"1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9","Type":"ContainerStarted","Data":"879deaf45ed1e1af4de5597ac05d80b85fca2ccaad03abd6e4657b8915ef583d"}
Oct 08 16:23:50 crc kubenswrapper[4624]: I1008 16:23:50.521407 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqcgf" event={"ID":"1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9","Type":"ContainerStarted","Data":"24ab2179accb084a2c32296c067a1e38c511921d8b681fe7f835a43a1d8b905c"}
Oct 08 16:23:52 crc kubenswrapper[4624]: I1008 16:23:52.549898 4624 generic.go:334] "Generic (PLEG): container finished" podID="1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9" containerID="24ab2179accb084a2c32296c067a1e38c511921d8b681fe7f835a43a1d8b905c" exitCode=0
Oct 08 16:23:52 crc kubenswrapper[4624]: I1008 16:23:52.550187 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqcgf" event={"ID":"1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9","Type":"ContainerDied","Data":"24ab2179accb084a2c32296c067a1e38c511921d8b681fe7f835a43a1d8b905c"}
Oct 08 16:23:53 crc kubenswrapper[4624]: I1008 16:23:53.578531 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqcgf" event={"ID":"1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9","Type":"ContainerStarted","Data":"07d3f9f9e618f0b080ccec9858c2b336871a6e1901ea8fe5c8bddaaad14abcc4"}
Oct 08 16:23:53 crc kubenswrapper[4624]: I1008 16:23:53.644729 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lqcgf" podStartSLOduration=3.0586505920000002 podStartE2EDuration="6.644681395s" podCreationTimestamp="2025-10-08 16:23:47 +0000 UTC" firstStartedPulling="2025-10-08 16:23:49.512462946 +0000 UTC m=+7254.663398023" lastFinishedPulling="2025-10-08 16:23:53.098493749 +0000 UTC m=+7258.249428826" observedRunningTime="2025-10-08 16:23:53.622248949 +0000 UTC m=+7258.773184046" watchObservedRunningTime="2025-10-08 16:23:53.644681395 +0000 UTC m=+7258.795616492"
Oct 08 16:23:57 crc kubenswrapper[4624]: I1008 16:23:57.686428 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lqcgf"
Oct 08 16:23:57 crc kubenswrapper[4624]: I1008 16:23:57.686804 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lqcgf"
Oct 08 16:23:58 crc kubenswrapper[4624]: I1008 16:23:58.466266 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071"
Oct 08 16:23:58 crc kubenswrapper[4624]: E1008 16:23:58.466951 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:23:58 crc kubenswrapper[4624]: I1008 16:23:58.741490 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-lqcgf" podUID="1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9" containerName="registry-server" probeResult="failure" output=<
Oct 08 16:23:58 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 16:23:58 crc kubenswrapper[4624]: >
16:23:53.098493749 +0000 UTC m=+7258.249428826" observedRunningTime="2025-10-08 16:23:53.622248949 +0000 UTC m=+7258.773184046" watchObservedRunningTime="2025-10-08 16:23:53.644681395 +0000 UTC m=+7258.795616492"
Oct 08 16:23:57 crc kubenswrapper[4624]: I1008 16:23:57.686428 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lqcgf"
Oct 08 16:23:57 crc kubenswrapper[4624]: I1008 16:23:57.686804 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lqcgf"
Oct 08 16:23:58 crc kubenswrapper[4624]: I1008 16:23:58.466266 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071"
Oct 08 16:23:58 crc kubenswrapper[4624]: E1008 16:23:58.466951 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:23:58 crc kubenswrapper[4624]: I1008 16:23:58.741490 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-lqcgf" podUID="1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9" containerName="registry-server" probeResult="failure" output=<
Oct 08 16:23:58 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 16:23:58 crc kubenswrapper[4624]: >
Oct 08 16:24:07 crc kubenswrapper[4624]: I1008 16:24:07.753159 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lqcgf"
Oct 08 16:24:07 crc kubenswrapper[4624]: I1008 16:24:07.817942 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lqcgf"
Oct 08 16:24:08 crc kubenswrapper[4624]: I1008 16:24:08.012315 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lqcgf"]
Oct 08 16:24:09 crc kubenswrapper[4624]: I1008 16:24:09.747733 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lqcgf" podUID="1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9" containerName="registry-server" containerID="cri-o://07d3f9f9e618f0b080ccec9858c2b336871a6e1901ea8fe5c8bddaaad14abcc4" gracePeriod=2
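The sequence above is the normal marketplace catalog pod lifecycle: two short-lived extract containers run to completion (the exitCode=0 ContainerDied events), registry-server starts, and a startup probe then polls port 50051 with a 1-second budget until the catalog index is actually serving, failing once at 16:23:58 before succeeding at 16:24:07. The quoted probe output is what a gRPC health-check probe prints on timeout; the sketch below is a simplified TCP-connect stand-in for that check, not the actual probe command:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Stand-in for the startup check: the real probe is a gRPC health check
	// against :50051, but a bare TCP dial with the same 1s budget fails the
	// same way while registry-server is still loading its index.
	conn, err := net.DialTimeout("tcp", "localhost:50051", time.Second)
	if err != nil {
		fmt.Println("probe failure:", err)
		return
	}
	conn.Close()
	fmt.Println("probe success: registry-server is accepting connections")
}

The DELETE at 16:24:08, seconds after the pod goes ready, is routine catalog churn rather than a failure; redhat-operators-qwf22 follows the identical pattern at 16:27-16:28 further down.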
Need to start a new one" pod="openshift-marketplace/community-operators-lqcgf" Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.253807 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9-catalog-content\") pod \"1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9\" (UID: \"1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9\") " Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.253976 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sgdj\" (UniqueName: \"kubernetes.io/projected/1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9-kube-api-access-7sgdj\") pod \"1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9\" (UID: \"1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9\") " Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.254051 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9-utilities\") pod \"1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9\" (UID: \"1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9\") " Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.255361 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9-utilities" (OuterVolumeSpecName: "utilities") pod "1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9" (UID: "1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.277696 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9-kube-api-access-7sgdj" (OuterVolumeSpecName: "kube-api-access-7sgdj") pod "1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9" (UID: "1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9"). InnerVolumeSpecName "kube-api-access-7sgdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.338153 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9" (UID: "1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.356288 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sgdj\" (UniqueName: \"kubernetes.io/projected/1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9-kube-api-access-7sgdj\") on node \"crc\" DevicePath \"\"" Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.356326 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.356336 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.761567 4624 generic.go:334] "Generic (PLEG): container finished" podID="1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9" containerID="07d3f9f9e618f0b080ccec9858c2b336871a6e1901ea8fe5c8bddaaad14abcc4" exitCode=0 Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.761619 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqcgf" event={"ID":"1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9","Type":"ContainerDied","Data":"07d3f9f9e618f0b080ccec9858c2b336871a6e1901ea8fe5c8bddaaad14abcc4"} Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.761681 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqcgf" event={"ID":"1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9","Type":"ContainerDied","Data":"879deaf45ed1e1af4de5597ac05d80b85fca2ccaad03abd6e4657b8915ef583d"} Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.761676 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lqcgf" Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.761701 4624 scope.go:117] "RemoveContainer" containerID="07d3f9f9e618f0b080ccec9858c2b336871a6e1901ea8fe5c8bddaaad14abcc4" Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.799038 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lqcgf"] Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.804769 4624 scope.go:117] "RemoveContainer" containerID="24ab2179accb084a2c32296c067a1e38c511921d8b681fe7f835a43a1d8b905c" Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.810023 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lqcgf"] Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.846045 4624 scope.go:117] "RemoveContainer" containerID="b3d0cf3d6491c0c29623d8047a5aa560642c31637b9b8d4f954eb089224d880a" Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.895431 4624 scope.go:117] "RemoveContainer" containerID="07d3f9f9e618f0b080ccec9858c2b336871a6e1901ea8fe5c8bddaaad14abcc4" Oct 08 16:24:10 crc kubenswrapper[4624]: E1008 16:24:10.895997 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07d3f9f9e618f0b080ccec9858c2b336871a6e1901ea8fe5c8bddaaad14abcc4\": container with ID starting with 07d3f9f9e618f0b080ccec9858c2b336871a6e1901ea8fe5c8bddaaad14abcc4 not found: ID does not exist" containerID="07d3f9f9e618f0b080ccec9858c2b336871a6e1901ea8fe5c8bddaaad14abcc4" Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.896032 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07d3f9f9e618f0b080ccec9858c2b336871a6e1901ea8fe5c8bddaaad14abcc4"} err="failed to get container status \"07d3f9f9e618f0b080ccec9858c2b336871a6e1901ea8fe5c8bddaaad14abcc4\": rpc error: code = NotFound desc = could not find container \"07d3f9f9e618f0b080ccec9858c2b336871a6e1901ea8fe5c8bddaaad14abcc4\": container with ID starting with 07d3f9f9e618f0b080ccec9858c2b336871a6e1901ea8fe5c8bddaaad14abcc4 not found: ID does not exist" Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.896060 4624 scope.go:117] "RemoveContainer" containerID="24ab2179accb084a2c32296c067a1e38c511921d8b681fe7f835a43a1d8b905c" Oct 08 16:24:10 crc kubenswrapper[4624]: E1008 16:24:10.896422 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24ab2179accb084a2c32296c067a1e38c511921d8b681fe7f835a43a1d8b905c\": container with ID starting with 24ab2179accb084a2c32296c067a1e38c511921d8b681fe7f835a43a1d8b905c not found: ID does not exist" containerID="24ab2179accb084a2c32296c067a1e38c511921d8b681fe7f835a43a1d8b905c" Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.896451 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24ab2179accb084a2c32296c067a1e38c511921d8b681fe7f835a43a1d8b905c"} err="failed to get container status \"24ab2179accb084a2c32296c067a1e38c511921d8b681fe7f835a43a1d8b905c\": rpc error: code = NotFound desc = could not find container \"24ab2179accb084a2c32296c067a1e38c511921d8b681fe7f835a43a1d8b905c\": container with ID starting with 24ab2179accb084a2c32296c067a1e38c511921d8b681fe7f835a43a1d8b905c not found: ID does not exist" Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.896490 4624 scope.go:117] "RemoveContainer" 
containerID="b3d0cf3d6491c0c29623d8047a5aa560642c31637b9b8d4f954eb089224d880a" Oct 08 16:24:10 crc kubenswrapper[4624]: E1008 16:24:10.897337 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3d0cf3d6491c0c29623d8047a5aa560642c31637b9b8d4f954eb089224d880a\": container with ID starting with b3d0cf3d6491c0c29623d8047a5aa560642c31637b9b8d4f954eb089224d880a not found: ID does not exist" containerID="b3d0cf3d6491c0c29623d8047a5aa560642c31637b9b8d4f954eb089224d880a" Oct 08 16:24:10 crc kubenswrapper[4624]: I1008 16:24:10.897362 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3d0cf3d6491c0c29623d8047a5aa560642c31637b9b8d4f954eb089224d880a"} err="failed to get container status \"b3d0cf3d6491c0c29623d8047a5aa560642c31637b9b8d4f954eb089224d880a\": rpc error: code = NotFound desc = could not find container \"b3d0cf3d6491c0c29623d8047a5aa560642c31637b9b8d4f954eb089224d880a\": container with ID starting with b3d0cf3d6491c0c29623d8047a5aa560642c31637b9b8d4f954eb089224d880a not found: ID does not exist" Oct 08 16:24:11 crc kubenswrapper[4624]: I1008 16:24:11.466928 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071" Oct 08 16:24:11 crc kubenswrapper[4624]: E1008 16:24:11.467275 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:24:11 crc kubenswrapper[4624]: I1008 16:24:11.483800 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9" path="/var/lib/kubelet/pods/1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9/volumes" Oct 08 16:24:23 crc kubenswrapper[4624]: I1008 16:24:23.465886 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071" Oct 08 16:24:23 crc kubenswrapper[4624]: E1008 16:24:23.466649 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:24:35 crc kubenswrapper[4624]: I1008 16:24:35.476018 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071" Oct 08 16:24:35 crc kubenswrapper[4624]: E1008 16:24:35.476916 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:24:46 crc kubenswrapper[4624]: I1008 16:24:46.472576 4624 scope.go:117] "RemoveContainer" 
containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071" Oct 08 16:24:46 crc kubenswrapper[4624]: E1008 16:24:46.482301 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:24:59 crc kubenswrapper[4624]: I1008 16:24:59.466185 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071" Oct 08 16:24:59 crc kubenswrapper[4624]: E1008 16:24:59.466992 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:25:13 crc kubenswrapper[4624]: I1008 16:25:13.466416 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071" Oct 08 16:25:13 crc kubenswrapper[4624]: E1008 16:25:13.467554 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:25:28 crc kubenswrapper[4624]: I1008 16:25:28.466962 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071" Oct 08 16:25:28 crc kubenswrapper[4624]: E1008 16:25:28.468920 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:25:39 crc kubenswrapper[4624]: I1008 16:25:39.482676 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071" Oct 08 16:25:39 crc kubenswrapper[4624]: E1008 16:25:39.483624 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:25:50 crc kubenswrapper[4624]: I1008 16:25:50.467008 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071" Oct 08 16:25:50 crc kubenswrapper[4624]: E1008 16:25:50.467962 4624 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:26:04 crc kubenswrapper[4624]: I1008 16:26:04.466978 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071" Oct 08 16:26:04 crc kubenswrapper[4624]: E1008 16:26:04.467984 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:26:17 crc kubenswrapper[4624]: I1008 16:26:17.465569 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071" Oct 08 16:26:17 crc kubenswrapper[4624]: E1008 16:26:17.466405 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:26:29 crc kubenswrapper[4624]: I1008 16:26:29.465931 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071" Oct 08 16:26:29 crc kubenswrapper[4624]: E1008 16:26:29.466841 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:26:41 crc kubenswrapper[4624]: I1008 16:26:41.467572 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071" Oct 08 16:26:42 crc kubenswrapper[4624]: I1008 16:26:42.379143 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"9f7d21693ac4502c9409d8eb4e0331e0ba5bbf34cb00428101e46796819924a0"} Oct 08 16:27:16 crc kubenswrapper[4624]: E1008 16:27:16.025565 4624 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.154:42686->38.102.83.154:39627: write tcp 38.102.83.154:42686->38.102.83.154:39627: write: broken pipe Oct 08 16:27:41 crc kubenswrapper[4624]: I1008 16:27:41.363197 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qwf22"] Oct 08 16:27:41 crc kubenswrapper[4624]: E1008 16:27:41.364365 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9" containerName="extract-utilities" Oct 
08 16:27:41 crc kubenswrapper[4624]: I1008 16:27:41.364384 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9" containerName="extract-utilities" Oct 08 16:27:41 crc kubenswrapper[4624]: E1008 16:27:41.364405 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9" containerName="extract-content" Oct 08 16:27:41 crc kubenswrapper[4624]: I1008 16:27:41.364414 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9" containerName="extract-content" Oct 08 16:27:41 crc kubenswrapper[4624]: E1008 16:27:41.364438 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9" containerName="registry-server" Oct 08 16:27:41 crc kubenswrapper[4624]: I1008 16:27:41.364445 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9" containerName="registry-server" Oct 08 16:27:41 crc kubenswrapper[4624]: I1008 16:27:41.364725 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b1f7d19-dce8-44e0-bc6a-21f61d0bbdb9" containerName="registry-server" Oct 08 16:27:41 crc kubenswrapper[4624]: I1008 16:27:41.366888 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qwf22" Oct 08 16:27:41 crc kubenswrapper[4624]: I1008 16:27:41.378069 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qwf22"] Oct 08 16:27:41 crc kubenswrapper[4624]: I1008 16:27:41.498962 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvxjp\" (UniqueName: \"kubernetes.io/projected/a41db9c6-21ff-4d83-8711-c1df20ebf15a-kube-api-access-zvxjp\") pod \"redhat-operators-qwf22\" (UID: \"a41db9c6-21ff-4d83-8711-c1df20ebf15a\") " pod="openshift-marketplace/redhat-operators-qwf22" Oct 08 16:27:41 crc kubenswrapper[4624]: I1008 16:27:41.499028 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a41db9c6-21ff-4d83-8711-c1df20ebf15a-catalog-content\") pod \"redhat-operators-qwf22\" (UID: \"a41db9c6-21ff-4d83-8711-c1df20ebf15a\") " pod="openshift-marketplace/redhat-operators-qwf22" Oct 08 16:27:41 crc kubenswrapper[4624]: I1008 16:27:41.499075 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a41db9c6-21ff-4d83-8711-c1df20ebf15a-utilities\") pod \"redhat-operators-qwf22\" (UID: \"a41db9c6-21ff-4d83-8711-c1df20ebf15a\") " pod="openshift-marketplace/redhat-operators-qwf22" Oct 08 16:27:41 crc kubenswrapper[4624]: I1008 16:27:41.602058 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvxjp\" (UniqueName: \"kubernetes.io/projected/a41db9c6-21ff-4d83-8711-c1df20ebf15a-kube-api-access-zvxjp\") pod \"redhat-operators-qwf22\" (UID: \"a41db9c6-21ff-4d83-8711-c1df20ebf15a\") " pod="openshift-marketplace/redhat-operators-qwf22" Oct 08 16:27:41 crc kubenswrapper[4624]: I1008 16:27:41.602118 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a41db9c6-21ff-4d83-8711-c1df20ebf15a-catalog-content\") pod \"redhat-operators-qwf22\" (UID: \"a41db9c6-21ff-4d83-8711-c1df20ebf15a\") " 
pod="openshift-marketplace/redhat-operators-qwf22" Oct 08 16:27:41 crc kubenswrapper[4624]: I1008 16:27:41.602166 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a41db9c6-21ff-4d83-8711-c1df20ebf15a-utilities\") pod \"redhat-operators-qwf22\" (UID: \"a41db9c6-21ff-4d83-8711-c1df20ebf15a\") " pod="openshift-marketplace/redhat-operators-qwf22" Oct 08 16:27:41 crc kubenswrapper[4624]: I1008 16:27:41.602792 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a41db9c6-21ff-4d83-8711-c1df20ebf15a-utilities\") pod \"redhat-operators-qwf22\" (UID: \"a41db9c6-21ff-4d83-8711-c1df20ebf15a\") " pod="openshift-marketplace/redhat-operators-qwf22" Oct 08 16:27:41 crc kubenswrapper[4624]: I1008 16:27:41.602796 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a41db9c6-21ff-4d83-8711-c1df20ebf15a-catalog-content\") pod \"redhat-operators-qwf22\" (UID: \"a41db9c6-21ff-4d83-8711-c1df20ebf15a\") " pod="openshift-marketplace/redhat-operators-qwf22" Oct 08 16:27:41 crc kubenswrapper[4624]: I1008 16:27:41.624768 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvxjp\" (UniqueName: \"kubernetes.io/projected/a41db9c6-21ff-4d83-8711-c1df20ebf15a-kube-api-access-zvxjp\") pod \"redhat-operators-qwf22\" (UID: \"a41db9c6-21ff-4d83-8711-c1df20ebf15a\") " pod="openshift-marketplace/redhat-operators-qwf22" Oct 08 16:27:41 crc kubenswrapper[4624]: I1008 16:27:41.700476 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qwf22" Oct 08 16:27:42 crc kubenswrapper[4624]: I1008 16:27:42.258825 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qwf22"] Oct 08 16:27:43 crc kubenswrapper[4624]: I1008 16:27:43.012909 4624 generic.go:334] "Generic (PLEG): container finished" podID="a41db9c6-21ff-4d83-8711-c1df20ebf15a" containerID="dde0a12e5875ad2a0f7ebacc32149ae26cdfc7d345ec16115c4249e4eaf6eb17" exitCode=0 Oct 08 16:27:43 crc kubenswrapper[4624]: I1008 16:27:43.012976 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qwf22" event={"ID":"a41db9c6-21ff-4d83-8711-c1df20ebf15a","Type":"ContainerDied","Data":"dde0a12e5875ad2a0f7ebacc32149ae26cdfc7d345ec16115c4249e4eaf6eb17"} Oct 08 16:27:43 crc kubenswrapper[4624]: I1008 16:27:43.013267 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qwf22" event={"ID":"a41db9c6-21ff-4d83-8711-c1df20ebf15a","Type":"ContainerStarted","Data":"012c154cc0229e2bfaa76baf6fba223e2c8d70fa0d79c1176154653197566b19"} Oct 08 16:27:43 crc kubenswrapper[4624]: I1008 16:27:43.026902 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 16:27:44 crc kubenswrapper[4624]: I1008 16:27:44.025334 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qwf22" event={"ID":"a41db9c6-21ff-4d83-8711-c1df20ebf15a","Type":"ContainerStarted","Data":"e2802dd17d3c3ff9144735275b15c4e71bad5c40f01d69cf8a7707514d2bd3c5"} Oct 08 16:27:48 crc kubenswrapper[4624]: I1008 16:27:48.068750 4624 generic.go:334] "Generic (PLEG): container finished" podID="a41db9c6-21ff-4d83-8711-c1df20ebf15a" 
containerID="e2802dd17d3c3ff9144735275b15c4e71bad5c40f01d69cf8a7707514d2bd3c5" exitCode=0 Oct 08 16:27:48 crc kubenswrapper[4624]: I1008 16:27:48.068806 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qwf22" event={"ID":"a41db9c6-21ff-4d83-8711-c1df20ebf15a","Type":"ContainerDied","Data":"e2802dd17d3c3ff9144735275b15c4e71bad5c40f01d69cf8a7707514d2bd3c5"} Oct 08 16:27:49 crc kubenswrapper[4624]: I1008 16:27:49.081760 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qwf22" event={"ID":"a41db9c6-21ff-4d83-8711-c1df20ebf15a","Type":"ContainerStarted","Data":"0e3170800f55ef139ecd6f5c00c8b7d184c735606c3f0a7795694aa07c1efb04"} Oct 08 16:27:51 crc kubenswrapper[4624]: I1008 16:27:51.701561 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qwf22" Oct 08 16:27:51 crc kubenswrapper[4624]: I1008 16:27:51.703803 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qwf22" Oct 08 16:27:52 crc kubenswrapper[4624]: I1008 16:27:52.763087 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qwf22" podUID="a41db9c6-21ff-4d83-8711-c1df20ebf15a" containerName="registry-server" probeResult="failure" output=< Oct 08 16:27:52 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 16:27:52 crc kubenswrapper[4624]: > Oct 08 16:28:02 crc kubenswrapper[4624]: I1008 16:28:02.768270 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qwf22" podUID="a41db9c6-21ff-4d83-8711-c1df20ebf15a" containerName="registry-server" probeResult="failure" output=< Oct 08 16:28:02 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 16:28:02 crc kubenswrapper[4624]: > Oct 08 16:28:12 crc kubenswrapper[4624]: I1008 16:28:12.760054 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qwf22" podUID="a41db9c6-21ff-4d83-8711-c1df20ebf15a" containerName="registry-server" probeResult="failure" output=< Oct 08 16:28:12 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 16:28:12 crc kubenswrapper[4624]: > Oct 08 16:28:21 crc kubenswrapper[4624]: I1008 16:28:21.753357 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qwf22" Oct 08 16:28:21 crc kubenswrapper[4624]: I1008 16:28:21.773397 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qwf22" podStartSLOduration=35.003811453 podStartE2EDuration="40.773375264s" podCreationTimestamp="2025-10-08 16:27:41 +0000 UTC" firstStartedPulling="2025-10-08 16:27:43.016743902 +0000 UTC m=+7488.167678979" lastFinishedPulling="2025-10-08 16:27:48.786307713 +0000 UTC m=+7493.937242790" observedRunningTime="2025-10-08 16:27:49.110661817 +0000 UTC m=+7494.261596894" watchObservedRunningTime="2025-10-08 16:28:21.773375264 +0000 UTC m=+7526.924310341" Oct 08 16:28:21 crc kubenswrapper[4624]: I1008 16:28:21.812995 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qwf22" Oct 08 16:28:21 crc kubenswrapper[4624]: I1008 16:28:21.991065 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qwf22"] Oct 08 16:28:23 crc 
kubenswrapper[4624]: I1008 16:28:23.411080 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qwf22" podUID="a41db9c6-21ff-4d83-8711-c1df20ebf15a" containerName="registry-server" containerID="cri-o://0e3170800f55ef139ecd6f5c00c8b7d184c735606c3f0a7795694aa07c1efb04" gracePeriod=2 Oct 08 16:28:24 crc kubenswrapper[4624]: I1008 16:28:24.427242 4624 generic.go:334] "Generic (PLEG): container finished" podID="a41db9c6-21ff-4d83-8711-c1df20ebf15a" containerID="0e3170800f55ef139ecd6f5c00c8b7d184c735606c3f0a7795694aa07c1efb04" exitCode=0 Oct 08 16:28:24 crc kubenswrapper[4624]: I1008 16:28:24.429158 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qwf22" event={"ID":"a41db9c6-21ff-4d83-8711-c1df20ebf15a","Type":"ContainerDied","Data":"0e3170800f55ef139ecd6f5c00c8b7d184c735606c3f0a7795694aa07c1efb04"} Oct 08 16:28:24 crc kubenswrapper[4624]: I1008 16:28:24.429275 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qwf22" event={"ID":"a41db9c6-21ff-4d83-8711-c1df20ebf15a","Type":"ContainerDied","Data":"012c154cc0229e2bfaa76baf6fba223e2c8d70fa0d79c1176154653197566b19"} Oct 08 16:28:24 crc kubenswrapper[4624]: I1008 16:28:24.429290 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="012c154cc0229e2bfaa76baf6fba223e2c8d70fa0d79c1176154653197566b19" Oct 08 16:28:24 crc kubenswrapper[4624]: I1008 16:28:24.469932 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qwf22" Oct 08 16:28:24 crc kubenswrapper[4624]: I1008 16:28:24.582204 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvxjp\" (UniqueName: \"kubernetes.io/projected/a41db9c6-21ff-4d83-8711-c1df20ebf15a-kube-api-access-zvxjp\") pod \"a41db9c6-21ff-4d83-8711-c1df20ebf15a\" (UID: \"a41db9c6-21ff-4d83-8711-c1df20ebf15a\") " Oct 08 16:28:24 crc kubenswrapper[4624]: I1008 16:28:24.582363 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a41db9c6-21ff-4d83-8711-c1df20ebf15a-catalog-content\") pod \"a41db9c6-21ff-4d83-8711-c1df20ebf15a\" (UID: \"a41db9c6-21ff-4d83-8711-c1df20ebf15a\") " Oct 08 16:28:24 crc kubenswrapper[4624]: I1008 16:28:24.582530 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a41db9c6-21ff-4d83-8711-c1df20ebf15a-utilities\") pod \"a41db9c6-21ff-4d83-8711-c1df20ebf15a\" (UID: \"a41db9c6-21ff-4d83-8711-c1df20ebf15a\") " Oct 08 16:28:24 crc kubenswrapper[4624]: I1008 16:28:24.586160 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a41db9c6-21ff-4d83-8711-c1df20ebf15a-utilities" (OuterVolumeSpecName: "utilities") pod "a41db9c6-21ff-4d83-8711-c1df20ebf15a" (UID: "a41db9c6-21ff-4d83-8711-c1df20ebf15a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:28:24 crc kubenswrapper[4624]: I1008 16:28:24.614413 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a41db9c6-21ff-4d83-8711-c1df20ebf15a-kube-api-access-zvxjp" (OuterVolumeSpecName: "kube-api-access-zvxjp") pod "a41db9c6-21ff-4d83-8711-c1df20ebf15a" (UID: "a41db9c6-21ff-4d83-8711-c1df20ebf15a"). 
InnerVolumeSpecName "kube-api-access-zvxjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 16:28:24 crc kubenswrapper[4624]: I1008 16:28:24.679198 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a41db9c6-21ff-4d83-8711-c1df20ebf15a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a41db9c6-21ff-4d83-8711-c1df20ebf15a" (UID: "a41db9c6-21ff-4d83-8711-c1df20ebf15a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:28:24 crc kubenswrapper[4624]: I1008 16:28:24.685076 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a41db9c6-21ff-4d83-8711-c1df20ebf15a-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 16:28:24 crc kubenswrapper[4624]: I1008 16:28:24.685370 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvxjp\" (UniqueName: \"kubernetes.io/projected/a41db9c6-21ff-4d83-8711-c1df20ebf15a-kube-api-access-zvxjp\") on node \"crc\" DevicePath \"\"" Oct 08 16:28:24 crc kubenswrapper[4624]: I1008 16:28:24.685460 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a41db9c6-21ff-4d83-8711-c1df20ebf15a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 16:28:25 crc kubenswrapper[4624]: I1008 16:28:25.439250 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qwf22" Oct 08 16:28:25 crc kubenswrapper[4624]: I1008 16:28:25.487830 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qwf22"] Oct 08 16:28:25 crc kubenswrapper[4624]: I1008 16:28:25.495575 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qwf22"] Oct 08 16:28:27 crc kubenswrapper[4624]: I1008 16:28:27.478455 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a41db9c6-21ff-4d83-8711-c1df20ebf15a" path="/var/lib/kubelet/pods/a41db9c6-21ff-4d83-8711-c1df20ebf15a/volumes" Oct 08 16:29:00 crc kubenswrapper[4624]: I1008 16:29:00.076902 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 16:29:00 crc kubenswrapper[4624]: I1008 16:29:00.077474 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 16:29:30 crc kubenswrapper[4624]: I1008 16:29:30.076067 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 16:29:30 crc kubenswrapper[4624]: I1008 16:29:30.076586 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.076557 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.077160 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.077259 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.078111 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9f7d21693ac4502c9409d8eb4e0331e0ba5bbf34cb00428101e46796819924a0"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.078162 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://9f7d21693ac4502c9409d8eb4e0331e0ba5bbf34cb00428101e46796819924a0" gracePeriod=600 Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.242754 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r"] Oct 08 16:30:00 crc kubenswrapper[4624]: E1008 16:30:00.243362 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a41db9c6-21ff-4d83-8711-c1df20ebf15a" containerName="extract-utilities" Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.243381 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a41db9c6-21ff-4d83-8711-c1df20ebf15a" containerName="extract-utilities" Oct 08 16:30:00 crc kubenswrapper[4624]: E1008 16:30:00.243404 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a41db9c6-21ff-4d83-8711-c1df20ebf15a" containerName="extract-content" Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.243410 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a41db9c6-21ff-4d83-8711-c1df20ebf15a" containerName="extract-content" Oct 08 16:30:00 crc kubenswrapper[4624]: E1008 16:30:00.243446 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a41db9c6-21ff-4d83-8711-c1df20ebf15a" containerName="registry-server" Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.243453 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a41db9c6-21ff-4d83-8711-c1df20ebf15a" containerName="registry-server" Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.243745 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="a41db9c6-21ff-4d83-8711-c1df20ebf15a" containerName="registry-server" Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.244602 4624 util.go:30] "No sandbox for 
Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.244602 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r"
Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.260814 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 08 16:30:00 crc kubenswrapper[4624]: E1008 16:30:00.267433 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a106d69_d531_4ee4_a9ed_505988ebd24d.slice/crio-conmon-9f7d21693ac4502c9409d8eb4e0331e0ba5bbf34cb00428101e46796819924a0.scope\": RecentStats: unable to find data in memory cache]"
Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.269777 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r"]
Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.283505 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.388096 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="9f7d21693ac4502c9409d8eb4e0331e0ba5bbf34cb00428101e46796819924a0" exitCode=0
Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.388157 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"9f7d21693ac4502c9409d8eb4e0331e0ba5bbf34cb00428101e46796819924a0"}
Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.388201 4624 scope.go:117] "RemoveContainer" containerID="922b2b1ef237a6cbacff3da04203c432cf4c2420285ad3fc10595778e4ed5071"
Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.398478 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea647041-60ce-41ca-a1b2-872205b6f242-secret-volume\") pod \"collect-profiles-29332350-pbt6r\" (UID: \"ea647041-60ce-41ca-a1b2-872205b6f242\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r"
Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.398535 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea647041-60ce-41ca-a1b2-872205b6f242-config-volume\") pod \"collect-profiles-29332350-pbt6r\" (UID: \"ea647041-60ce-41ca-a1b2-872205b6f242\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r"
Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.398944 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxz6p\" (UniqueName: \"kubernetes.io/projected/ea647041-60ce-41ca-a1b2-872205b6f242-kube-api-access-dxz6p\") pod \"collect-profiles-29332350-pbt6r\" (UID: \"ea647041-60ce-41ca-a1b2-872205b6f242\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r"
Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.501568 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea647041-60ce-41ca-a1b2-872205b6f242-secret-volume\") pod \"collect-profiles-29332350-pbt6r\" (UID: 
\"ea647041-60ce-41ca-a1b2-872205b6f242\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r" Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.501633 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea647041-60ce-41ca-a1b2-872205b6f242-config-volume\") pod \"collect-profiles-29332350-pbt6r\" (UID: \"ea647041-60ce-41ca-a1b2-872205b6f242\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r" Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.502867 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxz6p\" (UniqueName: \"kubernetes.io/projected/ea647041-60ce-41ca-a1b2-872205b6f242-kube-api-access-dxz6p\") pod \"collect-profiles-29332350-pbt6r\" (UID: \"ea647041-60ce-41ca-a1b2-872205b6f242\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r" Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.503911 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea647041-60ce-41ca-a1b2-872205b6f242-config-volume\") pod \"collect-profiles-29332350-pbt6r\" (UID: \"ea647041-60ce-41ca-a1b2-872205b6f242\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r" Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.514477 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea647041-60ce-41ca-a1b2-872205b6f242-secret-volume\") pod \"collect-profiles-29332350-pbt6r\" (UID: \"ea647041-60ce-41ca-a1b2-872205b6f242\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r" Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.523608 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxz6p\" (UniqueName: \"kubernetes.io/projected/ea647041-60ce-41ca-a1b2-872205b6f242-kube-api-access-dxz6p\") pod \"collect-profiles-29332350-pbt6r\" (UID: \"ea647041-60ce-41ca-a1b2-872205b6f242\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r" Oct 08 16:30:00 crc kubenswrapper[4624]: I1008 16:30:00.586966 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r" Oct 08 16:30:01 crc kubenswrapper[4624]: I1008 16:30:01.136629 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r"] Oct 08 16:30:01 crc kubenswrapper[4624]: W1008 16:30:01.138423 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea647041_60ce_41ca_a1b2_872205b6f242.slice/crio-e1e8bfac6a22a6a03d90d387cf3f8dcbab09a45eae7a60ec8c007006748336b4 WatchSource:0}: Error finding container e1e8bfac6a22a6a03d90d387cf3f8dcbab09a45eae7a60ec8c007006748336b4: Status 404 returned error can't find the container with id e1e8bfac6a22a6a03d90d387cf3f8dcbab09a45eae7a60ec8c007006748336b4 Oct 08 16:30:01 crc kubenswrapper[4624]: I1008 16:30:01.401828 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c"} Oct 08 16:30:01 crc kubenswrapper[4624]: I1008 16:30:01.403825 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r" event={"ID":"ea647041-60ce-41ca-a1b2-872205b6f242","Type":"ContainerStarted","Data":"2e0413c759fe30d8775017dcd9614bc9d5a7d808435391ae614a3213ecaa1cd2"} Oct 08 16:30:01 crc kubenswrapper[4624]: I1008 16:30:01.403876 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r" event={"ID":"ea647041-60ce-41ca-a1b2-872205b6f242","Type":"ContainerStarted","Data":"e1e8bfac6a22a6a03d90d387cf3f8dcbab09a45eae7a60ec8c007006748336b4"} Oct 08 16:30:01 crc kubenswrapper[4624]: I1008 16:30:01.440440 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r" podStartSLOduration=1.440418046 podStartE2EDuration="1.440418046s" podCreationTimestamp="2025-10-08 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 16:30:01.43430974 +0000 UTC m=+7626.585244817" watchObservedRunningTime="2025-10-08 16:30:01.440418046 +0000 UTC m=+7626.591353123" Oct 08 16:30:02 crc kubenswrapper[4624]: I1008 16:30:02.430179 4624 generic.go:334] "Generic (PLEG): container finished" podID="ea647041-60ce-41ca-a1b2-872205b6f242" containerID="2e0413c759fe30d8775017dcd9614bc9d5a7d808435391ae614a3213ecaa1cd2" exitCode=0 Oct 08 16:30:02 crc kubenswrapper[4624]: I1008 16:30:02.430371 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r" event={"ID":"ea647041-60ce-41ca-a1b2-872205b6f242","Type":"ContainerDied","Data":"2e0413c759fe30d8775017dcd9614bc9d5a7d808435391ae614a3213ecaa1cd2"} Oct 08 16:30:03 crc kubenswrapper[4624]: I1008 16:30:03.856992 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r" Oct 08 16:30:03 crc kubenswrapper[4624]: I1008 16:30:03.979940 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea647041-60ce-41ca-a1b2-872205b6f242-config-volume\") pod \"ea647041-60ce-41ca-a1b2-872205b6f242\" (UID: \"ea647041-60ce-41ca-a1b2-872205b6f242\") " Oct 08 16:30:03 crc kubenswrapper[4624]: I1008 16:30:03.980272 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxz6p\" (UniqueName: \"kubernetes.io/projected/ea647041-60ce-41ca-a1b2-872205b6f242-kube-api-access-dxz6p\") pod \"ea647041-60ce-41ca-a1b2-872205b6f242\" (UID: \"ea647041-60ce-41ca-a1b2-872205b6f242\") " Oct 08 16:30:03 crc kubenswrapper[4624]: I1008 16:30:03.980330 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea647041-60ce-41ca-a1b2-872205b6f242-secret-volume\") pod \"ea647041-60ce-41ca-a1b2-872205b6f242\" (UID: \"ea647041-60ce-41ca-a1b2-872205b6f242\") " Oct 08 16:30:03 crc kubenswrapper[4624]: I1008 16:30:03.981437 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea647041-60ce-41ca-a1b2-872205b6f242-config-volume" (OuterVolumeSpecName: "config-volume") pod "ea647041-60ce-41ca-a1b2-872205b6f242" (UID: "ea647041-60ce-41ca-a1b2-872205b6f242"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 16:30:03 crc kubenswrapper[4624]: I1008 16:30:03.987789 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea647041-60ce-41ca-a1b2-872205b6f242-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ea647041-60ce-41ca-a1b2-872205b6f242" (UID: "ea647041-60ce-41ca-a1b2-872205b6f242"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 16:30:03 crc kubenswrapper[4624]: I1008 16:30:03.987905 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea647041-60ce-41ca-a1b2-872205b6f242-kube-api-access-dxz6p" (OuterVolumeSpecName: "kube-api-access-dxz6p") pod "ea647041-60ce-41ca-a1b2-872205b6f242" (UID: "ea647041-60ce-41ca-a1b2-872205b6f242"). InnerVolumeSpecName "kube-api-access-dxz6p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 16:30:04 crc kubenswrapper[4624]: I1008 16:30:04.082536 4624 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea647041-60ce-41ca-a1b2-872205b6f242-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 16:30:04 crc kubenswrapper[4624]: I1008 16:30:04.082573 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxz6p\" (UniqueName: \"kubernetes.io/projected/ea647041-60ce-41ca-a1b2-872205b6f242-kube-api-access-dxz6p\") on node \"crc\" DevicePath \"\"" Oct 08 16:30:04 crc kubenswrapper[4624]: I1008 16:30:04.082583 4624 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea647041-60ce-41ca-a1b2-872205b6f242-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 16:30:04 crc kubenswrapper[4624]: I1008 16:30:04.451579 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r" event={"ID":"ea647041-60ce-41ca-a1b2-872205b6f242","Type":"ContainerDied","Data":"e1e8bfac6a22a6a03d90d387cf3f8dcbab09a45eae7a60ec8c007006748336b4"} Oct 08 16:30:04 crc kubenswrapper[4624]: I1008 16:30:04.451617 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r" Oct 08 16:30:04 crc kubenswrapper[4624]: I1008 16:30:04.452039 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1e8bfac6a22a6a03d90d387cf3f8dcbab09a45eae7a60ec8c007006748336b4" Oct 08 16:30:04 crc kubenswrapper[4624]: I1008 16:30:04.536530 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf"] Oct 08 16:30:04 crc kubenswrapper[4624]: I1008 16:30:04.550383 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332305-b22lf"] Oct 08 16:30:05 crc kubenswrapper[4624]: I1008 16:30:05.480199 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a4d4483-9136-4f2d-8f08-c1c1fd36fe57" path="/var/lib/kubelet/pods/4a4d4483-9136-4f2d-8f08-c1c1fd36fe57/volumes" Oct 08 16:30:24 crc kubenswrapper[4624]: I1008 16:30:24.739752 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k8l4j"] Oct 08 16:30:24 crc kubenswrapper[4624]: E1008 16:30:24.740914 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea647041-60ce-41ca-a1b2-872205b6f242" containerName="collect-profiles" Oct 08 16:30:24 crc kubenswrapper[4624]: I1008 16:30:24.740940 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea647041-60ce-41ca-a1b2-872205b6f242" containerName="collect-profiles" Oct 08 16:30:24 crc kubenswrapper[4624]: I1008 16:30:24.741265 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea647041-60ce-41ca-a1b2-872205b6f242" containerName="collect-profiles" Oct 08 16:30:24 crc kubenswrapper[4624]: I1008 16:30:24.743308 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k8l4j" Oct 08 16:30:24 crc kubenswrapper[4624]: I1008 16:30:24.772519 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8l4j"] Oct 08 16:30:24 crc kubenswrapper[4624]: I1008 16:30:24.820483 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6772dc75-57f4-4b68-9701-b5070212b07e-catalog-content\") pod \"redhat-marketplace-k8l4j\" (UID: \"6772dc75-57f4-4b68-9701-b5070212b07e\") " pod="openshift-marketplace/redhat-marketplace-k8l4j" Oct 08 16:30:24 crc kubenswrapper[4624]: I1008 16:30:24.820634 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6772dc75-57f4-4b68-9701-b5070212b07e-utilities\") pod \"redhat-marketplace-k8l4j\" (UID: \"6772dc75-57f4-4b68-9701-b5070212b07e\") " pod="openshift-marketplace/redhat-marketplace-k8l4j" Oct 08 16:30:24 crc kubenswrapper[4624]: I1008 16:30:24.820698 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlxl7\" (UniqueName: \"kubernetes.io/projected/6772dc75-57f4-4b68-9701-b5070212b07e-kube-api-access-wlxl7\") pod \"redhat-marketplace-k8l4j\" (UID: \"6772dc75-57f4-4b68-9701-b5070212b07e\") " pod="openshift-marketplace/redhat-marketplace-k8l4j" Oct 08 16:30:24 crc kubenswrapper[4624]: I1008 16:30:24.922913 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6772dc75-57f4-4b68-9701-b5070212b07e-catalog-content\") pod \"redhat-marketplace-k8l4j\" (UID: \"6772dc75-57f4-4b68-9701-b5070212b07e\") " pod="openshift-marketplace/redhat-marketplace-k8l4j" Oct 08 16:30:24 crc kubenswrapper[4624]: I1008 16:30:24.923396 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6772dc75-57f4-4b68-9701-b5070212b07e-catalog-content\") pod \"redhat-marketplace-k8l4j\" (UID: \"6772dc75-57f4-4b68-9701-b5070212b07e\") " pod="openshift-marketplace/redhat-marketplace-k8l4j" Oct 08 16:30:24 crc kubenswrapper[4624]: I1008 16:30:24.923571 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6772dc75-57f4-4b68-9701-b5070212b07e-utilities\") pod \"redhat-marketplace-k8l4j\" (UID: \"6772dc75-57f4-4b68-9701-b5070212b07e\") " pod="openshift-marketplace/redhat-marketplace-k8l4j" Oct 08 16:30:24 crc kubenswrapper[4624]: I1008 16:30:24.923649 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlxl7\" (UniqueName: \"kubernetes.io/projected/6772dc75-57f4-4b68-9701-b5070212b07e-kube-api-access-wlxl7\") pod \"redhat-marketplace-k8l4j\" (UID: \"6772dc75-57f4-4b68-9701-b5070212b07e\") " pod="openshift-marketplace/redhat-marketplace-k8l4j" Oct 08 16:30:24 crc kubenswrapper[4624]: I1008 16:30:24.924344 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6772dc75-57f4-4b68-9701-b5070212b07e-utilities\") pod \"redhat-marketplace-k8l4j\" (UID: \"6772dc75-57f4-4b68-9701-b5070212b07e\") " pod="openshift-marketplace/redhat-marketplace-k8l4j" Oct 08 16:30:24 crc kubenswrapper[4624]: I1008 16:30:24.947502 4624 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wlxl7\" (UniqueName: \"kubernetes.io/projected/6772dc75-57f4-4b68-9701-b5070212b07e-kube-api-access-wlxl7\") pod \"redhat-marketplace-k8l4j\" (UID: \"6772dc75-57f4-4b68-9701-b5070212b07e\") " pod="openshift-marketplace/redhat-marketplace-k8l4j" Oct 08 16:30:25 crc kubenswrapper[4624]: I1008 16:30:25.076726 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k8l4j" Oct 08 16:30:25 crc kubenswrapper[4624]: I1008 16:30:25.691482 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8l4j"] Oct 08 16:30:26 crc kubenswrapper[4624]: I1008 16:30:26.673430 4624 generic.go:334] "Generic (PLEG): container finished" podID="6772dc75-57f4-4b68-9701-b5070212b07e" containerID="06aa023c2df47ef3b814a37697e659006265d1ed7247114ae684fddf83134f66" exitCode=0 Oct 08 16:30:26 crc kubenswrapper[4624]: I1008 16:30:26.673685 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8l4j" event={"ID":"6772dc75-57f4-4b68-9701-b5070212b07e","Type":"ContainerDied","Data":"06aa023c2df47ef3b814a37697e659006265d1ed7247114ae684fddf83134f66"} Oct 08 16:30:26 crc kubenswrapper[4624]: I1008 16:30:26.673955 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8l4j" event={"ID":"6772dc75-57f4-4b68-9701-b5070212b07e","Type":"ContainerStarted","Data":"5e1c7a0581a7add085c331b342df43783013105bcf774f5d4477da9f292c9721"} Oct 08 16:30:28 crc kubenswrapper[4624]: I1008 16:30:28.696946 4624 generic.go:334] "Generic (PLEG): container finished" podID="6772dc75-57f4-4b68-9701-b5070212b07e" containerID="6c0848002718f295612ab92838407250861a90127b7955432b95825258c9465f" exitCode=0 Oct 08 16:30:28 crc kubenswrapper[4624]: I1008 16:30:28.697025 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8l4j" event={"ID":"6772dc75-57f4-4b68-9701-b5070212b07e","Type":"ContainerDied","Data":"6c0848002718f295612ab92838407250861a90127b7955432b95825258c9465f"} Oct 08 16:30:30 crc kubenswrapper[4624]: I1008 16:30:30.729057 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8l4j" event={"ID":"6772dc75-57f4-4b68-9701-b5070212b07e","Type":"ContainerStarted","Data":"8128f2ab74daa2eba62b311c6b1ca148ca91a211bd4d16f877d364eed82850c2"} Oct 08 16:30:30 crc kubenswrapper[4624]: I1008 16:30:30.756374 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k8l4j" podStartSLOduration=3.783403202 podStartE2EDuration="6.756350663s" podCreationTimestamp="2025-10-08 16:30:24 +0000 UTC" firstStartedPulling="2025-10-08 16:30:26.676755374 +0000 UTC m=+7651.827690451" lastFinishedPulling="2025-10-08 16:30:29.649702835 +0000 UTC m=+7654.800637912" observedRunningTime="2025-10-08 16:30:30.754419144 +0000 UTC m=+7655.905354221" watchObservedRunningTime="2025-10-08 16:30:30.756350663 +0000 UTC m=+7655.907285740" Oct 08 16:30:35 crc kubenswrapper[4624]: I1008 16:30:35.076937 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k8l4j" Oct 08 16:30:35 crc kubenswrapper[4624]: I1008 16:30:35.078752 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k8l4j" Oct 08 16:30:35 crc kubenswrapper[4624]: I1008 16:30:35.164535 4624 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k8l4j" Oct 08 16:30:35 crc kubenswrapper[4624]: I1008 16:30:35.834978 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k8l4j" Oct 08 16:30:35 crc kubenswrapper[4624]: I1008 16:30:35.900354 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8l4j"] Oct 08 16:30:37 crc kubenswrapper[4624]: I1008 16:30:37.794783 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k8l4j" podUID="6772dc75-57f4-4b68-9701-b5070212b07e" containerName="registry-server" containerID="cri-o://8128f2ab74daa2eba62b311c6b1ca148ca91a211bd4d16f877d364eed82850c2" gracePeriod=2 Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.326115 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k8l4j" Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.445545 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6772dc75-57f4-4b68-9701-b5070212b07e-utilities\") pod \"6772dc75-57f4-4b68-9701-b5070212b07e\" (UID: \"6772dc75-57f4-4b68-9701-b5070212b07e\") " Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.446082 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlxl7\" (UniqueName: \"kubernetes.io/projected/6772dc75-57f4-4b68-9701-b5070212b07e-kube-api-access-wlxl7\") pod \"6772dc75-57f4-4b68-9701-b5070212b07e\" (UID: \"6772dc75-57f4-4b68-9701-b5070212b07e\") " Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.446149 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6772dc75-57f4-4b68-9701-b5070212b07e-catalog-content\") pod \"6772dc75-57f4-4b68-9701-b5070212b07e\" (UID: \"6772dc75-57f4-4b68-9701-b5070212b07e\") " Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.447198 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6772dc75-57f4-4b68-9701-b5070212b07e-utilities" (OuterVolumeSpecName: "utilities") pod "6772dc75-57f4-4b68-9701-b5070212b07e" (UID: "6772dc75-57f4-4b68-9701-b5070212b07e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.454387 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6772dc75-57f4-4b68-9701-b5070212b07e-kube-api-access-wlxl7" (OuterVolumeSpecName: "kube-api-access-wlxl7") pod "6772dc75-57f4-4b68-9701-b5070212b07e" (UID: "6772dc75-57f4-4b68-9701-b5070212b07e"). InnerVolumeSpecName "kube-api-access-wlxl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.462628 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6772dc75-57f4-4b68-9701-b5070212b07e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6772dc75-57f4-4b68-9701-b5070212b07e" (UID: "6772dc75-57f4-4b68-9701-b5070212b07e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.552802 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6772dc75-57f4-4b68-9701-b5070212b07e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.552886 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6772dc75-57f4-4b68-9701-b5070212b07e-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.552902 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlxl7\" (UniqueName: \"kubernetes.io/projected/6772dc75-57f4-4b68-9701-b5070212b07e-kube-api-access-wlxl7\") on node \"crc\" DevicePath \"\"" Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.806796 4624 generic.go:334] "Generic (PLEG): container finished" podID="6772dc75-57f4-4b68-9701-b5070212b07e" containerID="8128f2ab74daa2eba62b311c6b1ca148ca91a211bd4d16f877d364eed82850c2" exitCode=0 Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.806860 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8l4j" event={"ID":"6772dc75-57f4-4b68-9701-b5070212b07e","Type":"ContainerDied","Data":"8128f2ab74daa2eba62b311c6b1ca148ca91a211bd4d16f877d364eed82850c2"} Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.806924 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k8l4j" Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.806947 4624 scope.go:117] "RemoveContainer" containerID="8128f2ab74daa2eba62b311c6b1ca148ca91a211bd4d16f877d364eed82850c2" Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.806933 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8l4j" event={"ID":"6772dc75-57f4-4b68-9701-b5070212b07e","Type":"ContainerDied","Data":"5e1c7a0581a7add085c331b342df43783013105bcf774f5d4477da9f292c9721"} Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.836670 4624 scope.go:117] "RemoveContainer" containerID="6c0848002718f295612ab92838407250861a90127b7955432b95825258c9465f" Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.873965 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8l4j"] Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.883505 4624 scope.go:117] "RemoveContainer" containerID="06aa023c2df47ef3b814a37697e659006265d1ed7247114ae684fddf83134f66" Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.883985 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8l4j"] Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.928201 4624 scope.go:117] "RemoveContainer" containerID="8128f2ab74daa2eba62b311c6b1ca148ca91a211bd4d16f877d364eed82850c2" Oct 08 16:30:38 crc kubenswrapper[4624]: E1008 16:30:38.928724 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8128f2ab74daa2eba62b311c6b1ca148ca91a211bd4d16f877d364eed82850c2\": container with ID starting with 8128f2ab74daa2eba62b311c6b1ca148ca91a211bd4d16f877d364eed82850c2 not found: ID does not exist" containerID="8128f2ab74daa2eba62b311c6b1ca148ca91a211bd4d16f877d364eed82850c2" Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.928760 4624 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8128f2ab74daa2eba62b311c6b1ca148ca91a211bd4d16f877d364eed82850c2"} err="failed to get container status \"8128f2ab74daa2eba62b311c6b1ca148ca91a211bd4d16f877d364eed82850c2\": rpc error: code = NotFound desc = could not find container \"8128f2ab74daa2eba62b311c6b1ca148ca91a211bd4d16f877d364eed82850c2\": container with ID starting with 8128f2ab74daa2eba62b311c6b1ca148ca91a211bd4d16f877d364eed82850c2 not found: ID does not exist"
Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.928788 4624 scope.go:117] "RemoveContainer" containerID="6c0848002718f295612ab92838407250861a90127b7955432b95825258c9465f"
Oct 08 16:30:38 crc kubenswrapper[4624]: E1008 16:30:38.929377 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c0848002718f295612ab92838407250861a90127b7955432b95825258c9465f\": container with ID starting with 6c0848002718f295612ab92838407250861a90127b7955432b95825258c9465f not found: ID does not exist" containerID="6c0848002718f295612ab92838407250861a90127b7955432b95825258c9465f"
Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.929406 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c0848002718f295612ab92838407250861a90127b7955432b95825258c9465f"} err="failed to get container status \"6c0848002718f295612ab92838407250861a90127b7955432b95825258c9465f\": rpc error: code = NotFound desc = could not find container \"6c0848002718f295612ab92838407250861a90127b7955432b95825258c9465f\": container with ID starting with 6c0848002718f295612ab92838407250861a90127b7955432b95825258c9465f not found: ID does not exist"
Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.929421 4624 scope.go:117] "RemoveContainer" containerID="06aa023c2df47ef3b814a37697e659006265d1ed7247114ae684fddf83134f66"
Oct 08 16:30:38 crc kubenswrapper[4624]: E1008 16:30:38.929798 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06aa023c2df47ef3b814a37697e659006265d1ed7247114ae684fddf83134f66\": container with ID starting with 06aa023c2df47ef3b814a37697e659006265d1ed7247114ae684fddf83134f66 not found: ID does not exist" containerID="06aa023c2df47ef3b814a37697e659006265d1ed7247114ae684fddf83134f66"
Oct 08 16:30:38 crc kubenswrapper[4624]: I1008 16:30:38.929823 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06aa023c2df47ef3b814a37697e659006265d1ed7247114ae684fddf83134f66"} err="failed to get container status \"06aa023c2df47ef3b814a37697e659006265d1ed7247114ae684fddf83134f66\": rpc error: code = NotFound desc = could not find container \"06aa023c2df47ef3b814a37697e659006265d1ed7247114ae684fddf83134f66\": container with ID starting with 06aa023c2df47ef3b814a37697e659006265d1ed7247114ae684fddf83134f66 not found: ID does not exist"
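The E "ContainerStatus from runtime service failed ... NotFound" entries above are a benign race rather than a real failure: the first RemoveContainer pass already deleted the three containers, so the retried deletes at 16:30:38.92 can no longer stat them over CRI, and the kubelet still converges on the desired state. The pattern that keeps such deletes idempotent is to treat gRPC NotFound as success, roughly as in this sketch of ours (`remove` stands in for the CRI RemoveContainer call):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeIfPresent deletes a container idempotently: a CRI NotFound
    // means it is already gone, which is the outcome we wanted anyway.
    func removeIfPresent(remove func(id string) error, id string) error {
        err := remove(id)
        if status.Code(err) == codes.NotFound {
            return nil
        }
        return err
    }

    func main() {
        gone := func(id string) error {
            return status.Error(codes.NotFound, "could not find container "+id)
        }
        fmt.Println(removeIfPresent(gone, "8128f2ab")) // <nil>
    }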
Oct 08 16:30:39 crc kubenswrapper[4624]: I1008 16:30:39.477323 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6772dc75-57f4-4b68-9701-b5070212b07e" path="/var/lib/kubelet/pods/6772dc75-57f4-4b68-9701-b5070212b07e/volumes"
Oct 08 16:30:46 crc kubenswrapper[4624]: I1008 16:30:46.299188 4624 scope.go:117] "RemoveContainer" containerID="fc861269816710172b0e784fd14e3cc9c4f869fbab5aa0c029f0f5a42e7224d0"
Oct 08 16:32:00 crc kubenswrapper[4624]: I1008 16:32:00.076758 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 16:32:00 crc kubenswrapper[4624]: I1008 16:32:00.077297 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 16:32:30 crc kubenswrapper[4624]: I1008 16:32:30.076259 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 16:32:30 crc kubenswrapper[4624]: I1008 16:32:30.077171 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 16:33:00 crc kubenswrapper[4624]: I1008 16:33:00.076613 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 16:33:00 crc kubenswrapper[4624]: I1008 16:33:00.077429 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 16:33:00 crc kubenswrapper[4624]: I1008 16:33:00.077493 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8"
Oct 08 16:33:00 crc kubenswrapper[4624]: I1008 16:33:00.078331 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 08 16:33:00 crc kubenswrapper[4624]: I1008 16:33:00.078398 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c" gracePeriod=600
Oct 08 16:33:00 crc kubenswrapper[4624]: E1008 16:33:00.209843 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
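This is the standard liveness-kill path: three consecutive failed GETs against http://127.0.0.1:8798/health, spaced 30s apart (16:32:00, 16:32:30, 16:33:00), mark the container unhealthy, and the kubelet then kills it with the pod's termination grace period (gracePeriod=600). Expressed as a corev1 probe spec it would look roughly like the sketch below; the period and threshold are read off the log timestamps, not taken from the actual machine-config-daemon manifest, so treat the numbers as inferred:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        liveness := corev1.Probe{
            // k8s.io/api v0.22+; older releases call this field Handler.
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Host: "127.0.0.1",
                    Path: "/health",
                    Port: intstr.FromInt(8798),
                },
            },
            PeriodSeconds:    30, // failures logged every 30s
            FailureThreshold: 3,  // third failure at 16:33:00 triggers the kill
        }
        fmt.Printf("%+v\n", liveness)
    }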
pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:33:01 crc kubenswrapper[4624]: I1008 16:33:01.170058 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c" exitCode=0 Oct 08 16:33:01 crc kubenswrapper[4624]: I1008 16:33:01.170161 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c"} Oct 08 16:33:01 crc kubenswrapper[4624]: I1008 16:33:01.170381 4624 scope.go:117] "RemoveContainer" containerID="9f7d21693ac4502c9409d8eb4e0331e0ba5bbf34cb00428101e46796819924a0" Oct 08 16:33:01 crc kubenswrapper[4624]: I1008 16:33:01.171127 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c" Oct 08 16:33:01 crc kubenswrapper[4624]: E1008 16:33:01.171385 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:33:14 crc kubenswrapper[4624]: I1008 16:33:14.465839 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c" Oct 08 16:33:14 crc kubenswrapper[4624]: E1008 16:33:14.467669 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:33:27 crc kubenswrapper[4624]: I1008 16:33:27.467909 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c" Oct 08 16:33:27 crc kubenswrapper[4624]: E1008 16:33:27.468973 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:33:41 crc kubenswrapper[4624]: I1008 16:33:41.467158 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c" Oct 08 16:33:41 crc kubenswrapper[4624]: E1008 16:33:41.467894 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" 
podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:33:46 crc kubenswrapper[4624]: I1008 16:33:46.433916 4624 scope.go:117] "RemoveContainer" containerID="e2802dd17d3c3ff9144735275b15c4e71bad5c40f01d69cf8a7707514d2bd3c5" Oct 08 16:33:46 crc kubenswrapper[4624]: I1008 16:33:46.464575 4624 scope.go:117] "RemoveContainer" containerID="dde0a12e5875ad2a0f7ebacc32149ae26cdfc7d345ec16115c4249e4eaf6eb17" Oct 08 16:33:55 crc kubenswrapper[4624]: I1008 16:33:55.475143 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c" Oct 08 16:33:55 crc kubenswrapper[4624]: E1008 16:33:55.476009 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.130107 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kz79z"] Oct 08 16:33:59 crc kubenswrapper[4624]: E1008 16:33:59.131264 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6772dc75-57f4-4b68-9701-b5070212b07e" containerName="extract-utilities" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.131282 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6772dc75-57f4-4b68-9701-b5070212b07e" containerName="extract-utilities" Oct 08 16:33:59 crc kubenswrapper[4624]: E1008 16:33:59.131321 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6772dc75-57f4-4b68-9701-b5070212b07e" containerName="registry-server" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.131330 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6772dc75-57f4-4b68-9701-b5070212b07e" containerName="registry-server" Oct 08 16:33:59 crc kubenswrapper[4624]: E1008 16:33:59.131356 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6772dc75-57f4-4b68-9701-b5070212b07e" containerName="extract-content" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.131398 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6772dc75-57f4-4b68-9701-b5070212b07e" containerName="extract-content" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.131701 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="6772dc75-57f4-4b68-9701-b5070212b07e" containerName="registry-server" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.133591 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kz79z" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.141740 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kz79z"] Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.290093 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0959a9f4-9f00-48bb-81d5-86bd5cb97a19-utilities\") pod \"certified-operators-kz79z\" (UID: \"0959a9f4-9f00-48bb-81d5-86bd5cb97a19\") " pod="openshift-marketplace/certified-operators-kz79z" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.290199 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7rgd\" (UniqueName: \"kubernetes.io/projected/0959a9f4-9f00-48bb-81d5-86bd5cb97a19-kube-api-access-r7rgd\") pod \"certified-operators-kz79z\" (UID: \"0959a9f4-9f00-48bb-81d5-86bd5cb97a19\") " pod="openshift-marketplace/certified-operators-kz79z" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.290419 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0959a9f4-9f00-48bb-81d5-86bd5cb97a19-catalog-content\") pod \"certified-operators-kz79z\" (UID: \"0959a9f4-9f00-48bb-81d5-86bd5cb97a19\") " pod="openshift-marketplace/certified-operators-kz79z" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.322910 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xtwc2"] Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.326870 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xtwc2" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.342076 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xtwc2"] Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.392660 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0959a9f4-9f00-48bb-81d5-86bd5cb97a19-utilities\") pod \"certified-operators-kz79z\" (UID: \"0959a9f4-9f00-48bb-81d5-86bd5cb97a19\") " pod="openshift-marketplace/certified-operators-kz79z" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.393110 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7rgd\" (UniqueName: \"kubernetes.io/projected/0959a9f4-9f00-48bb-81d5-86bd5cb97a19-kube-api-access-r7rgd\") pod \"certified-operators-kz79z\" (UID: \"0959a9f4-9f00-48bb-81d5-86bd5cb97a19\") " pod="openshift-marketplace/certified-operators-kz79z" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.393259 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0959a9f4-9f00-48bb-81d5-86bd5cb97a19-catalog-content\") pod \"certified-operators-kz79z\" (UID: \"0959a9f4-9f00-48bb-81d5-86bd5cb97a19\") " pod="openshift-marketplace/certified-operators-kz79z" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.393859 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0959a9f4-9f00-48bb-81d5-86bd5cb97a19-utilities\") pod \"certified-operators-kz79z\" (UID: \"0959a9f4-9f00-48bb-81d5-86bd5cb97a19\") " pod="openshift-marketplace/certified-operators-kz79z" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.394087 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0959a9f4-9f00-48bb-81d5-86bd5cb97a19-catalog-content\") pod \"certified-operators-kz79z\" (UID: \"0959a9f4-9f00-48bb-81d5-86bd5cb97a19\") " pod="openshift-marketplace/certified-operators-kz79z" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.418928 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7rgd\" (UniqueName: \"kubernetes.io/projected/0959a9f4-9f00-48bb-81d5-86bd5cb97a19-kube-api-access-r7rgd\") pod \"certified-operators-kz79z\" (UID: \"0959a9f4-9f00-48bb-81d5-86bd5cb97a19\") " pod="openshift-marketplace/certified-operators-kz79z" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.452368 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kz79z" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.495134 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f059ac6c-9f9b-4b3a-b522-bf9896129870-catalog-content\") pod \"community-operators-xtwc2\" (UID: \"f059ac6c-9f9b-4b3a-b522-bf9896129870\") " pod="openshift-marketplace/community-operators-xtwc2" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.495451 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxrmp\" (UniqueName: \"kubernetes.io/projected/f059ac6c-9f9b-4b3a-b522-bf9896129870-kube-api-access-hxrmp\") pod \"community-operators-xtwc2\" (UID: \"f059ac6c-9f9b-4b3a-b522-bf9896129870\") " pod="openshift-marketplace/community-operators-xtwc2" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.495555 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f059ac6c-9f9b-4b3a-b522-bf9896129870-utilities\") pod \"community-operators-xtwc2\" (UID: \"f059ac6c-9f9b-4b3a-b522-bf9896129870\") " pod="openshift-marketplace/community-operators-xtwc2" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.597584 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f059ac6c-9f9b-4b3a-b522-bf9896129870-utilities\") pod \"community-operators-xtwc2\" (UID: \"f059ac6c-9f9b-4b3a-b522-bf9896129870\") " pod="openshift-marketplace/community-operators-xtwc2" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.597731 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f059ac6c-9f9b-4b3a-b522-bf9896129870-catalog-content\") pod \"community-operators-xtwc2\" (UID: \"f059ac6c-9f9b-4b3a-b522-bf9896129870\") " pod="openshift-marketplace/community-operators-xtwc2" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.597757 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxrmp\" (UniqueName: \"kubernetes.io/projected/f059ac6c-9f9b-4b3a-b522-bf9896129870-kube-api-access-hxrmp\") pod \"community-operators-xtwc2\" (UID: \"f059ac6c-9f9b-4b3a-b522-bf9896129870\") " pod="openshift-marketplace/community-operators-xtwc2" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.598793 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f059ac6c-9f9b-4b3a-b522-bf9896129870-utilities\") pod \"community-operators-xtwc2\" (UID: \"f059ac6c-9f9b-4b3a-b522-bf9896129870\") " pod="openshift-marketplace/community-operators-xtwc2" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.599079 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f059ac6c-9f9b-4b3a-b522-bf9896129870-catalog-content\") pod \"community-operators-xtwc2\" (UID: \"f059ac6c-9f9b-4b3a-b522-bf9896129870\") " pod="openshift-marketplace/community-operators-xtwc2" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.626300 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxrmp\" (UniqueName: \"kubernetes.io/projected/f059ac6c-9f9b-4b3a-b522-bf9896129870-kube-api-access-hxrmp\") pod 
\"community-operators-xtwc2\" (UID: \"f059ac6c-9f9b-4b3a-b522-bf9896129870\") " pod="openshift-marketplace/community-operators-xtwc2" Oct 08 16:33:59 crc kubenswrapper[4624]: I1008 16:33:59.648257 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xtwc2" Oct 08 16:34:00 crc kubenswrapper[4624]: I1008 16:34:00.326371 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kz79z"] Oct 08 16:34:00 crc kubenswrapper[4624]: I1008 16:34:00.517398 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xtwc2"] Oct 08 16:34:00 crc kubenswrapper[4624]: I1008 16:34:00.797801 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtwc2" event={"ID":"f059ac6c-9f9b-4b3a-b522-bf9896129870","Type":"ContainerStarted","Data":"2407cf3192128f9bca9fcc85384d7132845ce846bfac83e6e8c4808a45eabc68"} Oct 08 16:34:00 crc kubenswrapper[4624]: I1008 16:34:00.798916 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz79z" event={"ID":"0959a9f4-9f00-48bb-81d5-86bd5cb97a19","Type":"ContainerStarted","Data":"569dd763864b493939072a733a16b6fa4c3c9816cf8c6fb2495118366c760ed2"} Oct 08 16:34:01 crc kubenswrapper[4624]: I1008 16:34:01.811710 4624 generic.go:334] "Generic (PLEG): container finished" podID="0959a9f4-9f00-48bb-81d5-86bd5cb97a19" containerID="ad841fd83e9a5e90ae760dde9fc429dc44cf50524875bcc2594e66e2a07b11c6" exitCode=0 Oct 08 16:34:01 crc kubenswrapper[4624]: I1008 16:34:01.811844 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz79z" event={"ID":"0959a9f4-9f00-48bb-81d5-86bd5cb97a19","Type":"ContainerDied","Data":"ad841fd83e9a5e90ae760dde9fc429dc44cf50524875bcc2594e66e2a07b11c6"} Oct 08 16:34:01 crc kubenswrapper[4624]: I1008 16:34:01.814068 4624 generic.go:334] "Generic (PLEG): container finished" podID="f059ac6c-9f9b-4b3a-b522-bf9896129870" containerID="daa59a40941d15afa4479783ebe2fff837719e3b92330d19f49d5678232986ce" exitCode=0 Oct 08 16:34:01 crc kubenswrapper[4624]: I1008 16:34:01.814110 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtwc2" event={"ID":"f059ac6c-9f9b-4b3a-b522-bf9896129870","Type":"ContainerDied","Data":"daa59a40941d15afa4479783ebe2fff837719e3b92330d19f49d5678232986ce"} Oct 08 16:34:01 crc kubenswrapper[4624]: I1008 16:34:01.815432 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 16:34:03 crc kubenswrapper[4624]: I1008 16:34:03.835225 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtwc2" event={"ID":"f059ac6c-9f9b-4b3a-b522-bf9896129870","Type":"ContainerStarted","Data":"dd02ccaaed46e1b4f214490ca612b35af2d695c6b422fc403b31c449ef3ab8fc"} Oct 08 16:34:03 crc kubenswrapper[4624]: I1008 16:34:03.838268 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz79z" event={"ID":"0959a9f4-9f00-48bb-81d5-86bd5cb97a19","Type":"ContainerStarted","Data":"2b771688ba261e0e60a3eec342c37e13b537eeb37ed9887cf58e1b9e9ba3c2e2"} Oct 08 16:34:06 crc kubenswrapper[4624]: I1008 16:34:06.466506 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c" Oct 08 16:34:06 crc kubenswrapper[4624]: E1008 16:34:06.467170 4624 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:34:06 crc kubenswrapper[4624]: I1008 16:34:06.890850 4624 generic.go:334] "Generic (PLEG): container finished" podID="0959a9f4-9f00-48bb-81d5-86bd5cb97a19" containerID="2b771688ba261e0e60a3eec342c37e13b537eeb37ed9887cf58e1b9e9ba3c2e2" exitCode=0 Oct 08 16:34:06 crc kubenswrapper[4624]: I1008 16:34:06.890914 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz79z" event={"ID":"0959a9f4-9f00-48bb-81d5-86bd5cb97a19","Type":"ContainerDied","Data":"2b771688ba261e0e60a3eec342c37e13b537eeb37ed9887cf58e1b9e9ba3c2e2"} Oct 08 16:34:06 crc kubenswrapper[4624]: I1008 16:34:06.894716 4624 generic.go:334] "Generic (PLEG): container finished" podID="f059ac6c-9f9b-4b3a-b522-bf9896129870" containerID="dd02ccaaed46e1b4f214490ca612b35af2d695c6b422fc403b31c449ef3ab8fc" exitCode=0 Oct 08 16:34:06 crc kubenswrapper[4624]: I1008 16:34:06.894944 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtwc2" event={"ID":"f059ac6c-9f9b-4b3a-b522-bf9896129870","Type":"ContainerDied","Data":"dd02ccaaed46e1b4f214490ca612b35af2d695c6b422fc403b31c449ef3ab8fc"} Oct 08 16:34:07 crc kubenswrapper[4624]: I1008 16:34:07.910315 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtwc2" event={"ID":"f059ac6c-9f9b-4b3a-b522-bf9896129870","Type":"ContainerStarted","Data":"37cd8e7872f1b3f01a03e2fd844121cf76bff3ab95b76331ccbe25b01c2888ab"} Oct 08 16:34:07 crc kubenswrapper[4624]: I1008 16:34:07.913219 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz79z" event={"ID":"0959a9f4-9f00-48bb-81d5-86bd5cb97a19","Type":"ContainerStarted","Data":"be1b765f7b6d31364601b013f00f698a071ff31ea2c8dbff7517e8488d05c4bc"} Oct 08 16:34:07 crc kubenswrapper[4624]: I1008 16:34:07.938740 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xtwc2" podStartSLOduration=3.37680152 podStartE2EDuration="8.938716525s" podCreationTimestamp="2025-10-08 16:33:59 +0000 UTC" firstStartedPulling="2025-10-08 16:34:01.816257557 +0000 UTC m=+7866.967192644" lastFinishedPulling="2025-10-08 16:34:07.378172572 +0000 UTC m=+7872.529107649" observedRunningTime="2025-10-08 16:34:07.93813073 +0000 UTC m=+7873.089065807" watchObservedRunningTime="2025-10-08 16:34:07.938716525 +0000 UTC m=+7873.089651602" Oct 08 16:34:07 crc kubenswrapper[4624]: I1008 16:34:07.965545 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kz79z" podStartSLOduration=3.478964422 podStartE2EDuration="8.96552051s" podCreationTimestamp="2025-10-08 16:33:59 +0000 UTC" firstStartedPulling="2025-10-08 16:34:01.81519223 +0000 UTC m=+7866.966127307" lastFinishedPulling="2025-10-08 16:34:07.301748318 +0000 UTC m=+7872.452683395" observedRunningTime="2025-10-08 16:34:07.959313541 +0000 UTC m=+7873.110248618" watchObservedRunningTime="2025-10-08 16:34:07.96552051 +0000 UTC m=+7873.116455587" Oct 08 16:34:09 crc kubenswrapper[4624]: I1008 16:34:09.452649 4624 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kz79z" Oct 08 16:34:09 crc kubenswrapper[4624]: I1008 16:34:09.453051 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kz79z" Oct 08 16:34:09 crc kubenswrapper[4624]: I1008 16:34:09.649484 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xtwc2" Oct 08 16:34:09 crc kubenswrapper[4624]: I1008 16:34:09.650491 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xtwc2" Oct 08 16:34:10 crc kubenswrapper[4624]: I1008 16:34:10.505315 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kz79z" podUID="0959a9f4-9f00-48bb-81d5-86bd5cb97a19" containerName="registry-server" probeResult="failure" output=< Oct 08 16:34:10 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 16:34:10 crc kubenswrapper[4624]: > Oct 08 16:34:10 crc kubenswrapper[4624]: I1008 16:34:10.699989 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-xtwc2" podUID="f059ac6c-9f9b-4b3a-b522-bf9896129870" containerName="registry-server" probeResult="failure" output=< Oct 08 16:34:10 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 16:34:10 crc kubenswrapper[4624]: > Oct 08 16:34:19 crc kubenswrapper[4624]: I1008 16:34:19.700433 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xtwc2" Oct 08 16:34:19 crc kubenswrapper[4624]: I1008 16:34:19.758528 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xtwc2" Oct 08 16:34:19 crc kubenswrapper[4624]: I1008 16:34:19.942080 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xtwc2"] Oct 08 16:34:20 crc kubenswrapper[4624]: I1008 16:34:20.512761 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kz79z" podUID="0959a9f4-9f00-48bb-81d5-86bd5cb97a19" containerName="registry-server" probeResult="failure" output=< Oct 08 16:34:20 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 16:34:20 crc kubenswrapper[4624]: > Oct 08 16:34:21 crc kubenswrapper[4624]: I1008 16:34:21.044049 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xtwc2" podUID="f059ac6c-9f9b-4b3a-b522-bf9896129870" containerName="registry-server" containerID="cri-o://37cd8e7872f1b3f01a03e2fd844121cf76bff3ab95b76331ccbe25b01c2888ab" gracePeriod=2 Oct 08 16:34:21 crc kubenswrapper[4624]: I1008 16:34:21.465768 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c" Oct 08 16:34:21 crc kubenswrapper[4624]: E1008 16:34:21.466461 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 
Oct 08 16:34:19 crc kubenswrapper[4624]: I1008 16:34:19.700433 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xtwc2"
Oct 08 16:34:19 crc kubenswrapper[4624]: I1008 16:34:19.758528 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xtwc2"
Oct 08 16:34:19 crc kubenswrapper[4624]: I1008 16:34:19.942080 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xtwc2"]
Oct 08 16:34:20 crc kubenswrapper[4624]: I1008 16:34:20.512761 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kz79z" podUID="0959a9f4-9f00-48bb-81d5-86bd5cb97a19" containerName="registry-server" probeResult="failure" output=<
Oct 08 16:34:20 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 16:34:20 crc kubenswrapper[4624]: >
Oct 08 16:34:21 crc kubenswrapper[4624]: I1008 16:34:21.044049 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xtwc2" podUID="f059ac6c-9f9b-4b3a-b522-bf9896129870" containerName="registry-server" containerID="cri-o://37cd8e7872f1b3f01a03e2fd844121cf76bff3ab95b76331ccbe25b01c2888ab" gracePeriod=2
Oct 08 16:34:21 crc kubenswrapper[4624]: I1008 16:34:21.465768 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c"
Oct 08 16:34:21 crc kubenswrapper[4624]: E1008 16:34:21.466461 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:34:21 crc kubenswrapper[4624]: I1008 16:34:21.782312 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xtwc2"
Oct 08 16:34:21 crc kubenswrapper[4624]: I1008 16:34:21.910794 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f059ac6c-9f9b-4b3a-b522-bf9896129870-catalog-content\") pod \"f059ac6c-9f9b-4b3a-b522-bf9896129870\" (UID: \"f059ac6c-9f9b-4b3a-b522-bf9896129870\") "
Oct 08 16:34:21 crc kubenswrapper[4624]: I1008 16:34:21.910852 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f059ac6c-9f9b-4b3a-b522-bf9896129870-utilities\") pod \"f059ac6c-9f9b-4b3a-b522-bf9896129870\" (UID: \"f059ac6c-9f9b-4b3a-b522-bf9896129870\") "
Oct 08 16:34:21 crc kubenswrapper[4624]: I1008 16:34:21.910988 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxrmp\" (UniqueName: \"kubernetes.io/projected/f059ac6c-9f9b-4b3a-b522-bf9896129870-kube-api-access-hxrmp\") pod \"f059ac6c-9f9b-4b3a-b522-bf9896129870\" (UID: \"f059ac6c-9f9b-4b3a-b522-bf9896129870\") "
Oct 08 16:34:21 crc kubenswrapper[4624]: I1008 16:34:21.912078 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f059ac6c-9f9b-4b3a-b522-bf9896129870-utilities" (OuterVolumeSpecName: "utilities") pod "f059ac6c-9f9b-4b3a-b522-bf9896129870" (UID: "f059ac6c-9f9b-4b3a-b522-bf9896129870"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 16:34:21 crc kubenswrapper[4624]: I1008 16:34:21.932892 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f059ac6c-9f9b-4b3a-b522-bf9896129870-kube-api-access-hxrmp" (OuterVolumeSpecName: "kube-api-access-hxrmp") pod "f059ac6c-9f9b-4b3a-b522-bf9896129870" (UID: "f059ac6c-9f9b-4b3a-b522-bf9896129870"). InnerVolumeSpecName "kube-api-access-hxrmp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 16:34:21 crc kubenswrapper[4624]: I1008 16:34:21.966541 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f059ac6c-9f9b-4b3a-b522-bf9896129870-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f059ac6c-9f9b-4b3a-b522-bf9896129870" (UID: "f059ac6c-9f9b-4b3a-b522-bf9896129870"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 16:34:22 crc kubenswrapper[4624]: I1008 16:34:22.014115 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxrmp\" (UniqueName: \"kubernetes.io/projected/f059ac6c-9f9b-4b3a-b522-bf9896129870-kube-api-access-hxrmp\") on node \"crc\" DevicePath \"\""
Oct 08 16:34:22 crc kubenswrapper[4624]: I1008 16:34:22.014412 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f059ac6c-9f9b-4b3a-b522-bf9896129870-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 16:34:22 crc kubenswrapper[4624]: I1008 16:34:22.014515 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f059ac6c-9f9b-4b3a-b522-bf9896129870-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 16:34:22 crc kubenswrapper[4624]: I1008 16:34:22.057037 4624 generic.go:334] "Generic (PLEG): container finished" podID="f059ac6c-9f9b-4b3a-b522-bf9896129870" containerID="37cd8e7872f1b3f01a03e2fd844121cf76bff3ab95b76331ccbe25b01c2888ab" exitCode=0
Oct 08 16:34:22 crc kubenswrapper[4624]: I1008 16:34:22.057253 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtwc2" event={"ID":"f059ac6c-9f9b-4b3a-b522-bf9896129870","Type":"ContainerDied","Data":"37cd8e7872f1b3f01a03e2fd844121cf76bff3ab95b76331ccbe25b01c2888ab"}
Oct 08 16:34:22 crc kubenswrapper[4624]: I1008 16:34:22.057814 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtwc2" event={"ID":"f059ac6c-9f9b-4b3a-b522-bf9896129870","Type":"ContainerDied","Data":"2407cf3192128f9bca9fcc85384d7132845ce846bfac83e6e8c4808a45eabc68"}
Oct 08 16:34:22 crc kubenswrapper[4624]: I1008 16:34:22.057850 4624 scope.go:117] "RemoveContainer" containerID="37cd8e7872f1b3f01a03e2fd844121cf76bff3ab95b76331ccbe25b01c2888ab"
Oct 08 16:34:22 crc kubenswrapper[4624]: I1008 16:34:22.057366 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xtwc2"
Oct 08 16:34:22 crc kubenswrapper[4624]: I1008 16:34:22.084159 4624 scope.go:117] "RemoveContainer" containerID="dd02ccaaed46e1b4f214490ca612b35af2d695c6b422fc403b31c449ef3ab8fc"
Oct 08 16:34:22 crc kubenswrapper[4624]: I1008 16:34:22.110038 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xtwc2"]
Oct 08 16:34:22 crc kubenswrapper[4624]: I1008 16:34:22.117510 4624 scope.go:117] "RemoveContainer" containerID="daa59a40941d15afa4479783ebe2fff837719e3b92330d19f49d5678232986ce"
Oct 08 16:34:22 crc kubenswrapper[4624]: I1008 16:34:22.127502 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xtwc2"]
Oct 08 16:34:22 crc kubenswrapper[4624]: I1008 16:34:22.172758 4624 scope.go:117] "RemoveContainer" containerID="37cd8e7872f1b3f01a03e2fd844121cf76bff3ab95b76331ccbe25b01c2888ab"
Oct 08 16:34:22 crc kubenswrapper[4624]: E1008 16:34:22.173376 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37cd8e7872f1b3f01a03e2fd844121cf76bff3ab95b76331ccbe25b01c2888ab\": container with ID starting with 37cd8e7872f1b3f01a03e2fd844121cf76bff3ab95b76331ccbe25b01c2888ab not found: ID does not exist" containerID="37cd8e7872f1b3f01a03e2fd844121cf76bff3ab95b76331ccbe25b01c2888ab"
Oct 08 16:34:22 crc kubenswrapper[4624]: I1008 16:34:22.173415 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37cd8e7872f1b3f01a03e2fd844121cf76bff3ab95b76331ccbe25b01c2888ab"} err="failed to get container status \"37cd8e7872f1b3f01a03e2fd844121cf76bff3ab95b76331ccbe25b01c2888ab\": rpc error: code = NotFound desc = could not find container \"37cd8e7872f1b3f01a03e2fd844121cf76bff3ab95b76331ccbe25b01c2888ab\": container with ID starting with 37cd8e7872f1b3f01a03e2fd844121cf76bff3ab95b76331ccbe25b01c2888ab not found: ID does not exist"
Oct 08 16:34:22 crc kubenswrapper[4624]: I1008 16:34:22.173463 4624 scope.go:117] "RemoveContainer" containerID="dd02ccaaed46e1b4f214490ca612b35af2d695c6b422fc403b31c449ef3ab8fc"
Oct 08 16:34:22 crc kubenswrapper[4624]: E1008 16:34:22.174105 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd02ccaaed46e1b4f214490ca612b35af2d695c6b422fc403b31c449ef3ab8fc\": container with ID starting with dd02ccaaed46e1b4f214490ca612b35af2d695c6b422fc403b31c449ef3ab8fc not found: ID does not exist" containerID="dd02ccaaed46e1b4f214490ca612b35af2d695c6b422fc403b31c449ef3ab8fc"
Oct 08 16:34:22 crc kubenswrapper[4624]: I1008 16:34:22.174161 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd02ccaaed46e1b4f214490ca612b35af2d695c6b422fc403b31c449ef3ab8fc"} err="failed to get container status \"dd02ccaaed46e1b4f214490ca612b35af2d695c6b422fc403b31c449ef3ab8fc\": rpc error: code = NotFound desc = could not find container \"dd02ccaaed46e1b4f214490ca612b35af2d695c6b422fc403b31c449ef3ab8fc\": container with ID starting with dd02ccaaed46e1b4f214490ca612b35af2d695c6b422fc403b31c449ef3ab8fc not found: ID does not exist"
Oct 08 16:34:22 crc kubenswrapper[4624]: I1008 16:34:22.174198 4624 scope.go:117] "RemoveContainer" containerID="daa59a40941d15afa4479783ebe2fff837719e3b92330d19f49d5678232986ce"
Oct 08 16:34:22 crc kubenswrapper[4624]: E1008 16:34:22.174695 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daa59a40941d15afa4479783ebe2fff837719e3b92330d19f49d5678232986ce\": container with ID starting with daa59a40941d15afa4479783ebe2fff837719e3b92330d19f49d5678232986ce not found: ID does not exist" containerID="daa59a40941d15afa4479783ebe2fff837719e3b92330d19f49d5678232986ce"
Oct 08 16:34:22 crc kubenswrapper[4624]: I1008 16:34:22.174728 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daa59a40941d15afa4479783ebe2fff837719e3b92330d19f49d5678232986ce"} err="failed to get container status \"daa59a40941d15afa4479783ebe2fff837719e3b92330d19f49d5678232986ce\": rpc error: code = NotFound desc = could not find container \"daa59a40941d15afa4479783ebe2fff837719e3b92330d19f49d5678232986ce\": container with ID starting with daa59a40941d15afa4479783ebe2fff837719e3b92330d19f49d5678232986ce not found: ID does not exist"
Oct 08 16:34:23 crc kubenswrapper[4624]: I1008 16:34:23.478425 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f059ac6c-9f9b-4b3a-b522-bf9896129870" path="/var/lib/kubelet/pods/f059ac6c-9f9b-4b3a-b522-bf9896129870/volumes"
Oct 08 16:34:29 crc kubenswrapper[4624]: I1008 16:34:29.525106 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kz79z"
Oct 08 16:34:29 crc kubenswrapper[4624]: I1008 16:34:29.575460 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kz79z"
Oct 08 16:34:30 crc kubenswrapper[4624]: I1008 16:34:30.327340 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kz79z"]
Oct 08 16:34:31 crc kubenswrapper[4624]: I1008 16:34:31.140456 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kz79z" podUID="0959a9f4-9f00-48bb-81d5-86bd5cb97a19" containerName="registry-server" containerID="cri-o://be1b765f7b6d31364601b013f00f698a071ff31ea2c8dbff7517e8488d05c4bc" gracePeriod=2
Oct 08 16:34:31 crc kubenswrapper[4624]: I1008 16:34:31.646479 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kz79z"
Oct 08 16:34:31 crc kubenswrapper[4624]: I1008 16:34:31.745229 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7rgd\" (UniqueName: \"kubernetes.io/projected/0959a9f4-9f00-48bb-81d5-86bd5cb97a19-kube-api-access-r7rgd\") pod \"0959a9f4-9f00-48bb-81d5-86bd5cb97a19\" (UID: \"0959a9f4-9f00-48bb-81d5-86bd5cb97a19\") "
Oct 08 16:34:31 crc kubenswrapper[4624]: I1008 16:34:31.745547 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0959a9f4-9f00-48bb-81d5-86bd5cb97a19-utilities\") pod \"0959a9f4-9f00-48bb-81d5-86bd5cb97a19\" (UID: \"0959a9f4-9f00-48bb-81d5-86bd5cb97a19\") "
Oct 08 16:34:31 crc kubenswrapper[4624]: I1008 16:34:31.745858 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0959a9f4-9f00-48bb-81d5-86bd5cb97a19-catalog-content\") pod \"0959a9f4-9f00-48bb-81d5-86bd5cb97a19\" (UID: \"0959a9f4-9f00-48bb-81d5-86bd5cb97a19\") "
Oct 08 16:34:31 crc kubenswrapper[4624]: I1008 16:34:31.746503 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0959a9f4-9f00-48bb-81d5-86bd5cb97a19-utilities" (OuterVolumeSpecName: "utilities") pod "0959a9f4-9f00-48bb-81d5-86bd5cb97a19" (UID: "0959a9f4-9f00-48bb-81d5-86bd5cb97a19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 16:34:31 crc kubenswrapper[4624]: I1008 16:34:31.755791 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0959a9f4-9f00-48bb-81d5-86bd5cb97a19-kube-api-access-r7rgd" (OuterVolumeSpecName: "kube-api-access-r7rgd") pod "0959a9f4-9f00-48bb-81d5-86bd5cb97a19" (UID: "0959a9f4-9f00-48bb-81d5-86bd5cb97a19"). InnerVolumeSpecName "kube-api-access-r7rgd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 16:34:31 crc kubenswrapper[4624]: I1008 16:34:31.824510 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0959a9f4-9f00-48bb-81d5-86bd5cb97a19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0959a9f4-9f00-48bb-81d5-86bd5cb97a19" (UID: "0959a9f4-9f00-48bb-81d5-86bd5cb97a19"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 16:34:31 crc kubenswrapper[4624]: I1008 16:34:31.848515 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0959a9f4-9f00-48bb-81d5-86bd5cb97a19-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 16:34:31 crc kubenswrapper[4624]: I1008 16:34:31.848578 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7rgd\" (UniqueName: \"kubernetes.io/projected/0959a9f4-9f00-48bb-81d5-86bd5cb97a19-kube-api-access-r7rgd\") on node \"crc\" DevicePath \"\""
Oct 08 16:34:31 crc kubenswrapper[4624]: I1008 16:34:31.848595 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0959a9f4-9f00-48bb-81d5-86bd5cb97a19-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 16:34:32 crc kubenswrapper[4624]: I1008 16:34:32.151568 4624 generic.go:334] "Generic (PLEG): container finished" podID="0959a9f4-9f00-48bb-81d5-86bd5cb97a19" containerID="be1b765f7b6d31364601b013f00f698a071ff31ea2c8dbff7517e8488d05c4bc" exitCode=0
Oct 08 16:34:32 crc kubenswrapper[4624]: I1008 16:34:32.151622 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz79z" event={"ID":"0959a9f4-9f00-48bb-81d5-86bd5cb97a19","Type":"ContainerDied","Data":"be1b765f7b6d31364601b013f00f698a071ff31ea2c8dbff7517e8488d05c4bc"}
Oct 08 16:34:32 crc kubenswrapper[4624]: I1008 16:34:32.151656 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kz79z"
Oct 08 16:34:32 crc kubenswrapper[4624]: I1008 16:34:32.151678 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz79z" event={"ID":"0959a9f4-9f00-48bb-81d5-86bd5cb97a19","Type":"ContainerDied","Data":"569dd763864b493939072a733a16b6fa4c3c9816cf8c6fb2495118366c760ed2"}
Oct 08 16:34:32 crc kubenswrapper[4624]: I1008 16:34:32.151699 4624 scope.go:117] "RemoveContainer" containerID="be1b765f7b6d31364601b013f00f698a071ff31ea2c8dbff7517e8488d05c4bc"
Oct 08 16:34:32 crc kubenswrapper[4624]: I1008 16:34:32.183315 4624 scope.go:117] "RemoveContainer" containerID="2b771688ba261e0e60a3eec342c37e13b537eeb37ed9887cf58e1b9e9ba3c2e2"
Oct 08 16:34:32 crc kubenswrapper[4624]: I1008 16:34:32.196005 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kz79z"]
Oct 08 16:34:32 crc kubenswrapper[4624]: I1008 16:34:32.203270 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kz79z"]
Oct 08 16:34:32 crc kubenswrapper[4624]: I1008 16:34:32.211241 4624 scope.go:117] "RemoveContainer" containerID="ad841fd83e9a5e90ae760dde9fc429dc44cf50524875bcc2594e66e2a07b11c6"
Oct 08 16:34:32 crc kubenswrapper[4624]: I1008 16:34:32.267775 4624 scope.go:117] "RemoveContainer" containerID="be1b765f7b6d31364601b013f00f698a071ff31ea2c8dbff7517e8488d05c4bc"
Oct 08 16:34:32 crc kubenswrapper[4624]: E1008 16:34:32.268312 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be1b765f7b6d31364601b013f00f698a071ff31ea2c8dbff7517e8488d05c4bc\": container with ID starting with be1b765f7b6d31364601b013f00f698a071ff31ea2c8dbff7517e8488d05c4bc not found: ID does not exist" containerID="be1b765f7b6d31364601b013f00f698a071ff31ea2c8dbff7517e8488d05c4bc"
Oct 08 16:34:32 crc kubenswrapper[4624]: I1008 16:34:32.268355 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be1b765f7b6d31364601b013f00f698a071ff31ea2c8dbff7517e8488d05c4bc"} err="failed to get container status \"be1b765f7b6d31364601b013f00f698a071ff31ea2c8dbff7517e8488d05c4bc\": rpc error: code = NotFound desc = could not find container \"be1b765f7b6d31364601b013f00f698a071ff31ea2c8dbff7517e8488d05c4bc\": container with ID starting with be1b765f7b6d31364601b013f00f698a071ff31ea2c8dbff7517e8488d05c4bc not found: ID does not exist"
Oct 08 16:34:32 crc kubenswrapper[4624]: I1008 16:34:32.268382 4624 scope.go:117] "RemoveContainer" containerID="2b771688ba261e0e60a3eec342c37e13b537eeb37ed9887cf58e1b9e9ba3c2e2"
Oct 08 16:34:32 crc kubenswrapper[4624]: E1008 16:34:32.268775 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b771688ba261e0e60a3eec342c37e13b537eeb37ed9887cf58e1b9e9ba3c2e2\": container with ID starting with 2b771688ba261e0e60a3eec342c37e13b537eeb37ed9887cf58e1b9e9ba3c2e2 not found: ID does not exist" containerID="2b771688ba261e0e60a3eec342c37e13b537eeb37ed9887cf58e1b9e9ba3c2e2"
Oct 08 16:34:32 crc kubenswrapper[4624]: I1008 16:34:32.268806 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b771688ba261e0e60a3eec342c37e13b537eeb37ed9887cf58e1b9e9ba3c2e2"} err="failed to get container status \"2b771688ba261e0e60a3eec342c37e13b537eeb37ed9887cf58e1b9e9ba3c2e2\": rpc error: code = NotFound desc = could not find container \"2b771688ba261e0e60a3eec342c37e13b537eeb37ed9887cf58e1b9e9ba3c2e2\": container with ID starting with 2b771688ba261e0e60a3eec342c37e13b537eeb37ed9887cf58e1b9e9ba3c2e2 not found: ID does not exist"
Oct 08 16:34:32 crc kubenswrapper[4624]: I1008 16:34:32.268881 4624 scope.go:117] "RemoveContainer" containerID="ad841fd83e9a5e90ae760dde9fc429dc44cf50524875bcc2594e66e2a07b11c6"
Oct 08 16:34:32 crc kubenswrapper[4624]: E1008 16:34:32.269262 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad841fd83e9a5e90ae760dde9fc429dc44cf50524875bcc2594e66e2a07b11c6\": container with ID starting with ad841fd83e9a5e90ae760dde9fc429dc44cf50524875bcc2594e66e2a07b11c6 not found: ID does not exist" containerID="ad841fd83e9a5e90ae760dde9fc429dc44cf50524875bcc2594e66e2a07b11c6"
Oct 08 16:34:32 crc kubenswrapper[4624]: I1008 16:34:32.269289 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad841fd83e9a5e90ae760dde9fc429dc44cf50524875bcc2594e66e2a07b11c6"} err="failed to get container status \"ad841fd83e9a5e90ae760dde9fc429dc44cf50524875bcc2594e66e2a07b11c6\": rpc error: code = NotFound desc = could not find container \"ad841fd83e9a5e90ae760dde9fc429dc44cf50524875bcc2594e66e2a07b11c6\": container with ID starting with ad841fd83e9a5e90ae760dde9fc429dc44cf50524875bcc2594e66e2a07b11c6 not found: ID does not exist"
Oct 08 16:34:33 crc kubenswrapper[4624]: I1008 16:34:33.478932 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0959a9f4-9f00-48bb-81d5-86bd5cb97a19" path="/var/lib/kubelet/pods/0959a9f4-9f00-48bb-81d5-86bd5cb97a19/volumes"
Oct 08 16:34:34 crc kubenswrapper[4624]: I1008 16:34:34.467101 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c"
Oct 08 16:34:34 crc kubenswrapper[4624]: E1008 16:34:34.467517 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:34:46 crc kubenswrapper[4624]: I1008 16:34:46.563394 4624 scope.go:117] "RemoveContainer" containerID="0e3170800f55ef139ecd6f5c00c8b7d184c735606c3f0a7795694aa07c1efb04"
Oct 08 16:34:47 crc kubenswrapper[4624]: I1008 16:34:47.466379 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c"
Oct 08 16:34:47 crc kubenswrapper[4624]: E1008 16:34:47.467304 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:34:58 crc kubenswrapper[4624]: I1008 16:34:58.466143 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c"
Oct 08 16:34:58 crc kubenswrapper[4624]: E1008 16:34:58.467008 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:35:13 crc kubenswrapper[4624]: I1008 16:35:13.466576 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c"
Oct 08 16:35:13 crc kubenswrapper[4624]: E1008 16:35:13.467859 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:35:28 crc kubenswrapper[4624]: I1008 16:35:28.465650 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c"
Oct 08 16:35:28 crc kubenswrapper[4624]: E1008 16:35:28.466416 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:35:41 crc kubenswrapper[4624]: I1008 16:35:41.466494 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c"
Oct 08 16:35:41 crc kubenswrapper[4624]: E1008 16:35:41.467348 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:35:55 crc kubenswrapper[4624]: I1008 16:35:55.477212 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c"
Oct 08 16:35:55 crc kubenswrapper[4624]: E1008 16:35:55.478049 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:36:09 crc kubenswrapper[4624]: I1008 16:36:09.467727 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c"
Oct 08 16:36:09 crc kubenswrapper[4624]: E1008 16:36:09.468588 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:36:23 crc kubenswrapper[4624]: I1008 16:36:23.468781 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c"
Oct 08 16:36:23 crc kubenswrapper[4624]: E1008 16:36:23.470058 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:36:37 crc kubenswrapper[4624]: I1008 16:36:37.466364 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c"
Oct 08 16:36:37 crc kubenswrapper[4624]: E1008 16:36:37.467068 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:36:38 crc kubenswrapper[4624]: E1008 16:36:38.190318 4624 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.154:59934->38.102.83.154:39627: write tcp 38.102.83.154:59934->38.102.83.154:39627: write: connection reset by peer
Oct 08 16:36:52 crc kubenswrapper[4624]: I1008 16:36:52.466246 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c"
Oct 08 16:36:52 crc kubenswrapper[4624]: E1008 16:36:52.468440 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:37:05 crc kubenswrapper[4624]: I1008 16:37:05.466425 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c"
Oct 08 16:37:05 crc kubenswrapper[4624]: E1008 16:37:05.472498 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:37:17 crc kubenswrapper[4624]: I1008 16:37:17.466098 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c"
Oct 08 16:37:17 crc kubenswrapper[4624]: E1008 16:37:17.467051 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:37:28 crc kubenswrapper[4624]: I1008 16:37:28.466210 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c"
Oct 08 16:37:28 crc kubenswrapper[4624]: E1008 16:37:28.467031 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:37:40 crc kubenswrapper[4624]: I1008 16:37:40.466229 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c"
Oct 08 16:37:40 crc kubenswrapper[4624]: E1008 16:37:40.467065 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:37:53 crc kubenswrapper[4624]: I1008 16:37:53.466060 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c"
Oct 08 16:37:53 crc kubenswrapper[4624]: E1008 16:37:53.466780 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:37:56 crc kubenswrapper[4624]: I1008 16:37:56.660429 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qqx22"]
Oct 08 16:37:56 crc kubenswrapper[4624]: E1008 16:37:56.661479 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0959a9f4-9f00-48bb-81d5-86bd5cb97a19" containerName="registry-server"
Oct 08 16:37:56 crc kubenswrapper[4624]: I1008 16:37:56.661500 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="0959a9f4-9f00-48bb-81d5-86bd5cb97a19" containerName="registry-server"
Oct 08 16:37:56 crc kubenswrapper[4624]: E1008 16:37:56.661539 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f059ac6c-9f9b-4b3a-b522-bf9896129870" containerName="extract-content"
Oct 08 16:37:56 crc kubenswrapper[4624]: I1008 16:37:56.661547 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f059ac6c-9f9b-4b3a-b522-bf9896129870" containerName="extract-content"
Oct 08 16:37:56 crc kubenswrapper[4624]: E1008 16:37:56.661558 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f059ac6c-9f9b-4b3a-b522-bf9896129870" containerName="registry-server"
Oct 08 16:37:56 crc kubenswrapper[4624]: I1008 16:37:56.661566 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f059ac6c-9f9b-4b3a-b522-bf9896129870" containerName="registry-server"
Oct 08 16:37:56 crc kubenswrapper[4624]: E1008 16:37:56.661588 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0959a9f4-9f00-48bb-81d5-86bd5cb97a19" containerName="extract-utilities"
Oct 08 16:37:56 crc kubenswrapper[4624]: I1008 16:37:56.661597 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="0959a9f4-9f00-48bb-81d5-86bd5cb97a19" containerName="extract-utilities"
Oct 08 16:37:56 crc kubenswrapper[4624]: E1008 16:37:56.661619 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0959a9f4-9f00-48bb-81d5-86bd5cb97a19" containerName="extract-content"
Oct 08 16:37:56 crc kubenswrapper[4624]: I1008 16:37:56.661627 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="0959a9f4-9f00-48bb-81d5-86bd5cb97a19" containerName="extract-content"
Oct 08 16:37:56 crc kubenswrapper[4624]: E1008 16:37:56.661659 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f059ac6c-9f9b-4b3a-b522-bf9896129870" containerName="extract-utilities"
Oct 08 16:37:56 crc kubenswrapper[4624]: I1008 16:37:56.661667 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f059ac6c-9f9b-4b3a-b522-bf9896129870" containerName="extract-utilities"
Oct 08 16:37:56 crc kubenswrapper[4624]: I1008 16:37:56.661965 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="0959a9f4-9f00-48bb-81d5-86bd5cb97a19" containerName="registry-server"
Oct 08 16:37:56 crc kubenswrapper[4624]: I1008 16:37:56.662008 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f059ac6c-9f9b-4b3a-b522-bf9896129870" containerName="registry-server"
Oct 08 16:37:56 crc kubenswrapper[4624]: I1008 16:37:56.664203 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qqx22"
Oct 08 16:37:56 crc kubenswrapper[4624]: I1008 16:37:56.686541 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qqx22"]
Oct 08 16:37:56 crc kubenswrapper[4624]: I1008 16:37:56.784463 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050003e1-f549-4462-bda0-835ada699169-catalog-content\") pod \"redhat-operators-qqx22\" (UID: \"050003e1-f549-4462-bda0-835ada699169\") " pod="openshift-marketplace/redhat-operators-qqx22"
Oct 08 16:37:56 crc kubenswrapper[4624]: I1008 16:37:56.784602 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p4cv\" (UniqueName: \"kubernetes.io/projected/050003e1-f549-4462-bda0-835ada699169-kube-api-access-9p4cv\") pod \"redhat-operators-qqx22\" (UID: \"050003e1-f549-4462-bda0-835ada699169\") " pod="openshift-marketplace/redhat-operators-qqx22"
Oct 08 16:37:56 crc kubenswrapper[4624]: I1008 16:37:56.784769 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050003e1-f549-4462-bda0-835ada699169-utilities\") pod \"redhat-operators-qqx22\" (UID: \"050003e1-f549-4462-bda0-835ada699169\") " pod="openshift-marketplace/redhat-operators-qqx22"
Oct 08 16:37:56 crc kubenswrapper[4624]: I1008 16:37:56.887157 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050003e1-f549-4462-bda0-835ada699169-catalog-content\") pod \"redhat-operators-qqx22\" (UID: \"050003e1-f549-4462-bda0-835ada699169\") " pod="openshift-marketplace/redhat-operators-qqx22"
Oct 08 16:37:56 crc kubenswrapper[4624]: I1008 16:37:56.887309 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p4cv\" (UniqueName: \"kubernetes.io/projected/050003e1-f549-4462-bda0-835ada699169-kube-api-access-9p4cv\") pod \"redhat-operators-qqx22\" (UID: \"050003e1-f549-4462-bda0-835ada699169\") " pod="openshift-marketplace/redhat-operators-qqx22"
Oct 08 16:37:56 crc kubenswrapper[4624]: I1008 16:37:56.887371 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050003e1-f549-4462-bda0-835ada699169-utilities\") pod \"redhat-operators-qqx22\" (UID: \"050003e1-f549-4462-bda0-835ada699169\") " pod="openshift-marketplace/redhat-operators-qqx22"
Oct 08 16:37:56 crc kubenswrapper[4624]: I1008 16:37:56.887977 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050003e1-f549-4462-bda0-835ada699169-utilities\") pod \"redhat-operators-qqx22\" (UID: \"050003e1-f549-4462-bda0-835ada699169\") " pod="openshift-marketplace/redhat-operators-qqx22"
Oct 08 16:37:56 crc kubenswrapper[4624]: I1008 16:37:56.887993 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050003e1-f549-4462-bda0-835ada699169-catalog-content\") pod \"redhat-operators-qqx22\" (UID: \"050003e1-f549-4462-bda0-835ada699169\") " pod="openshift-marketplace/redhat-operators-qqx22"
Oct 08 16:37:56 crc kubenswrapper[4624]: I1008 16:37:56.910713 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p4cv\" (UniqueName: \"kubernetes.io/projected/050003e1-f549-4462-bda0-835ada699169-kube-api-access-9p4cv\") pod \"redhat-operators-qqx22\" (UID: \"050003e1-f549-4462-bda0-835ada699169\") " pod="openshift-marketplace/redhat-operators-qqx22"
Oct 08 16:37:56 crc kubenswrapper[4624]: I1008 16:37:56.988374 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qqx22"
Oct 08 16:37:57 crc kubenswrapper[4624]: I1008 16:37:57.615806 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qqx22"]
Oct 08 16:37:58 crc kubenswrapper[4624]: I1008 16:37:58.173416 4624 generic.go:334] "Generic (PLEG): container finished" podID="050003e1-f549-4462-bda0-835ada699169" containerID="14a61364fb2ba48c9688122bc8d498696f09dc79c613ef60967bdb7851628a45" exitCode=0
Oct 08 16:37:58 crc kubenswrapper[4624]: I1008 16:37:58.173539 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqx22" event={"ID":"050003e1-f549-4462-bda0-835ada699169","Type":"ContainerDied","Data":"14a61364fb2ba48c9688122bc8d498696f09dc79c613ef60967bdb7851628a45"}
Oct 08 16:37:58 crc kubenswrapper[4624]: I1008 16:37:58.174835 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqx22" event={"ID":"050003e1-f549-4462-bda0-835ada699169","Type":"ContainerStarted","Data":"38931b9d90fafdefbe06e3201a7876bc946722e88eb925b691dd0dccd6691b1d"}
Oct 08 16:38:00 crc kubenswrapper[4624]: I1008 16:38:00.198739 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqx22" event={"ID":"050003e1-f549-4462-bda0-835ada699169","Type":"ContainerStarted","Data":"4f0edae303d4f3acffbe9f3c6a0990c4b66cac3e7a015931df89df40dd48e9cb"}
Oct 08 16:38:04 crc kubenswrapper[4624]: I1008 16:38:04.238279 4624 generic.go:334] "Generic (PLEG): container finished" podID="050003e1-f549-4462-bda0-835ada699169" containerID="4f0edae303d4f3acffbe9f3c6a0990c4b66cac3e7a015931df89df40dd48e9cb" exitCode=0
Oct 08 16:38:04 crc kubenswrapper[4624]: I1008 16:38:04.238526 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqx22" event={"ID":"050003e1-f549-4462-bda0-835ada699169","Type":"ContainerDied","Data":"4f0edae303d4f3acffbe9f3c6a0990c4b66cac3e7a015931df89df40dd48e9cb"}
Oct 08 16:38:05 crc kubenswrapper[4624]: I1008 16:38:05.251701 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqx22" event={"ID":"050003e1-f549-4462-bda0-835ada699169","Type":"ContainerStarted","Data":"000a8199f836fa0752053f98054bccfeffe76f0a66b8ce811cac0da89113e3d9"}
Oct 08 16:38:05 crc kubenswrapper[4624]: I1008 16:38:05.277960 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qqx22" podStartSLOduration=2.758206628 podStartE2EDuration="9.277936059s" podCreationTimestamp="2025-10-08 16:37:56 +0000 UTC" firstStartedPulling="2025-10-08 16:37:58.175973676 +0000 UTC m=+8103.326908753" lastFinishedPulling="2025-10-08 16:38:04.695703107 +0000 UTC m=+8109.846638184" observedRunningTime="2025-10-08 16:38:05.273874276 +0000 UTC m=+8110.424809353" watchObservedRunningTime="2025-10-08 16:38:05.277936059 +0000 UTC m=+8110.428871126"
Oct 08 16:38:06 crc kubenswrapper[4624]: I1008 16:38:06.988522 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qqx22"
Oct 08 16:38:06 crc kubenswrapper[4624]: I1008 16:38:06.988880 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qqx22"
Oct 08 16:38:07 crc kubenswrapper[4624]: I1008 16:38:07.467032 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c"
Oct 08 16:38:08 crc kubenswrapper[4624]: I1008 16:38:08.046206 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qqx22" podUID="050003e1-f549-4462-bda0-835ada699169" containerName="registry-server" probeResult="failure" output=<
Oct 08 16:38:08 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 16:38:08 crc kubenswrapper[4624]: >
Oct 08 16:38:08 crc kubenswrapper[4624]: I1008 16:38:08.291137 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"60127c07f3261acbce8eb1c000b2714eb51105b1ff1e82dccc0745d7d70c12b3"}
Oct 08 16:38:18 crc kubenswrapper[4624]: I1008 16:38:18.042574 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qqx22" podUID="050003e1-f549-4462-bda0-835ada699169" containerName="registry-server" probeResult="failure" output=<
Oct 08 16:38:18 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 16:38:18 crc kubenswrapper[4624]: >
Oct 08 16:38:28 crc kubenswrapper[4624]: I1008 16:38:28.043516 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qqx22" podUID="050003e1-f549-4462-bda0-835ada699169" containerName="registry-server" probeResult="failure" output=<
Oct 08 16:38:28 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 16:38:28 crc kubenswrapper[4624]: >
Oct 08 16:38:37 crc kubenswrapper[4624]: I1008 16:38:37.039796 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qqx22"
Oct 08 16:38:37 crc kubenswrapper[4624]: I1008 16:38:37.101041 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qqx22"
Oct 08 16:38:37 crc kubenswrapper[4624]: I1008 16:38:37.284795 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qqx22"]
Oct 08 16:38:38 crc kubenswrapper[4624]: I1008 16:38:38.643159 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qqx22" podUID="050003e1-f549-4462-bda0-835ada699169" containerName="registry-server" containerID="cri-o://000a8199f836fa0752053f98054bccfeffe76f0a66b8ce811cac0da89113e3d9" gracePeriod=2
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.185779 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qqx22"
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.367198 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050003e1-f549-4462-bda0-835ada699169-catalog-content\") pod \"050003e1-f549-4462-bda0-835ada699169\" (UID: \"050003e1-f549-4462-bda0-835ada699169\") "
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.367580 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050003e1-f549-4462-bda0-835ada699169-utilities\") pod \"050003e1-f549-4462-bda0-835ada699169\" (UID: \"050003e1-f549-4462-bda0-835ada699169\") "
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.367730 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p4cv\" (UniqueName: \"kubernetes.io/projected/050003e1-f549-4462-bda0-835ada699169-kube-api-access-9p4cv\") pod \"050003e1-f549-4462-bda0-835ada699169\" (UID: \"050003e1-f549-4462-bda0-835ada699169\") "
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.368260 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/050003e1-f549-4462-bda0-835ada699169-utilities" (OuterVolumeSpecName: "utilities") pod "050003e1-f549-4462-bda0-835ada699169" (UID: "050003e1-f549-4462-bda0-835ada699169"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.368383 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050003e1-f549-4462-bda0-835ada699169-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.374906 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050003e1-f549-4462-bda0-835ada699169-kube-api-access-9p4cv" (OuterVolumeSpecName: "kube-api-access-9p4cv") pod "050003e1-f549-4462-bda0-835ada699169" (UID: "050003e1-f549-4462-bda0-835ada699169"). InnerVolumeSpecName "kube-api-access-9p4cv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.471984 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p4cv\" (UniqueName: \"kubernetes.io/projected/050003e1-f549-4462-bda0-835ada699169-kube-api-access-9p4cv\") on node \"crc\" DevicePath \"\""
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.489039 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/050003e1-f549-4462-bda0-835ada699169-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "050003e1-f549-4462-bda0-835ada699169" (UID: "050003e1-f549-4462-bda0-835ada699169"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.575585 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050003e1-f549-4462-bda0-835ada699169-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.655889 4624 generic.go:334] "Generic (PLEG): container finished" podID="050003e1-f549-4462-bda0-835ada699169" containerID="000a8199f836fa0752053f98054bccfeffe76f0a66b8ce811cac0da89113e3d9" exitCode=0
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.655954 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqx22" event={"ID":"050003e1-f549-4462-bda0-835ada699169","Type":"ContainerDied","Data":"000a8199f836fa0752053f98054bccfeffe76f0a66b8ce811cac0da89113e3d9"}
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.656007 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qqx22"
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.656050 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqx22" event={"ID":"050003e1-f549-4462-bda0-835ada699169","Type":"ContainerDied","Data":"38931b9d90fafdefbe06e3201a7876bc946722e88eb925b691dd0dccd6691b1d"}
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.656095 4624 scope.go:117] "RemoveContainer" containerID="000a8199f836fa0752053f98054bccfeffe76f0a66b8ce811cac0da89113e3d9"
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.699036 4624 scope.go:117] "RemoveContainer" containerID="4f0edae303d4f3acffbe9f3c6a0990c4b66cac3e7a015931df89df40dd48e9cb"
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.699192 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qqx22"]
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.708052 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qqx22"]
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.721192 4624 scope.go:117] "RemoveContainer" containerID="14a61364fb2ba48c9688122bc8d498696f09dc79c613ef60967bdb7851628a45"
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.783130 4624 scope.go:117] "RemoveContainer" containerID="000a8199f836fa0752053f98054bccfeffe76f0a66b8ce811cac0da89113e3d9"
Oct 08 16:38:39 crc kubenswrapper[4624]: E1008 16:38:39.783872 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"000a8199f836fa0752053f98054bccfeffe76f0a66b8ce811cac0da89113e3d9\": container with ID starting with 000a8199f836fa0752053f98054bccfeffe76f0a66b8ce811cac0da89113e3d9 not found: ID does not exist" containerID="000a8199f836fa0752053f98054bccfeffe76f0a66b8ce811cac0da89113e3d9"
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.783909 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"000a8199f836fa0752053f98054bccfeffe76f0a66b8ce811cac0da89113e3d9"} err="failed to get container status \"000a8199f836fa0752053f98054bccfeffe76f0a66b8ce811cac0da89113e3d9\": rpc error: code = NotFound desc = could not find container \"000a8199f836fa0752053f98054bccfeffe76f0a66b8ce811cac0da89113e3d9\": container with ID starting with 000a8199f836fa0752053f98054bccfeffe76f0a66b8ce811cac0da89113e3d9 not found: ID does not exist"
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.783938 4624 scope.go:117] "RemoveContainer" containerID="4f0edae303d4f3acffbe9f3c6a0990c4b66cac3e7a015931df89df40dd48e9cb"
Oct 08 16:38:39 crc kubenswrapper[4624]: E1008 16:38:39.784383 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f0edae303d4f3acffbe9f3c6a0990c4b66cac3e7a015931df89df40dd48e9cb\": container with ID starting with 4f0edae303d4f3acffbe9f3c6a0990c4b66cac3e7a015931df89df40dd48e9cb not found: ID does not exist" containerID="4f0edae303d4f3acffbe9f3c6a0990c4b66cac3e7a015931df89df40dd48e9cb"
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.784415 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f0edae303d4f3acffbe9f3c6a0990c4b66cac3e7a015931df89df40dd48e9cb"} err="failed to get container status \"4f0edae303d4f3acffbe9f3c6a0990c4b66cac3e7a015931df89df40dd48e9cb\": rpc error: code = NotFound desc = could not find container \"4f0edae303d4f3acffbe9f3c6a0990c4b66cac3e7a015931df89df40dd48e9cb\": container with ID starting with 4f0edae303d4f3acffbe9f3c6a0990c4b66cac3e7a015931df89df40dd48e9cb not found: ID does not exist"
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.784433 4624 scope.go:117] "RemoveContainer" containerID="14a61364fb2ba48c9688122bc8d498696f09dc79c613ef60967bdb7851628a45"
Oct 08 16:38:39 crc kubenswrapper[4624]: E1008 16:38:39.784727 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14a61364fb2ba48c9688122bc8d498696f09dc79c613ef60967bdb7851628a45\": container with ID starting with 14a61364fb2ba48c9688122bc8d498696f09dc79c613ef60967bdb7851628a45 not found: ID does not exist" containerID="14a61364fb2ba48c9688122bc8d498696f09dc79c613ef60967bdb7851628a45"
Oct 08 16:38:39 crc kubenswrapper[4624]: I1008 16:38:39.784752 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14a61364fb2ba48c9688122bc8d498696f09dc79c613ef60967bdb7851628a45"} err="failed to get container status \"14a61364fb2ba48c9688122bc8d498696f09dc79c613ef60967bdb7851628a45\": rpc error: code = NotFound desc = could not find container \"14a61364fb2ba48c9688122bc8d498696f09dc79c613ef60967bdb7851628a45\": container with ID starting with 14a61364fb2ba48c9688122bc8d498696f09dc79c613ef60967bdb7851628a45 not found: ID does not exist"
Oct 08 16:38:41 crc kubenswrapper[4624]: I1008 16:38:41.480944 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="050003e1-f549-4462-bda0-835ada699169" path="/var/lib/kubelet/pods/050003e1-f549-4462-bda0-835ada699169/volumes"
Oct 08 16:40:25 crc kubenswrapper[4624]: E1008 16:40:25.696082 4624 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.154:37822->38.102.83.154:39627: write tcp 38.102.83.154:37822->38.102.83.154:39627: write: broken pipe
Oct 08 16:40:30 crc kubenswrapper[4624]: I1008 16:40:30.076317 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 16:40:30 crc kubenswrapper[4624]: I1008 16:40:30.076963 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 16:41:00 crc kubenswrapper[4624]: I1008 16:41:00.077035 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 16:41:00 crc kubenswrapper[4624]: I1008 16:41:00.077730 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 16:41:30 crc kubenswrapper[4624]: I1008 16:41:30.077025 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 16:41:30 crc kubenswrapper[4624]: I1008 16:41:30.077646 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 16:41:30 crc kubenswrapper[4624]: I1008 16:41:30.077711 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8"
Oct 08 16:41:30 crc kubenswrapper[4624]: I1008 16:41:30.078559 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"60127c07f3261acbce8eb1c000b2714eb51105b1ff1e82dccc0745d7d70c12b3"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 08 16:41:30 crc kubenswrapper[4624]: I1008 16:41:30.078612 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://60127c07f3261acbce8eb1c000b2714eb51105b1ff1e82dccc0745d7d70c12b3" gracePeriod=600
Oct 08 16:41:30 crc kubenswrapper[4624]: I1008 16:41:30.322548 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="60127c07f3261acbce8eb1c000b2714eb51105b1ff1e82dccc0745d7d70c12b3" exitCode=0
Oct 08 16:41:30 crc kubenswrapper[4624]: I1008 16:41:30.322657 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"60127c07f3261acbce8eb1c000b2714eb51105b1ff1e82dccc0745d7d70c12b3"}
Oct 08 16:41:30 crc kubenswrapper[4624]: I1008 16:41:30.323155 4624 scope.go:117] "RemoveContainer" containerID="544355a5e4e144375fbb04fafada5b4ac2392c02582fa1de241579f86e5b553c"
Oct 08 16:41:31 crc kubenswrapper[4624]: I1008 16:41:31.339992 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f"}
Oct 08 16:43:30 crc kubenswrapper[4624]: I1008 16:43:30.076462 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 16:43:30 crc kubenswrapper[4624]: I1008 16:43:30.077017 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 16:44:00 crc kubenswrapper[4624]: I1008 16:44:00.076915 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 16:44:00 crc kubenswrapper[4624]: I1008 16:44:00.077596 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 16:44:30 crc kubenswrapper[4624]: I1008 16:44:30.076270 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 16:44:30 crc kubenswrapper[4624]: I1008 16:44:30.076927 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 16:44:30 crc kubenswrapper[4624]: I1008 16:44:30.076999 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8"
Oct 08 16:44:30 crc kubenswrapper[4624]: I1008 16:44:30.078045 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 08 16:44:30 crc kubenswrapper[4624]: I1008 16:44:30.078122 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f" gracePeriod=600
Oct 08 16:44:30 crc kubenswrapper[4624]: E1008 16:44:30.210793 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:44:31 crc kubenswrapper[4624]: I1008 16:44:31.108152 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f" exitCode=0
Oct 08 16:44:31 crc kubenswrapper[4624]: I1008 16:44:31.108404 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f"}
Oct 08 16:44:31 crc kubenswrapper[4624]: I1008 16:44:31.109317 4624 scope.go:117] "RemoveContainer" containerID="60127c07f3261acbce8eb1c000b2714eb51105b1ff1e82dccc0745d7d70c12b3"
Oct 08 16:44:31 crc kubenswrapper[4624]: I1008 16:44:31.110324 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f"
Oct 08 16:44:31 crc kubenswrapper[4624]: E1008 16:44:31.110861 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:44:39 crc kubenswrapper[4624]: I1008 16:44:39.181051 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bzwbp"]
Oct 08 16:44:39 crc kubenswrapper[4624]: E1008 16:44:39.184083 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050003e1-f549-4462-bda0-835ada699169" containerName="extract-content"
Oct 08 16:44:39 crc kubenswrapper[4624]: I1008 16:44:39.184129 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="050003e1-f549-4462-bda0-835ada699169" containerName="extract-content"
Oct 08 16:44:39 crc kubenswrapper[4624]: E1008 16:44:39.184217 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050003e1-f549-4462-bda0-835ada699169" containerName="extract-utilities"
Oct 08 16:44:39 crc kubenswrapper[4624]: I1008 16:44:39.184232 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="050003e1-f549-4462-bda0-835ada699169" containerName="extract-utilities"
Oct 08 16:44:39 crc kubenswrapper[4624]: E1008 16:44:39.184249 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050003e1-f549-4462-bda0-835ada699169" containerName="registry-server"
Oct 08 16:44:39 crc kubenswrapper[4624]: I1008 16:44:39.184259 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="050003e1-f549-4462-bda0-835ada699169" containerName="registry-server"
Oct 08 16:44:39 crc kubenswrapper[4624]: I1008 16:44:39.184791 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="050003e1-f549-4462-bda0-835ada699169" containerName="registry-server"
Oct 08 16:44:39 crc kubenswrapper[4624]: I1008 16:44:39.187275 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bzwbp"
Oct 08 16:44:39 crc kubenswrapper[4624]: I1008 16:44:39.217919 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bzwbp"]
Oct 08 16:44:39 crc kubenswrapper[4624]: I1008 16:44:39.316356 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51e59956-19ff-4801-b2c7-7fd87ee35c6b-catalog-content\") pod \"community-operators-bzwbp\" (UID: \"51e59956-19ff-4801-b2c7-7fd87ee35c6b\") " pod="openshift-marketplace/community-operators-bzwbp"
Oct 08 16:44:39 crc kubenswrapper[4624]: I1008 16:44:39.316428 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9xqw\" (UniqueName: \"kubernetes.io/projected/51e59956-19ff-4801-b2c7-7fd87ee35c6b-kube-api-access-d9xqw\") pod \"community-operators-bzwbp\" (UID: \"51e59956-19ff-4801-b2c7-7fd87ee35c6b\") " pod="openshift-marketplace/community-operators-bzwbp"
Oct 08 16:44:39 crc kubenswrapper[4624]: I1008 16:44:39.316497 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51e59956-19ff-4801-b2c7-7fd87ee35c6b-utilities\") pod \"community-operators-bzwbp\" (UID: \"51e59956-19ff-4801-b2c7-7fd87ee35c6b\") " pod="openshift-marketplace/community-operators-bzwbp"
Oct 08 16:44:39 crc kubenswrapper[4624]: I1008 16:44:39.417835 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51e59956-19ff-4801-b2c7-7fd87ee35c6b-utilities\") pod \"community-operators-bzwbp\" (UID: \"51e59956-19ff-4801-b2c7-7fd87ee35c6b\") " pod="openshift-marketplace/community-operators-bzwbp"
Oct 08 16:44:39 crc kubenswrapper[4624]: I1008 16:44:39.418029 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51e59956-19ff-4801-b2c7-7fd87ee35c6b-catalog-content\") pod \"community-operators-bzwbp\" (UID: \"51e59956-19ff-4801-b2c7-7fd87ee35c6b\") " pod="openshift-marketplace/community-operators-bzwbp"
Oct 08 16:44:39 crc kubenswrapper[4624]: I1008 16:44:39.418072 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9xqw\" (UniqueName: \"kubernetes.io/projected/51e59956-19ff-4801-b2c7-7fd87ee35c6b-kube-api-access-d9xqw\") pod \"community-operators-bzwbp\" (UID: \"51e59956-19ff-4801-b2c7-7fd87ee35c6b\") " pod="openshift-marketplace/community-operators-bzwbp"
Oct 08 16:44:39 crc kubenswrapper[4624]: I1008 16:44:39.418440 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51e59956-19ff-4801-b2c7-7fd87ee35c6b-utilities\") pod \"community-operators-bzwbp\" (UID: \"51e59956-19ff-4801-b2c7-7fd87ee35c6b\") " pod="openshift-marketplace/community-operators-bzwbp"
Oct 08 16:44:39 crc kubenswrapper[4624]: I1008 16:44:39.418565 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51e59956-19ff-4801-b2c7-7fd87ee35c6b-catalog-content\") pod \"community-operators-bzwbp\" (UID: \"51e59956-19ff-4801-b2c7-7fd87ee35c6b\") " pod="openshift-marketplace/community-operators-bzwbp"
Oct 08 16:44:39 crc kubenswrapper[4624]: I1008 16:44:39.443730 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9xqw\" (UniqueName: \"kubernetes.io/projected/51e59956-19ff-4801-b2c7-7fd87ee35c6b-kube-api-access-d9xqw\") pod \"community-operators-bzwbp\" (UID: \"51e59956-19ff-4801-b2c7-7fd87ee35c6b\") " pod="openshift-marketplace/community-operators-bzwbp"
Oct 08 16:44:39 crc kubenswrapper[4624]: I1008 16:44:39.520570 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bzwbp"
Oct 08 16:44:40 crc kubenswrapper[4624]: I1008 16:44:40.354456 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bzwbp"]
Oct 08 16:44:41 crc kubenswrapper[4624]: I1008 16:44:41.213012 4624 generic.go:334] "Generic (PLEG): container finished" podID="51e59956-19ff-4801-b2c7-7fd87ee35c6b" containerID="51383f07df4085f54a1f8c9b3901e31d65f7b40d68066c02115444bcb424c2cd" exitCode=0
Oct 08 16:44:41 crc kubenswrapper[4624]: I1008 16:44:41.213236 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzwbp" event={"ID":"51e59956-19ff-4801-b2c7-7fd87ee35c6b","Type":"ContainerDied","Data":"51383f07df4085f54a1f8c9b3901e31d65f7b40d68066c02115444bcb424c2cd"}
Oct 08 16:44:41 crc kubenswrapper[4624]: I1008 16:44:41.213269 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzwbp" event={"ID":"51e59956-19ff-4801-b2c7-7fd87ee35c6b","Type":"ContainerStarted","Data":"0030d784cfd0b8619cbd904622a77fa727c45cb0b4e9e6cfe9a718faddc8ae25"}
Oct 08 16:44:41 crc kubenswrapper[4624]: I1008 16:44:41.215074 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 08 16:44:42 crc kubenswrapper[4624]: I1008 16:44:42.239768 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzwbp" event={"ID":"51e59956-19ff-4801-b2c7-7fd87ee35c6b","Type":"ContainerStarted","Data":"2e98bc394627d66f31c7248db4182072a39f169b0c60b3d7bd595538f8f8659f"}
Oct 08 16:44:43 crc kubenswrapper[4624]: I1008 16:44:43.466132 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f"
Oct 08 16:44:43 crc kubenswrapper[4624]: E1008 16:44:43.466773 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:44:44 crc kubenswrapper[4624]: I1008 16:44:44.262712 4624 generic.go:334] "Generic (PLEG): container finished" podID="51e59956-19ff-4801-b2c7-7fd87ee35c6b" containerID="2e98bc394627d66f31c7248db4182072a39f169b0c60b3d7bd595538f8f8659f" exitCode=0
Oct 08 16:44:44 crc kubenswrapper[4624]: I1008 16:44:44.262776 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzwbp" event={"ID":"51e59956-19ff-4801-b2c7-7fd87ee35c6b","Type":"ContainerDied","Data":"2e98bc394627d66f31c7248db4182072a39f169b0c60b3d7bd595538f8f8659f"}
Oct 08 16:44:45 crc kubenswrapper[4624]: I1008 16:44:45.275017 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzwbp" event={"ID":"51e59956-19ff-4801-b2c7-7fd87ee35c6b","Type":"ContainerStarted","Data":"535cba71fc5817a8cfdbc3336b38459598c003c0ff5cf65018e048a4c2bde47f"}
Oct 08 16:44:45 crc kubenswrapper[4624]: I1008 16:44:45.295058 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bzwbp" podStartSLOduration=2.6454953469999998 podStartE2EDuration="6.295031079s" podCreationTimestamp="2025-10-08 16:44:39 +0000 UTC" firstStartedPulling="2025-10-08 16:44:41.214876044 +0000 UTC m=+8506.365811121" lastFinishedPulling="2025-10-08 16:44:44.864411776 +0000 UTC m=+8510.015346853" observedRunningTime="2025-10-08 16:44:45.291766715 +0000 UTC m=+8510.442701812" watchObservedRunningTime="2025-10-08 16:44:45.295031079 +0000 UTC m=+8510.445966166"
Oct 08 16:44:46 crc kubenswrapper[4624]: I1008 16:44:46.560232 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8tvdg"]
Oct 08 16:44:46 crc kubenswrapper[4624]: I1008 16:44:46.563301 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8tvdg"
Oct 08 16:44:46 crc kubenswrapper[4624]: I1008 16:44:46.581211 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8tvdg"]
Oct 08 16:44:46 crc kubenswrapper[4624]: I1008 16:44:46.688656 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d95332bf-df71-4495-a2a0-3c67d6839b08-catalog-content\") pod \"certified-operators-8tvdg\" (UID: \"d95332bf-df71-4495-a2a0-3c67d6839b08\") " pod="openshift-marketplace/certified-operators-8tvdg"
Oct 08 16:44:46 crc kubenswrapper[4624]: I1008 16:44:46.688779 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d95332bf-df71-4495-a2a0-3c67d6839b08-utilities\") pod \"certified-operators-8tvdg\" (UID: \"d95332bf-df71-4495-a2a0-3c67d6839b08\") " pod="openshift-marketplace/certified-operators-8tvdg"
Oct 08 16:44:46 crc kubenswrapper[4624]: I1008 16:44:46.688818 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg4qk\" (UniqueName: \"kubernetes.io/projected/d95332bf-df71-4495-a2a0-3c67d6839b08-kube-api-access-zg4qk\") pod \"certified-operators-8tvdg\" (UID: \"d95332bf-df71-4495-a2a0-3c67d6839b08\") " pod="openshift-marketplace/certified-operators-8tvdg"
Oct 08 16:44:46 crc kubenswrapper[4624]: I1008 16:44:46.792808 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d95332bf-df71-4495-a2a0-3c67d6839b08-catalog-content\") pod \"certified-operators-8tvdg\" (UID: \"d95332bf-df71-4495-a2a0-3c67d6839b08\") " pod="openshift-marketplace/certified-operators-8tvdg"
Oct 08 16:44:46 crc kubenswrapper[4624]: I1008 16:44:46.792902 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d95332bf-df71-4495-a2a0-3c67d6839b08-utilities\") pod \"certified-operators-8tvdg\" (UID: \"d95332bf-df71-4495-a2a0-3c67d6839b08\") " pod="openshift-marketplace/certified-operators-8tvdg"
Oct 08 16:44:46 crc kubenswrapper[4624]: I1008 16:44:46.792942 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg4qk\" (UniqueName: 
\"kubernetes.io/projected/d95332bf-df71-4495-a2a0-3c67d6839b08-kube-api-access-zg4qk\") pod \"certified-operators-8tvdg\" (UID: \"d95332bf-df71-4495-a2a0-3c67d6839b08\") " pod="openshift-marketplace/certified-operators-8tvdg" Oct 08 16:44:46 crc kubenswrapper[4624]: I1008 16:44:46.793869 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d95332bf-df71-4495-a2a0-3c67d6839b08-catalog-content\") pod \"certified-operators-8tvdg\" (UID: \"d95332bf-df71-4495-a2a0-3c67d6839b08\") " pod="openshift-marketplace/certified-operators-8tvdg" Oct 08 16:44:46 crc kubenswrapper[4624]: I1008 16:44:46.794151 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d95332bf-df71-4495-a2a0-3c67d6839b08-utilities\") pod \"certified-operators-8tvdg\" (UID: \"d95332bf-df71-4495-a2a0-3c67d6839b08\") " pod="openshift-marketplace/certified-operators-8tvdg" Oct 08 16:44:46 crc kubenswrapper[4624]: I1008 16:44:46.813737 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg4qk\" (UniqueName: \"kubernetes.io/projected/d95332bf-df71-4495-a2a0-3c67d6839b08-kube-api-access-zg4qk\") pod \"certified-operators-8tvdg\" (UID: \"d95332bf-df71-4495-a2a0-3c67d6839b08\") " pod="openshift-marketplace/certified-operators-8tvdg" Oct 08 16:44:46 crc kubenswrapper[4624]: I1008 16:44:46.912303 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8tvdg" Oct 08 16:44:47 crc kubenswrapper[4624]: I1008 16:44:47.679555 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8tvdg"] Oct 08 16:44:48 crc kubenswrapper[4624]: I1008 16:44:48.335817 4624 generic.go:334] "Generic (PLEG): container finished" podID="d95332bf-df71-4495-a2a0-3c67d6839b08" containerID="53316aec1b5bce0644b9a13e24e30fde0c02b4a3eab0fc2fa39cbf6c326067ed" exitCode=0 Oct 08 16:44:48 crc kubenswrapper[4624]: I1008 16:44:48.335927 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tvdg" event={"ID":"d95332bf-df71-4495-a2a0-3c67d6839b08","Type":"ContainerDied","Data":"53316aec1b5bce0644b9a13e24e30fde0c02b4a3eab0fc2fa39cbf6c326067ed"} Oct 08 16:44:48 crc kubenswrapper[4624]: I1008 16:44:48.336102 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tvdg" event={"ID":"d95332bf-df71-4495-a2a0-3c67d6839b08","Type":"ContainerStarted","Data":"fc572ce36a7df56b80f8f2a563d2a2274515fd452a56f50e0190e1123451b501"} Oct 08 16:44:49 crc kubenswrapper[4624]: I1008 16:44:49.351474 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tvdg" event={"ID":"d95332bf-df71-4495-a2a0-3c67d6839b08","Type":"ContainerStarted","Data":"83659f774ceb461e73ecc5c949d45174a07d4c8af65f8c9620fb3bbd1e8e362d"} Oct 08 16:44:49 crc kubenswrapper[4624]: I1008 16:44:49.520817 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bzwbp" Oct 08 16:44:49 crc kubenswrapper[4624]: I1008 16:44:49.521095 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bzwbp" Oct 08 16:44:50 crc kubenswrapper[4624]: I1008 16:44:50.584900 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-bzwbp" 
podUID="51e59956-19ff-4801-b2c7-7fd87ee35c6b" containerName="registry-server" probeResult="failure" output=< Oct 08 16:44:50 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 16:44:50 crc kubenswrapper[4624]: > Oct 08 16:44:51 crc kubenswrapper[4624]: I1008 16:44:51.372174 4624 generic.go:334] "Generic (PLEG): container finished" podID="d95332bf-df71-4495-a2a0-3c67d6839b08" containerID="83659f774ceb461e73ecc5c949d45174a07d4c8af65f8c9620fb3bbd1e8e362d" exitCode=0 Oct 08 16:44:51 crc kubenswrapper[4624]: I1008 16:44:51.372274 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tvdg" event={"ID":"d95332bf-df71-4495-a2a0-3c67d6839b08","Type":"ContainerDied","Data":"83659f774ceb461e73ecc5c949d45174a07d4c8af65f8c9620fb3bbd1e8e362d"} Oct 08 16:44:52 crc kubenswrapper[4624]: I1008 16:44:52.386895 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tvdg" event={"ID":"d95332bf-df71-4495-a2a0-3c67d6839b08","Type":"ContainerStarted","Data":"adc998cebf9c58f1b7fb656c78d186e33e6e60b427ebad7d6b8a079d050279db"} Oct 08 16:44:52 crc kubenswrapper[4624]: I1008 16:44:52.421887 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8tvdg" podStartSLOduration=2.974771285 podStartE2EDuration="6.42185572s" podCreationTimestamp="2025-10-08 16:44:46 +0000 UTC" firstStartedPulling="2025-10-08 16:44:48.337830086 +0000 UTC m=+8513.488765173" lastFinishedPulling="2025-10-08 16:44:51.784914531 +0000 UTC m=+8516.935849608" observedRunningTime="2025-10-08 16:44:52.41520891 +0000 UTC m=+8517.566143987" watchObservedRunningTime="2025-10-08 16:44:52.42185572 +0000 UTC m=+8517.572790807" Oct 08 16:44:55 crc kubenswrapper[4624]: I1008 16:44:55.475796 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f" Oct 08 16:44:55 crc kubenswrapper[4624]: E1008 16:44:55.476439 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:44:56 crc kubenswrapper[4624]: I1008 16:44:56.912770 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8tvdg" Oct 08 16:44:56 crc kubenswrapper[4624]: I1008 16:44:56.913159 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8tvdg" Oct 08 16:44:57 crc kubenswrapper[4624]: I1008 16:44:57.967941 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8tvdg" podUID="d95332bf-df71-4495-a2a0-3c67d6839b08" containerName="registry-server" probeResult="failure" output=< Oct 08 16:44:57 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 16:44:57 crc kubenswrapper[4624]: > Oct 08 16:44:59 crc kubenswrapper[4624]: I1008 16:44:59.592137 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bzwbp" Oct 08 16:44:59 crc kubenswrapper[4624]: I1008 16:44:59.661017 4624 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bzwbp" Oct 08 16:44:59 crc kubenswrapper[4624]: I1008 16:44:59.842189 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bzwbp"] Oct 08 16:45:00 crc kubenswrapper[4624]: I1008 16:45:00.162838 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc"] Oct 08 16:45:00 crc kubenswrapper[4624]: I1008 16:45:00.164217 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc" Oct 08 16:45:00 crc kubenswrapper[4624]: I1008 16:45:00.180301 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 16:45:00 crc kubenswrapper[4624]: I1008 16:45:00.180508 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 16:45:00 crc kubenswrapper[4624]: I1008 16:45:00.182827 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc"] Oct 08 16:45:00 crc kubenswrapper[4624]: I1008 16:45:00.294203 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d275b8d-58e4-4b0a-a35f-2145c222a141-config-volume\") pod \"collect-profiles-29332365-zthpc\" (UID: \"7d275b8d-58e4-4b0a-a35f-2145c222a141\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc" Oct 08 16:45:00 crc kubenswrapper[4624]: I1008 16:45:00.294595 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d275b8d-58e4-4b0a-a35f-2145c222a141-secret-volume\") pod \"collect-profiles-29332365-zthpc\" (UID: \"7d275b8d-58e4-4b0a-a35f-2145c222a141\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc" Oct 08 16:45:00 crc kubenswrapper[4624]: I1008 16:45:00.294652 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6t5l\" (UniqueName: \"kubernetes.io/projected/7d275b8d-58e4-4b0a-a35f-2145c222a141-kube-api-access-p6t5l\") pod \"collect-profiles-29332365-zthpc\" (UID: \"7d275b8d-58e4-4b0a-a35f-2145c222a141\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc" Oct 08 16:45:00 crc kubenswrapper[4624]: I1008 16:45:00.397410 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d275b8d-58e4-4b0a-a35f-2145c222a141-secret-volume\") pod \"collect-profiles-29332365-zthpc\" (UID: \"7d275b8d-58e4-4b0a-a35f-2145c222a141\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc" Oct 08 16:45:00 crc kubenswrapper[4624]: I1008 16:45:00.397957 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6t5l\" (UniqueName: \"kubernetes.io/projected/7d275b8d-58e4-4b0a-a35f-2145c222a141-kube-api-access-p6t5l\") pod \"collect-profiles-29332365-zthpc\" (UID: \"7d275b8d-58e4-4b0a-a35f-2145c222a141\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc" Oct 08 16:45:00 crc kubenswrapper[4624]: I1008 16:45:00.398454 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d275b8d-58e4-4b0a-a35f-2145c222a141-config-volume\") pod \"collect-profiles-29332365-zthpc\" (UID: \"7d275b8d-58e4-4b0a-a35f-2145c222a141\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc" Oct 08 16:45:00 crc kubenswrapper[4624]: I1008 16:45:00.399885 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d275b8d-58e4-4b0a-a35f-2145c222a141-config-volume\") pod \"collect-profiles-29332365-zthpc\" (UID: \"7d275b8d-58e4-4b0a-a35f-2145c222a141\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc" Oct 08 16:45:00 crc kubenswrapper[4624]: I1008 16:45:00.410330 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d275b8d-58e4-4b0a-a35f-2145c222a141-secret-volume\") pod \"collect-profiles-29332365-zthpc\" (UID: \"7d275b8d-58e4-4b0a-a35f-2145c222a141\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc" Oct 08 16:45:00 crc kubenswrapper[4624]: I1008 16:45:00.414958 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6t5l\" (UniqueName: \"kubernetes.io/projected/7d275b8d-58e4-4b0a-a35f-2145c222a141-kube-api-access-p6t5l\") pod \"collect-profiles-29332365-zthpc\" (UID: \"7d275b8d-58e4-4b0a-a35f-2145c222a141\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc" Oct 08 16:45:00 crc kubenswrapper[4624]: I1008 16:45:00.500847 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc" Oct 08 16:45:01 crc kubenswrapper[4624]: I1008 16:45:01.133845 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc"] Oct 08 16:45:01 crc kubenswrapper[4624]: I1008 16:45:01.477570 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bzwbp" podUID="51e59956-19ff-4801-b2c7-7fd87ee35c6b" containerName="registry-server" containerID="cri-o://535cba71fc5817a8cfdbc3336b38459598c003c0ff5cf65018e048a4c2bde47f" gracePeriod=2 Oct 08 16:45:01 crc kubenswrapper[4624]: I1008 16:45:01.483191 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc" event={"ID":"7d275b8d-58e4-4b0a-a35f-2145c222a141","Type":"ContainerStarted","Data":"005e428b7a0ec765d316e01804ded238a5fda1de952d7ae5d0ab9578e93f1983"} Oct 08 16:45:01 crc kubenswrapper[4624]: I1008 16:45:01.483231 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc" event={"ID":"7d275b8d-58e4-4b0a-a35f-2145c222a141","Type":"ContainerStarted","Data":"61b55d92cb9d1f38f8aa0dfdd478b50910a84890e79dc50aa933e3dd0dcdf378"} Oct 08 16:45:01 crc kubenswrapper[4624]: I1008 16:45:01.517982 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc" podStartSLOduration=1.5179547420000001 podStartE2EDuration="1.517954742s" podCreationTimestamp="2025-10-08 16:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 16:45:01.505071551 +0000 UTC 
m=+8526.656006628" watchObservedRunningTime="2025-10-08 16:45:01.517954742 +0000 UTC m=+8526.668889819" Oct 08 16:45:01 crc kubenswrapper[4624]: I1008 16:45:01.999022 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-59d86bf959-vq2ld" podUID="d7c08e42-5aca-4394-952c-5649ba096a8f" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.143140 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bzwbp" Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.154713 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51e59956-19ff-4801-b2c7-7fd87ee35c6b-catalog-content\") pod \"51e59956-19ff-4801-b2c7-7fd87ee35c6b\" (UID: \"51e59956-19ff-4801-b2c7-7fd87ee35c6b\") " Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.155004 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51e59956-19ff-4801-b2c7-7fd87ee35c6b-utilities\") pod \"51e59956-19ff-4801-b2c7-7fd87ee35c6b\" (UID: \"51e59956-19ff-4801-b2c7-7fd87ee35c6b\") " Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.155036 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9xqw\" (UniqueName: \"kubernetes.io/projected/51e59956-19ff-4801-b2c7-7fd87ee35c6b-kube-api-access-d9xqw\") pod \"51e59956-19ff-4801-b2c7-7fd87ee35c6b\" (UID: \"51e59956-19ff-4801-b2c7-7fd87ee35c6b\") " Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.157007 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51e59956-19ff-4801-b2c7-7fd87ee35c6b-utilities" (OuterVolumeSpecName: "utilities") pod "51e59956-19ff-4801-b2c7-7fd87ee35c6b" (UID: "51e59956-19ff-4801-b2c7-7fd87ee35c6b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.171145 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51e59956-19ff-4801-b2c7-7fd87ee35c6b-kube-api-access-d9xqw" (OuterVolumeSpecName: "kube-api-access-d9xqw") pod "51e59956-19ff-4801-b2c7-7fd87ee35c6b" (UID: "51e59956-19ff-4801-b2c7-7fd87ee35c6b"). InnerVolumeSpecName "kube-api-access-d9xqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.220128 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51e59956-19ff-4801-b2c7-7fd87ee35c6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51e59956-19ff-4801-b2c7-7fd87ee35c6b" (UID: "51e59956-19ff-4801-b2c7-7fd87ee35c6b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.257509 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51e59956-19ff-4801-b2c7-7fd87ee35c6b-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.257540 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9xqw\" (UniqueName: \"kubernetes.io/projected/51e59956-19ff-4801-b2c7-7fd87ee35c6b-kube-api-access-d9xqw\") on node \"crc\" DevicePath \"\"" Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.257553 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51e59956-19ff-4801-b2c7-7fd87ee35c6b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.491383 4624 generic.go:334] "Generic (PLEG): container finished" podID="7d275b8d-58e4-4b0a-a35f-2145c222a141" containerID="005e428b7a0ec765d316e01804ded238a5fda1de952d7ae5d0ab9578e93f1983" exitCode=0 Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.491483 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc" event={"ID":"7d275b8d-58e4-4b0a-a35f-2145c222a141","Type":"ContainerDied","Data":"005e428b7a0ec765d316e01804ded238a5fda1de952d7ae5d0ab9578e93f1983"} Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.495474 4624 generic.go:334] "Generic (PLEG): container finished" podID="51e59956-19ff-4801-b2c7-7fd87ee35c6b" containerID="535cba71fc5817a8cfdbc3336b38459598c003c0ff5cf65018e048a4c2bde47f" exitCode=0 Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.495513 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzwbp" event={"ID":"51e59956-19ff-4801-b2c7-7fd87ee35c6b","Type":"ContainerDied","Data":"535cba71fc5817a8cfdbc3336b38459598c003c0ff5cf65018e048a4c2bde47f"} Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.495540 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzwbp" event={"ID":"51e59956-19ff-4801-b2c7-7fd87ee35c6b","Type":"ContainerDied","Data":"0030d784cfd0b8619cbd904622a77fa727c45cb0b4e9e6cfe9a718faddc8ae25"} Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.495558 4624 scope.go:117] "RemoveContainer" containerID="535cba71fc5817a8cfdbc3336b38459598c003c0ff5cf65018e048a4c2bde47f" Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.495706 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bzwbp" Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.534198 4624 scope.go:117] "RemoveContainer" containerID="2e98bc394627d66f31c7248db4182072a39f169b0c60b3d7bd595538f8f8659f" Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.590272 4624 scope.go:117] "RemoveContainer" containerID="51383f07df4085f54a1f8c9b3901e31d65f7b40d68066c02115444bcb424c2cd" Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.596126 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bzwbp"] Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.613005 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bzwbp"] Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.638396 4624 scope.go:117] "RemoveContainer" containerID="535cba71fc5817a8cfdbc3336b38459598c003c0ff5cf65018e048a4c2bde47f" Oct 08 16:45:02 crc kubenswrapper[4624]: E1008 16:45:02.638906 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"535cba71fc5817a8cfdbc3336b38459598c003c0ff5cf65018e048a4c2bde47f\": container with ID starting with 535cba71fc5817a8cfdbc3336b38459598c003c0ff5cf65018e048a4c2bde47f not found: ID does not exist" containerID="535cba71fc5817a8cfdbc3336b38459598c003c0ff5cf65018e048a4c2bde47f" Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.638935 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"535cba71fc5817a8cfdbc3336b38459598c003c0ff5cf65018e048a4c2bde47f"} err="failed to get container status \"535cba71fc5817a8cfdbc3336b38459598c003c0ff5cf65018e048a4c2bde47f\": rpc error: code = NotFound desc = could not find container \"535cba71fc5817a8cfdbc3336b38459598c003c0ff5cf65018e048a4c2bde47f\": container with ID starting with 535cba71fc5817a8cfdbc3336b38459598c003c0ff5cf65018e048a4c2bde47f not found: ID does not exist" Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.638957 4624 scope.go:117] "RemoveContainer" containerID="2e98bc394627d66f31c7248db4182072a39f169b0c60b3d7bd595538f8f8659f" Oct 08 16:45:02 crc kubenswrapper[4624]: E1008 16:45:02.639102 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e98bc394627d66f31c7248db4182072a39f169b0c60b3d7bd595538f8f8659f\": container with ID starting with 2e98bc394627d66f31c7248db4182072a39f169b0c60b3d7bd595538f8f8659f not found: ID does not exist" containerID="2e98bc394627d66f31c7248db4182072a39f169b0c60b3d7bd595538f8f8659f" Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.639120 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e98bc394627d66f31c7248db4182072a39f169b0c60b3d7bd595538f8f8659f"} err="failed to get container status \"2e98bc394627d66f31c7248db4182072a39f169b0c60b3d7bd595538f8f8659f\": rpc error: code = NotFound desc = could not find container \"2e98bc394627d66f31c7248db4182072a39f169b0c60b3d7bd595538f8f8659f\": container with ID starting with 2e98bc394627d66f31c7248db4182072a39f169b0c60b3d7bd595538f8f8659f not found: ID does not exist" Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.639131 4624 scope.go:117] "RemoveContainer" containerID="51383f07df4085f54a1f8c9b3901e31d65f7b40d68066c02115444bcb424c2cd" Oct 08 16:45:02 crc kubenswrapper[4624]: E1008 16:45:02.639250 4624 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"51383f07df4085f54a1f8c9b3901e31d65f7b40d68066c02115444bcb424c2cd\": container with ID starting with 51383f07df4085f54a1f8c9b3901e31d65f7b40d68066c02115444bcb424c2cd not found: ID does not exist" containerID="51383f07df4085f54a1f8c9b3901e31d65f7b40d68066c02115444bcb424c2cd" Oct 08 16:45:02 crc kubenswrapper[4624]: I1008 16:45:02.639266 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51383f07df4085f54a1f8c9b3901e31d65f7b40d68066c02115444bcb424c2cd"} err="failed to get container status \"51383f07df4085f54a1f8c9b3901e31d65f7b40d68066c02115444bcb424c2cd\": rpc error: code = NotFound desc = could not find container \"51383f07df4085f54a1f8c9b3901e31d65f7b40d68066c02115444bcb424c2cd\": container with ID starting with 51383f07df4085f54a1f8c9b3901e31d65f7b40d68066c02115444bcb424c2cd not found: ID does not exist" Oct 08 16:45:03 crc kubenswrapper[4624]: I1008 16:45:03.475579 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51e59956-19ff-4801-b2c7-7fd87ee35c6b" path="/var/lib/kubelet/pods/51e59956-19ff-4801-b2c7-7fd87ee35c6b/volumes" Oct 08 16:45:03 crc kubenswrapper[4624]: I1008 16:45:03.946098 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc" Oct 08 16:45:04 crc kubenswrapper[4624]: I1008 16:45:04.125507 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d275b8d-58e4-4b0a-a35f-2145c222a141-secret-volume\") pod \"7d275b8d-58e4-4b0a-a35f-2145c222a141\" (UID: \"7d275b8d-58e4-4b0a-a35f-2145c222a141\") " Oct 08 16:45:04 crc kubenswrapper[4624]: I1008 16:45:04.125566 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6t5l\" (UniqueName: \"kubernetes.io/projected/7d275b8d-58e4-4b0a-a35f-2145c222a141-kube-api-access-p6t5l\") pod \"7d275b8d-58e4-4b0a-a35f-2145c222a141\" (UID: \"7d275b8d-58e4-4b0a-a35f-2145c222a141\") " Oct 08 16:45:04 crc kubenswrapper[4624]: I1008 16:45:04.125716 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d275b8d-58e4-4b0a-a35f-2145c222a141-config-volume\") pod \"7d275b8d-58e4-4b0a-a35f-2145c222a141\" (UID: \"7d275b8d-58e4-4b0a-a35f-2145c222a141\") " Oct 08 16:45:04 crc kubenswrapper[4624]: I1008 16:45:04.126608 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d275b8d-58e4-4b0a-a35f-2145c222a141-config-volume" (OuterVolumeSpecName: "config-volume") pod "7d275b8d-58e4-4b0a-a35f-2145c222a141" (UID: "7d275b8d-58e4-4b0a-a35f-2145c222a141"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 16:45:04 crc kubenswrapper[4624]: I1008 16:45:04.133849 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d275b8d-58e4-4b0a-a35f-2145c222a141-kube-api-access-p6t5l" (OuterVolumeSpecName: "kube-api-access-p6t5l") pod "7d275b8d-58e4-4b0a-a35f-2145c222a141" (UID: "7d275b8d-58e4-4b0a-a35f-2145c222a141"). InnerVolumeSpecName "kube-api-access-p6t5l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 16:45:04 crc kubenswrapper[4624]: I1008 16:45:04.134979 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d275b8d-58e4-4b0a-a35f-2145c222a141-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7d275b8d-58e4-4b0a-a35f-2145c222a141" (UID: "7d275b8d-58e4-4b0a-a35f-2145c222a141"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 16:45:04 crc kubenswrapper[4624]: I1008 16:45:04.227712 4624 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d275b8d-58e4-4b0a-a35f-2145c222a141-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 16:45:04 crc kubenswrapper[4624]: I1008 16:45:04.227749 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6t5l\" (UniqueName: \"kubernetes.io/projected/7d275b8d-58e4-4b0a-a35f-2145c222a141-kube-api-access-p6t5l\") on node \"crc\" DevicePath \"\"" Oct 08 16:45:04 crc kubenswrapper[4624]: I1008 16:45:04.227759 4624 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d275b8d-58e4-4b0a-a35f-2145c222a141-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 16:45:04 crc kubenswrapper[4624]: I1008 16:45:04.538962 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc" Oct 08 16:45:04 crc kubenswrapper[4624]: I1008 16:45:04.538897 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc" event={"ID":"7d275b8d-58e4-4b0a-a35f-2145c222a141","Type":"ContainerDied","Data":"61b55d92cb9d1f38f8aa0dfdd478b50910a84890e79dc50aa933e3dd0dcdf378"} Oct 08 16:45:04 crc kubenswrapper[4624]: I1008 16:45:04.539566 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61b55d92cb9d1f38f8aa0dfdd478b50910a84890e79dc50aa933e3dd0dcdf378" Oct 08 16:45:04 crc kubenswrapper[4624]: I1008 16:45:04.613175 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h"] Oct 08 16:45:04 crc kubenswrapper[4624]: I1008 16:45:04.622165 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332320-zq75h"] Oct 08 16:45:05 crc kubenswrapper[4624]: I1008 16:45:05.477358 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5295ef73-9c6e-416b-9961-90699752fad3" path="/var/lib/kubelet/pods/5295ef73-9c6e-416b-9961-90699752fad3/volumes" Oct 08 16:45:06 crc kubenswrapper[4624]: I1008 16:45:06.975845 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8tvdg" Oct 08 16:45:07 crc kubenswrapper[4624]: I1008 16:45:07.037521 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8tvdg" Oct 08 16:45:07 crc kubenswrapper[4624]: I1008 16:45:07.219003 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8tvdg"] Oct 08 16:45:08 crc kubenswrapper[4624]: I1008 16:45:08.574898 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8tvdg" podUID="d95332bf-df71-4495-a2a0-3c67d6839b08" containerName="registry-server" 
containerID="cri-o://adc998cebf9c58f1b7fb656c78d186e33e6e60b427ebad7d6b8a079d050279db" gracePeriod=2 Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.175780 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8tvdg" Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.340961 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d95332bf-df71-4495-a2a0-3c67d6839b08-utilities\") pod \"d95332bf-df71-4495-a2a0-3c67d6839b08\" (UID: \"d95332bf-df71-4495-a2a0-3c67d6839b08\") " Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.341065 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg4qk\" (UniqueName: \"kubernetes.io/projected/d95332bf-df71-4495-a2a0-3c67d6839b08-kube-api-access-zg4qk\") pod \"d95332bf-df71-4495-a2a0-3c67d6839b08\" (UID: \"d95332bf-df71-4495-a2a0-3c67d6839b08\") " Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.341383 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d95332bf-df71-4495-a2a0-3c67d6839b08-catalog-content\") pod \"d95332bf-df71-4495-a2a0-3c67d6839b08\" (UID: \"d95332bf-df71-4495-a2a0-3c67d6839b08\") " Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.342092 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d95332bf-df71-4495-a2a0-3c67d6839b08-utilities" (OuterVolumeSpecName: "utilities") pod "d95332bf-df71-4495-a2a0-3c67d6839b08" (UID: "d95332bf-df71-4495-a2a0-3c67d6839b08"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.342998 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d95332bf-df71-4495-a2a0-3c67d6839b08-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.347969 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d95332bf-df71-4495-a2a0-3c67d6839b08-kube-api-access-zg4qk" (OuterVolumeSpecName: "kube-api-access-zg4qk") pod "d95332bf-df71-4495-a2a0-3c67d6839b08" (UID: "d95332bf-df71-4495-a2a0-3c67d6839b08"). InnerVolumeSpecName "kube-api-access-zg4qk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.394198 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d95332bf-df71-4495-a2a0-3c67d6839b08-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d95332bf-df71-4495-a2a0-3c67d6839b08" (UID: "d95332bf-df71-4495-a2a0-3c67d6839b08"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.445027 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg4qk\" (UniqueName: \"kubernetes.io/projected/d95332bf-df71-4495-a2a0-3c67d6839b08-kube-api-access-zg4qk\") on node \"crc\" DevicePath \"\"" Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.445071 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d95332bf-df71-4495-a2a0-3c67d6839b08-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.466208 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f" Oct 08 16:45:09 crc kubenswrapper[4624]: E1008 16:45:09.466709 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.587599 4624 generic.go:334] "Generic (PLEG): container finished" podID="d95332bf-df71-4495-a2a0-3c67d6839b08" containerID="adc998cebf9c58f1b7fb656c78d186e33e6e60b427ebad7d6b8a079d050279db" exitCode=0 Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.587681 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tvdg" event={"ID":"d95332bf-df71-4495-a2a0-3c67d6839b08","Type":"ContainerDied","Data":"adc998cebf9c58f1b7fb656c78d186e33e6e60b427ebad7d6b8a079d050279db"} Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.587713 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tvdg" event={"ID":"d95332bf-df71-4495-a2a0-3c67d6839b08","Type":"ContainerDied","Data":"fc572ce36a7df56b80f8f2a563d2a2274515fd452a56f50e0190e1123451b501"} Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.587731 4624 scope.go:117] "RemoveContainer" containerID="adc998cebf9c58f1b7fb656c78d186e33e6e60b427ebad7d6b8a079d050279db" Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.587868 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8tvdg" Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.620850 4624 scope.go:117] "RemoveContainer" containerID="83659f774ceb461e73ecc5c949d45174a07d4c8af65f8c9620fb3bbd1e8e362d" Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.623522 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8tvdg"] Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.640185 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8tvdg"] Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.646328 4624 scope.go:117] "RemoveContainer" containerID="53316aec1b5bce0644b9a13e24e30fde0c02b4a3eab0fc2fa39cbf6c326067ed" Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.689848 4624 scope.go:117] "RemoveContainer" containerID="adc998cebf9c58f1b7fb656c78d186e33e6e60b427ebad7d6b8a079d050279db" Oct 08 16:45:09 crc kubenswrapper[4624]: E1008 16:45:09.690366 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adc998cebf9c58f1b7fb656c78d186e33e6e60b427ebad7d6b8a079d050279db\": container with ID starting with adc998cebf9c58f1b7fb656c78d186e33e6e60b427ebad7d6b8a079d050279db not found: ID does not exist" containerID="adc998cebf9c58f1b7fb656c78d186e33e6e60b427ebad7d6b8a079d050279db" Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.690433 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adc998cebf9c58f1b7fb656c78d186e33e6e60b427ebad7d6b8a079d050279db"} err="failed to get container status \"adc998cebf9c58f1b7fb656c78d186e33e6e60b427ebad7d6b8a079d050279db\": rpc error: code = NotFound desc = could not find container \"adc998cebf9c58f1b7fb656c78d186e33e6e60b427ebad7d6b8a079d050279db\": container with ID starting with adc998cebf9c58f1b7fb656c78d186e33e6e60b427ebad7d6b8a079d050279db not found: ID does not exist" Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.690463 4624 scope.go:117] "RemoveContainer" containerID="83659f774ceb461e73ecc5c949d45174a07d4c8af65f8c9620fb3bbd1e8e362d" Oct 08 16:45:09 crc kubenswrapper[4624]: E1008 16:45:09.690819 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83659f774ceb461e73ecc5c949d45174a07d4c8af65f8c9620fb3bbd1e8e362d\": container with ID starting with 83659f774ceb461e73ecc5c949d45174a07d4c8af65f8c9620fb3bbd1e8e362d not found: ID does not exist" containerID="83659f774ceb461e73ecc5c949d45174a07d4c8af65f8c9620fb3bbd1e8e362d" Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.690854 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83659f774ceb461e73ecc5c949d45174a07d4c8af65f8c9620fb3bbd1e8e362d"} err="failed to get container status \"83659f774ceb461e73ecc5c949d45174a07d4c8af65f8c9620fb3bbd1e8e362d\": rpc error: code = NotFound desc = could not find container \"83659f774ceb461e73ecc5c949d45174a07d4c8af65f8c9620fb3bbd1e8e362d\": container with ID starting with 83659f774ceb461e73ecc5c949d45174a07d4c8af65f8c9620fb3bbd1e8e362d not found: ID does not exist" Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.690879 4624 scope.go:117] "RemoveContainer" containerID="53316aec1b5bce0644b9a13e24e30fde0c02b4a3eab0fc2fa39cbf6c326067ed" Oct 08 16:45:09 crc kubenswrapper[4624]: E1008 16:45:09.691570 4624 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"53316aec1b5bce0644b9a13e24e30fde0c02b4a3eab0fc2fa39cbf6c326067ed\": container with ID starting with 53316aec1b5bce0644b9a13e24e30fde0c02b4a3eab0fc2fa39cbf6c326067ed not found: ID does not exist" containerID="53316aec1b5bce0644b9a13e24e30fde0c02b4a3eab0fc2fa39cbf6c326067ed" Oct 08 16:45:09 crc kubenswrapper[4624]: I1008 16:45:09.691597 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53316aec1b5bce0644b9a13e24e30fde0c02b4a3eab0fc2fa39cbf6c326067ed"} err="failed to get container status \"53316aec1b5bce0644b9a13e24e30fde0c02b4a3eab0fc2fa39cbf6c326067ed\": rpc error: code = NotFound desc = could not find container \"53316aec1b5bce0644b9a13e24e30fde0c02b4a3eab0fc2fa39cbf6c326067ed\": container with ID starting with 53316aec1b5bce0644b9a13e24e30fde0c02b4a3eab0fc2fa39cbf6c326067ed not found: ID does not exist" Oct 08 16:45:11 crc kubenswrapper[4624]: I1008 16:45:11.480858 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d95332bf-df71-4495-a2a0-3c67d6839b08" path="/var/lib/kubelet/pods/d95332bf-df71-4495-a2a0-3c67d6839b08/volumes" Oct 08 16:45:22 crc kubenswrapper[4624]: I1008 16:45:22.465814 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f" Oct 08 16:45:22 crc kubenswrapper[4624]: E1008 16:45:22.466598 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:45:37 crc kubenswrapper[4624]: I1008 16:45:37.466146 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f" Oct 08 16:45:37 crc kubenswrapper[4624]: E1008 16:45:37.466968 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:45:46 crc kubenswrapper[4624]: I1008 16:45:46.900805 4624 scope.go:117] "RemoveContainer" containerID="760fe869d9feea8d1fb3a232fe6b2ba218567d8c22d7b13237826224970a04f2" Oct 08 16:45:49 crc kubenswrapper[4624]: I1008 16:45:49.467516 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f" Oct 08 16:45:49 crc kubenswrapper[4624]: E1008 16:45:49.468760 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:46:00 crc kubenswrapper[4624]: I1008 16:46:00.466407 4624 scope.go:117] "RemoveContainer" 
containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f" Oct 08 16:46:00 crc kubenswrapper[4624]: E1008 16:46:00.467429 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:46:12 crc kubenswrapper[4624]: I1008 16:46:12.466447 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f" Oct 08 16:46:12 crc kubenswrapper[4624]: E1008 16:46:12.467243 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:46:26 crc kubenswrapper[4624]: I1008 16:46:26.466835 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f" Oct 08 16:46:26 crc kubenswrapper[4624]: E1008 16:46:26.467892 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:46:40 crc kubenswrapper[4624]: I1008 16:46:40.467820 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f" Oct 08 16:46:40 crc kubenswrapper[4624]: E1008 16:46:40.468847 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:46:53 crc kubenswrapper[4624]: I1008 16:46:53.466281 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f" Oct 08 16:46:53 crc kubenswrapper[4624]: E1008 16:46:53.467140 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.393784 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w8sbh"] Oct 08 16:46:54 crc kubenswrapper[4624]: E1008 16:46:54.394667 4624 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="51e59956-19ff-4801-b2c7-7fd87ee35c6b" containerName="extract-utilities" Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.394692 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e59956-19ff-4801-b2c7-7fd87ee35c6b" containerName="extract-utilities" Oct 08 16:46:54 crc kubenswrapper[4624]: E1008 16:46:54.394710 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d95332bf-df71-4495-a2a0-3c67d6839b08" containerName="registry-server" Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.394719 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d95332bf-df71-4495-a2a0-3c67d6839b08" containerName="registry-server" Oct 08 16:46:54 crc kubenswrapper[4624]: E1008 16:46:54.394732 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d275b8d-58e4-4b0a-a35f-2145c222a141" containerName="collect-profiles" Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.394740 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d275b8d-58e4-4b0a-a35f-2145c222a141" containerName="collect-profiles" Oct 08 16:46:54 crc kubenswrapper[4624]: E1008 16:46:54.394778 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e59956-19ff-4801-b2c7-7fd87ee35c6b" containerName="extract-content" Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.394788 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e59956-19ff-4801-b2c7-7fd87ee35c6b" containerName="extract-content" Oct 08 16:46:54 crc kubenswrapper[4624]: E1008 16:46:54.394807 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e59956-19ff-4801-b2c7-7fd87ee35c6b" containerName="registry-server" Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.394814 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e59956-19ff-4801-b2c7-7fd87ee35c6b" containerName="registry-server" Oct 08 16:46:54 crc kubenswrapper[4624]: E1008 16:46:54.394844 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d95332bf-df71-4495-a2a0-3c67d6839b08" containerName="extract-utilities" Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.394851 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d95332bf-df71-4495-a2a0-3c67d6839b08" containerName="extract-utilities" Oct 08 16:46:54 crc kubenswrapper[4624]: E1008 16:46:54.394859 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d95332bf-df71-4495-a2a0-3c67d6839b08" containerName="extract-content" Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.394866 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d95332bf-df71-4495-a2a0-3c67d6839b08" containerName="extract-content" Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.395152 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="d95332bf-df71-4495-a2a0-3c67d6839b08" containerName="registry-server" Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.395184 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="51e59956-19ff-4801-b2c7-7fd87ee35c6b" containerName="registry-server" Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.395202 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d275b8d-58e4-4b0a-a35f-2145c222a141" containerName="collect-profiles" Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.401497 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8sbh"
Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.428307 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8sbh"]
Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.500382 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/879aa443-ec2f-4b68-8f24-242f95f09c7e-catalog-content\") pod \"redhat-marketplace-w8sbh\" (UID: \"879aa443-ec2f-4b68-8f24-242f95f09c7e\") " pod="openshift-marketplace/redhat-marketplace-w8sbh"
Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.500817 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/879aa443-ec2f-4b68-8f24-242f95f09c7e-utilities\") pod \"redhat-marketplace-w8sbh\" (UID: \"879aa443-ec2f-4b68-8f24-242f95f09c7e\") " pod="openshift-marketplace/redhat-marketplace-w8sbh"
Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.500995 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjl9f\" (UniqueName: \"kubernetes.io/projected/879aa443-ec2f-4b68-8f24-242f95f09c7e-kube-api-access-fjl9f\") pod \"redhat-marketplace-w8sbh\" (UID: \"879aa443-ec2f-4b68-8f24-242f95f09c7e\") " pod="openshift-marketplace/redhat-marketplace-w8sbh"
Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.602688 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/879aa443-ec2f-4b68-8f24-242f95f09c7e-catalog-content\") pod \"redhat-marketplace-w8sbh\" (UID: \"879aa443-ec2f-4b68-8f24-242f95f09c7e\") " pod="openshift-marketplace/redhat-marketplace-w8sbh"
Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.602816 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/879aa443-ec2f-4b68-8f24-242f95f09c7e-utilities\") pod \"redhat-marketplace-w8sbh\" (UID: \"879aa443-ec2f-4b68-8f24-242f95f09c7e\") " pod="openshift-marketplace/redhat-marketplace-w8sbh"
Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.602868 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjl9f\" (UniqueName: \"kubernetes.io/projected/879aa443-ec2f-4b68-8f24-242f95f09c7e-kube-api-access-fjl9f\") pod \"redhat-marketplace-w8sbh\" (UID: \"879aa443-ec2f-4b68-8f24-242f95f09c7e\") " pod="openshift-marketplace/redhat-marketplace-w8sbh"
Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.603404 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/879aa443-ec2f-4b68-8f24-242f95f09c7e-utilities\") pod \"redhat-marketplace-w8sbh\" (UID: \"879aa443-ec2f-4b68-8f24-242f95f09c7e\") " pod="openshift-marketplace/redhat-marketplace-w8sbh"
Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.603399 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/879aa443-ec2f-4b68-8f24-242f95f09c7e-catalog-content\") pod \"redhat-marketplace-w8sbh\" (UID: \"879aa443-ec2f-4b68-8f24-242f95f09c7e\") " pod="openshift-marketplace/redhat-marketplace-w8sbh"
Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.626735 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjl9f\" (UniqueName: \"kubernetes.io/projected/879aa443-ec2f-4b68-8f24-242f95f09c7e-kube-api-access-fjl9f\") pod \"redhat-marketplace-w8sbh\" (UID: \"879aa443-ec2f-4b68-8f24-242f95f09c7e\") " pod="openshift-marketplace/redhat-marketplace-w8sbh"
Oct 08 16:46:54 crc kubenswrapper[4624]: I1008 16:46:54.752103 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8sbh"
Oct 08 16:46:55 crc kubenswrapper[4624]: I1008 16:46:55.263901 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8sbh"]
Oct 08 16:46:55 crc kubenswrapper[4624]: W1008 16:46:55.276393 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod879aa443_ec2f_4b68_8f24_242f95f09c7e.slice/crio-8ff6c989aa06fbf6c26483ac02abfcf645e4a2c974fe8797a519169a2aba2f12 WatchSource:0}: Error finding container 8ff6c989aa06fbf6c26483ac02abfcf645e4a2c974fe8797a519169a2aba2f12: Status 404 returned error can't find the container with id 8ff6c989aa06fbf6c26483ac02abfcf645e4a2c974fe8797a519169a2aba2f12
Oct 08 16:46:55 crc kubenswrapper[4624]: I1008 16:46:55.634451 4624 generic.go:334] "Generic (PLEG): container finished" podID="879aa443-ec2f-4b68-8f24-242f95f09c7e" containerID="0a00b197c88a84243a6b8b8dc2afba371f1d60d439182729c51f52e760f7a87d" exitCode=0
Oct 08 16:46:55 crc kubenswrapper[4624]: I1008 16:46:55.634733 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8sbh" event={"ID":"879aa443-ec2f-4b68-8f24-242f95f09c7e","Type":"ContainerDied","Data":"0a00b197c88a84243a6b8b8dc2afba371f1d60d439182729c51f52e760f7a87d"}
Oct 08 16:46:55 crc kubenswrapper[4624]: I1008 16:46:55.634762 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8sbh" event={"ID":"879aa443-ec2f-4b68-8f24-242f95f09c7e","Type":"ContainerStarted","Data":"8ff6c989aa06fbf6c26483ac02abfcf645e4a2c974fe8797a519169a2aba2f12"}
Oct 08 16:46:56 crc kubenswrapper[4624]: I1008 16:46:56.645768 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8sbh" event={"ID":"879aa443-ec2f-4b68-8f24-242f95f09c7e","Type":"ContainerStarted","Data":"ea06a0d027c849a64c490d8f30c01448488553112c6f68388448b5a2a06c8532"}
Oct 08 16:46:57 crc kubenswrapper[4624]: I1008 16:46:57.658086 4624 generic.go:334] "Generic (PLEG): container finished" podID="879aa443-ec2f-4b68-8f24-242f95f09c7e" containerID="ea06a0d027c849a64c490d8f30c01448488553112c6f68388448b5a2a06c8532" exitCode=0
Oct 08 16:46:57 crc kubenswrapper[4624]: I1008 16:46:57.658135 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8sbh" event={"ID":"879aa443-ec2f-4b68-8f24-242f95f09c7e","Type":"ContainerDied","Data":"ea06a0d027c849a64c490d8f30c01448488553112c6f68388448b5a2a06c8532"}
Oct 08 16:46:58 crc kubenswrapper[4624]: I1008 16:46:58.670439 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8sbh" event={"ID":"879aa443-ec2f-4b68-8f24-242f95f09c7e","Type":"ContainerStarted","Data":"ec50a9f9c72ee2f7d3f0ff4cf3cc2e036e20297060279db7f4576d87488fd996"}
Oct 08 16:46:58 crc kubenswrapper[4624]: I1008 16:46:58.695447 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w8sbh" podStartSLOduration=2.165448585 podStartE2EDuration="4.695421497s" podCreationTimestamp="2025-10-08 16:46:54 +0000 UTC" firstStartedPulling="2025-10-08 16:46:55.63778079 +0000 UTC m=+8640.788715867" lastFinishedPulling="2025-10-08 16:46:58.167753692 +0000 UTC m=+8643.318688779" observedRunningTime="2025-10-08 16:46:58.690021719 +0000 UTC m=+8643.840956816" watchObservedRunningTime="2025-10-08 16:46:58.695421497 +0000 UTC m=+8643.846356574"
Oct 08 16:47:04 crc kubenswrapper[4624]: I1008 16:47:04.465541 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f"
Oct 08 16:47:04 crc kubenswrapper[4624]: E1008 16:47:04.466228 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:47:04 crc kubenswrapper[4624]: I1008 16:47:04.752403 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w8sbh"
Oct 08 16:47:04 crc kubenswrapper[4624]: I1008 16:47:04.752455 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w8sbh"
Oct 08 16:47:04 crc kubenswrapper[4624]: I1008 16:47:04.821032 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w8sbh"
Oct 08 16:47:05 crc kubenswrapper[4624]: I1008 16:47:05.794305 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w8sbh"
Oct 08 16:47:05 crc kubenswrapper[4624]: I1008 16:47:05.841059 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8sbh"]
Oct 08 16:47:07 crc kubenswrapper[4624]: I1008 16:47:07.762899 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w8sbh" podUID="879aa443-ec2f-4b68-8f24-242f95f09c7e" containerName="registry-server" containerID="cri-o://ec50a9f9c72ee2f7d3f0ff4cf3cc2e036e20297060279db7f4576d87488fd996" gracePeriod=2
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.348198 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8sbh"
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.517310 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/879aa443-ec2f-4b68-8f24-242f95f09c7e-utilities\") pod \"879aa443-ec2f-4b68-8f24-242f95f09c7e\" (UID: \"879aa443-ec2f-4b68-8f24-242f95f09c7e\") "
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.517535 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjl9f\" (UniqueName: \"kubernetes.io/projected/879aa443-ec2f-4b68-8f24-242f95f09c7e-kube-api-access-fjl9f\") pod \"879aa443-ec2f-4b68-8f24-242f95f09c7e\" (UID: \"879aa443-ec2f-4b68-8f24-242f95f09c7e\") "
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.517806 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/879aa443-ec2f-4b68-8f24-242f95f09c7e-catalog-content\") pod \"879aa443-ec2f-4b68-8f24-242f95f09c7e\" (UID: \"879aa443-ec2f-4b68-8f24-242f95f09c7e\") "
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.518305 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/879aa443-ec2f-4b68-8f24-242f95f09c7e-utilities" (OuterVolumeSpecName: "utilities") pod "879aa443-ec2f-4b68-8f24-242f95f09c7e" (UID: "879aa443-ec2f-4b68-8f24-242f95f09c7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.523285 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/879aa443-ec2f-4b68-8f24-242f95f09c7e-kube-api-access-fjl9f" (OuterVolumeSpecName: "kube-api-access-fjl9f") pod "879aa443-ec2f-4b68-8f24-242f95f09c7e" (UID: "879aa443-ec2f-4b68-8f24-242f95f09c7e"). InnerVolumeSpecName "kube-api-access-fjl9f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.532531 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/879aa443-ec2f-4b68-8f24-242f95f09c7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "879aa443-ec2f-4b68-8f24-242f95f09c7e" (UID: "879aa443-ec2f-4b68-8f24-242f95f09c7e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.620918 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjl9f\" (UniqueName: \"kubernetes.io/projected/879aa443-ec2f-4b68-8f24-242f95f09c7e-kube-api-access-fjl9f\") on node \"crc\" DevicePath \"\""
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.620958 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/879aa443-ec2f-4b68-8f24-242f95f09c7e-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.620968 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/879aa443-ec2f-4b68-8f24-242f95f09c7e-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.779206 4624 generic.go:334] "Generic (PLEG): container finished" podID="879aa443-ec2f-4b68-8f24-242f95f09c7e" containerID="ec50a9f9c72ee2f7d3f0ff4cf3cc2e036e20297060279db7f4576d87488fd996" exitCode=0
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.779294 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8sbh" event={"ID":"879aa443-ec2f-4b68-8f24-242f95f09c7e","Type":"ContainerDied","Data":"ec50a9f9c72ee2f7d3f0ff4cf3cc2e036e20297060279db7f4576d87488fd996"}
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.779358 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8sbh" event={"ID":"879aa443-ec2f-4b68-8f24-242f95f09c7e","Type":"ContainerDied","Data":"8ff6c989aa06fbf6c26483ac02abfcf645e4a2c974fe8797a519169a2aba2f12"}
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.779406 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8sbh"
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.779405 4624 scope.go:117] "RemoveContainer" containerID="ec50a9f9c72ee2f7d3f0ff4cf3cc2e036e20297060279db7f4576d87488fd996"
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.824433 4624 scope.go:117] "RemoveContainer" containerID="ea06a0d027c849a64c490d8f30c01448488553112c6f68388448b5a2a06c8532"
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.845495 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8sbh"]
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.859420 4624 scope.go:117] "RemoveContainer" containerID="0a00b197c88a84243a6b8b8dc2afba371f1d60d439182729c51f52e760f7a87d"
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.864194 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8sbh"]
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.898720 4624 scope.go:117] "RemoveContainer" containerID="ec50a9f9c72ee2f7d3f0ff4cf3cc2e036e20297060279db7f4576d87488fd996"
Oct 08 16:47:08 crc kubenswrapper[4624]: E1008 16:47:08.899457 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec50a9f9c72ee2f7d3f0ff4cf3cc2e036e20297060279db7f4576d87488fd996\": container with ID starting with ec50a9f9c72ee2f7d3f0ff4cf3cc2e036e20297060279db7f4576d87488fd996 not found: ID does not exist" containerID="ec50a9f9c72ee2f7d3f0ff4cf3cc2e036e20297060279db7f4576d87488fd996"
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.899513 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec50a9f9c72ee2f7d3f0ff4cf3cc2e036e20297060279db7f4576d87488fd996"} err="failed to get container status \"ec50a9f9c72ee2f7d3f0ff4cf3cc2e036e20297060279db7f4576d87488fd996\": rpc error: code = NotFound desc = could not find container \"ec50a9f9c72ee2f7d3f0ff4cf3cc2e036e20297060279db7f4576d87488fd996\": container with ID starting with ec50a9f9c72ee2f7d3f0ff4cf3cc2e036e20297060279db7f4576d87488fd996 not found: ID does not exist"
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.899552 4624 scope.go:117] "RemoveContainer" containerID="ea06a0d027c849a64c490d8f30c01448488553112c6f68388448b5a2a06c8532"
Oct 08 16:47:08 crc kubenswrapper[4624]: E1008 16:47:08.900066 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea06a0d027c849a64c490d8f30c01448488553112c6f68388448b5a2a06c8532\": container with ID starting with ea06a0d027c849a64c490d8f30c01448488553112c6f68388448b5a2a06c8532 not found: ID does not exist" containerID="ea06a0d027c849a64c490d8f30c01448488553112c6f68388448b5a2a06c8532"
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.900127 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea06a0d027c849a64c490d8f30c01448488553112c6f68388448b5a2a06c8532"} err="failed to get container status \"ea06a0d027c849a64c490d8f30c01448488553112c6f68388448b5a2a06c8532\": rpc error: code = NotFound desc = could not find container \"ea06a0d027c849a64c490d8f30c01448488553112c6f68388448b5a2a06c8532\": container with ID starting with ea06a0d027c849a64c490d8f30c01448488553112c6f68388448b5a2a06c8532 not found: ID does not exist"
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.900170 4624 scope.go:117] "RemoveContainer" containerID="0a00b197c88a84243a6b8b8dc2afba371f1d60d439182729c51f52e760f7a87d"
Oct 08 16:47:08 crc kubenswrapper[4624]: E1008 16:47:08.900681 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a00b197c88a84243a6b8b8dc2afba371f1d60d439182729c51f52e760f7a87d\": container with ID starting with 0a00b197c88a84243a6b8b8dc2afba371f1d60d439182729c51f52e760f7a87d not found: ID does not exist" containerID="0a00b197c88a84243a6b8b8dc2afba371f1d60d439182729c51f52e760f7a87d"
Oct 08 16:47:08 crc kubenswrapper[4624]: I1008 16:47:08.900711 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a00b197c88a84243a6b8b8dc2afba371f1d60d439182729c51f52e760f7a87d"} err="failed to get container status \"0a00b197c88a84243a6b8b8dc2afba371f1d60d439182729c51f52e760f7a87d\": rpc error: code = NotFound desc = could not find container \"0a00b197c88a84243a6b8b8dc2afba371f1d60d439182729c51f52e760f7a87d\": container with ID starting with 0a00b197c88a84243a6b8b8dc2afba371f1d60d439182729c51f52e760f7a87d not found: ID does not exist"
Oct 08 16:47:09 crc kubenswrapper[4624]: I1008 16:47:09.481779 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="879aa443-ec2f-4b68-8f24-242f95f09c7e" path="/var/lib/kubelet/pods/879aa443-ec2f-4b68-8f24-242f95f09c7e/volumes"
Oct 08 16:47:19 crc kubenswrapper[4624]: I1008 16:47:19.466478 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f"
Oct 08 16:47:19 crc kubenswrapper[4624]: E1008 16:47:19.467274 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:47:33 crc kubenswrapper[4624]: I1008 16:47:33.466406 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f"
Oct 08 16:47:33 crc kubenswrapper[4624]: E1008 16:47:33.467220 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:47:47 crc kubenswrapper[4624]: I1008 16:47:47.466428 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f"
Oct 08 16:47:47 crc kubenswrapper[4624]: E1008 16:47:47.467197 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:48:02 crc kubenswrapper[4624]: I1008 16:48:02.468201 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f"
Oct 08 16:48:02 crc kubenswrapper[4624]: E1008 16:48:02.468985 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:48:11 crc kubenswrapper[4624]: I1008 16:48:11.759116 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5smsq"]
Oct 08 16:48:11 crc kubenswrapper[4624]: E1008 16:48:11.764201 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879aa443-ec2f-4b68-8f24-242f95f09c7e" containerName="extract-utilities"
Oct 08 16:48:11 crc kubenswrapper[4624]: I1008 16:48:11.764243 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="879aa443-ec2f-4b68-8f24-242f95f09c7e" containerName="extract-utilities"
Oct 08 16:48:11 crc kubenswrapper[4624]: E1008 16:48:11.764260 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879aa443-ec2f-4b68-8f24-242f95f09c7e" containerName="registry-server"
Oct 08 16:48:11 crc kubenswrapper[4624]: I1008 16:48:11.764268 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="879aa443-ec2f-4b68-8f24-242f95f09c7e" containerName="registry-server"
Oct 08 16:48:11 crc kubenswrapper[4624]: E1008 16:48:11.764293 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879aa443-ec2f-4b68-8f24-242f95f09c7e" containerName="extract-content"
Oct 08 16:48:11 crc kubenswrapper[4624]: I1008 16:48:11.764300 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="879aa443-ec2f-4b68-8f24-242f95f09c7e" containerName="extract-content"
Oct 08 16:48:11 crc kubenswrapper[4624]: I1008 16:48:11.764546 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="879aa443-ec2f-4b68-8f24-242f95f09c7e" containerName="registry-server"
Oct 08 16:48:11 crc kubenswrapper[4624]: I1008 16:48:11.766480 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5smsq"
Oct 08 16:48:11 crc kubenswrapper[4624]: I1008 16:48:11.777914 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5smsq"]
Oct 08 16:48:11 crc kubenswrapper[4624]: I1008 16:48:11.864943 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6530a74-e339-4deb-b947-5fbc702308e3-utilities\") pod \"redhat-operators-5smsq\" (UID: \"e6530a74-e339-4deb-b947-5fbc702308e3\") " pod="openshift-marketplace/redhat-operators-5smsq"
Oct 08 16:48:11 crc kubenswrapper[4624]: I1008 16:48:11.864999 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6530a74-e339-4deb-b947-5fbc702308e3-catalog-content\") pod \"redhat-operators-5smsq\" (UID: \"e6530a74-e339-4deb-b947-5fbc702308e3\") " pod="openshift-marketplace/redhat-operators-5smsq"
Oct 08 16:48:11 crc kubenswrapper[4624]: I1008 16:48:11.865135 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b2k5\" (UniqueName: \"kubernetes.io/projected/e6530a74-e339-4deb-b947-5fbc702308e3-kube-api-access-5b2k5\") pod \"redhat-operators-5smsq\" (UID: \"e6530a74-e339-4deb-b947-5fbc702308e3\") " pod="openshift-marketplace/redhat-operators-5smsq"
Oct 08 16:48:11 crc kubenswrapper[4624]: I1008 16:48:11.967305 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b2k5\" (UniqueName: \"kubernetes.io/projected/e6530a74-e339-4deb-b947-5fbc702308e3-kube-api-access-5b2k5\") pod \"redhat-operators-5smsq\" (UID: \"e6530a74-e339-4deb-b947-5fbc702308e3\") " pod="openshift-marketplace/redhat-operators-5smsq"
Oct 08 16:48:11 crc kubenswrapper[4624]: I1008 16:48:11.967433 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6530a74-e339-4deb-b947-5fbc702308e3-utilities\") pod \"redhat-operators-5smsq\" (UID: \"e6530a74-e339-4deb-b947-5fbc702308e3\") " pod="openshift-marketplace/redhat-operators-5smsq"
Oct 08 16:48:11 crc kubenswrapper[4624]: I1008 16:48:11.967478 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6530a74-e339-4deb-b947-5fbc702308e3-catalog-content\") pod \"redhat-operators-5smsq\" (UID: \"e6530a74-e339-4deb-b947-5fbc702308e3\") " pod="openshift-marketplace/redhat-operators-5smsq"
Oct 08 16:48:11 crc kubenswrapper[4624]: I1008 16:48:11.968100 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6530a74-e339-4deb-b947-5fbc702308e3-utilities\") pod \"redhat-operators-5smsq\" (UID: \"e6530a74-e339-4deb-b947-5fbc702308e3\") " pod="openshift-marketplace/redhat-operators-5smsq"
Oct 08 16:48:11 crc kubenswrapper[4624]: I1008 16:48:11.968124 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6530a74-e339-4deb-b947-5fbc702308e3-catalog-content\") pod \"redhat-operators-5smsq\" (UID: \"e6530a74-e339-4deb-b947-5fbc702308e3\") " pod="openshift-marketplace/redhat-operators-5smsq"
Oct 08 16:48:11 crc kubenswrapper[4624]: I1008 16:48:11.990972 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b2k5\" (UniqueName: \"kubernetes.io/projected/e6530a74-e339-4deb-b947-5fbc702308e3-kube-api-access-5b2k5\") pod \"redhat-operators-5smsq\" (UID: \"e6530a74-e339-4deb-b947-5fbc702308e3\") " pod="openshift-marketplace/redhat-operators-5smsq"
Oct 08 16:48:12 crc kubenswrapper[4624]: I1008 16:48:12.093069 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5smsq"
Oct 08 16:48:12 crc kubenswrapper[4624]: I1008 16:48:12.626926 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5smsq"]
Oct 08 16:48:13 crc kubenswrapper[4624]: I1008 16:48:13.465755 4624 generic.go:334] "Generic (PLEG): container finished" podID="e6530a74-e339-4deb-b947-5fbc702308e3" containerID="0c24c8d1dcef566c6e6583280698dc3a361be16310ec4ea6592cbab8f20e70c5" exitCode=0
Oct 08 16:48:13 crc kubenswrapper[4624]: I1008 16:48:13.483372 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5smsq" event={"ID":"e6530a74-e339-4deb-b947-5fbc702308e3","Type":"ContainerDied","Data":"0c24c8d1dcef566c6e6583280698dc3a361be16310ec4ea6592cbab8f20e70c5"}
Oct 08 16:48:13 crc kubenswrapper[4624]: I1008 16:48:13.483426 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5smsq" event={"ID":"e6530a74-e339-4deb-b947-5fbc702308e3","Type":"ContainerStarted","Data":"f52e762368a8a23a5df924cbb824a8d4da39e2a9da2122c0356291a54226dabe"}
Oct 08 16:48:15 crc kubenswrapper[4624]: I1008 16:48:15.484978 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5smsq" event={"ID":"e6530a74-e339-4deb-b947-5fbc702308e3","Type":"ContainerStarted","Data":"6fccef3bb3b9acd8d9ba477e719a16e09ccfa8abeab2edc23f65b625dab64ea0"}
Oct 08 16:48:17 crc kubenswrapper[4624]: I1008 16:48:17.467511 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f"
Oct 08 16:48:17 crc kubenswrapper[4624]: E1008 16:48:17.468139 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:48:19 crc kubenswrapper[4624]: I1008 16:48:19.522953 4624 generic.go:334] "Generic (PLEG): container finished" podID="e6530a74-e339-4deb-b947-5fbc702308e3" containerID="6fccef3bb3b9acd8d9ba477e719a16e09ccfa8abeab2edc23f65b625dab64ea0" exitCode=0
Oct 08 16:48:19 crc kubenswrapper[4624]: I1008 16:48:19.523010 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5smsq" event={"ID":"e6530a74-e339-4deb-b947-5fbc702308e3","Type":"ContainerDied","Data":"6fccef3bb3b9acd8d9ba477e719a16e09ccfa8abeab2edc23f65b625dab64ea0"}
Oct 08 16:48:20 crc kubenswrapper[4624]: I1008 16:48:20.534857 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5smsq" event={"ID":"e6530a74-e339-4deb-b947-5fbc702308e3","Type":"ContainerStarted","Data":"af2cea4ea22d6e87adf933cac365f472ce32fba678989540bfe84081e26feef3"}
Oct 08 16:48:20 crc kubenswrapper[4624]: I1008 16:48:20.559288 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5smsq" podStartSLOduration=3.055331001 podStartE2EDuration="9.559266473s" podCreationTimestamp="2025-10-08 16:48:11 +0000 UTC" firstStartedPulling="2025-10-08 16:48:13.469034681 +0000 UTC m=+8718.619969768" lastFinishedPulling="2025-10-08 16:48:19.972970163 +0000 UTC m=+8725.123905240" observedRunningTime="2025-10-08 16:48:20.554896041 +0000 UTC m=+8725.705831118" watchObservedRunningTime="2025-10-08 16:48:20.559266473 +0000 UTC m=+8725.710201550"
Oct 08 16:48:22 crc kubenswrapper[4624]: I1008 16:48:22.093196 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5smsq"
Oct 08 16:48:22 crc kubenswrapper[4624]: I1008 16:48:22.094589 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5smsq"
Oct 08 16:48:23 crc kubenswrapper[4624]: I1008 16:48:23.151281 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5smsq" podUID="e6530a74-e339-4deb-b947-5fbc702308e3" containerName="registry-server" probeResult="failure" output=<
Oct 08 16:48:23 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 16:48:23 crc kubenswrapper[4624]: >
Oct 08 16:48:31 crc kubenswrapper[4624]: I1008 16:48:31.466328 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f"
Oct 08 16:48:31 crc kubenswrapper[4624]: E1008 16:48:31.467380 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:48:33 crc kubenswrapper[4624]: I1008 16:48:33.166744 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5smsq" podUID="e6530a74-e339-4deb-b947-5fbc702308e3" containerName="registry-server" probeResult="failure" output=<
Oct 08 16:48:33 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 16:48:33 crc kubenswrapper[4624]: >
Oct 08 16:48:42 crc kubenswrapper[4624]: I1008 16:48:42.465862 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f"
Oct 08 16:48:42 crc kubenswrapper[4624]: E1008 16:48:42.468118 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:48:43 crc kubenswrapper[4624]: I1008 16:48:43.170716 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5smsq" podUID="e6530a74-e339-4deb-b947-5fbc702308e3" containerName="registry-server" probeResult="failure" output=<
Oct 08 16:48:43 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 16:48:43 crc kubenswrapper[4624]: >
Oct 08 16:48:52 crc kubenswrapper[4624]: I1008 16:48:52.157997 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5smsq"
Oct 08 16:48:52 crc kubenswrapper[4624]: I1008 16:48:52.217620 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5smsq"
Oct 08 16:48:52 crc kubenswrapper[4624]: I1008 16:48:52.409033 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5smsq"]
Oct 08 16:48:53 crc kubenswrapper[4624]: I1008 16:48:53.848806 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5smsq" podUID="e6530a74-e339-4deb-b947-5fbc702308e3" containerName="registry-server" containerID="cri-o://af2cea4ea22d6e87adf933cac365f472ce32fba678989540bfe84081e26feef3" gracePeriod=2
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.465878 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f"
Oct 08 16:48:54 crc kubenswrapper[4624]: E1008 16:48:54.466585 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.564895 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5smsq"
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.699277 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6530a74-e339-4deb-b947-5fbc702308e3-utilities\") pod \"e6530a74-e339-4deb-b947-5fbc702308e3\" (UID: \"e6530a74-e339-4deb-b947-5fbc702308e3\") "
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.699527 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6530a74-e339-4deb-b947-5fbc702308e3-catalog-content\") pod \"e6530a74-e339-4deb-b947-5fbc702308e3\" (UID: \"e6530a74-e339-4deb-b947-5fbc702308e3\") "
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.699678 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b2k5\" (UniqueName: \"kubernetes.io/projected/e6530a74-e339-4deb-b947-5fbc702308e3-kube-api-access-5b2k5\") pod \"e6530a74-e339-4deb-b947-5fbc702308e3\" (UID: \"e6530a74-e339-4deb-b947-5fbc702308e3\") "
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.700386 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6530a74-e339-4deb-b947-5fbc702308e3-utilities" (OuterVolumeSpecName: "utilities") pod "e6530a74-e339-4deb-b947-5fbc702308e3" (UID: "e6530a74-e339-4deb-b947-5fbc702308e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.700809 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6530a74-e339-4deb-b947-5fbc702308e3-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.709697 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6530a74-e339-4deb-b947-5fbc702308e3-kube-api-access-5b2k5" (OuterVolumeSpecName: "kube-api-access-5b2k5") pod "e6530a74-e339-4deb-b947-5fbc702308e3" (UID: "e6530a74-e339-4deb-b947-5fbc702308e3"). InnerVolumeSpecName "kube-api-access-5b2k5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.801328 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6530a74-e339-4deb-b947-5fbc702308e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6530a74-e339-4deb-b947-5fbc702308e3" (UID: "e6530a74-e339-4deb-b947-5fbc702308e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.802299 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6530a74-e339-4deb-b947-5fbc702308e3-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.802333 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b2k5\" (UniqueName: \"kubernetes.io/projected/e6530a74-e339-4deb-b947-5fbc702308e3-kube-api-access-5b2k5\") on node \"crc\" DevicePath \"\""
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.859734 4624 generic.go:334] "Generic (PLEG): container finished" podID="e6530a74-e339-4deb-b947-5fbc702308e3" containerID="af2cea4ea22d6e87adf933cac365f472ce32fba678989540bfe84081e26feef3" exitCode=0
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.859766 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5smsq" event={"ID":"e6530a74-e339-4deb-b947-5fbc702308e3","Type":"ContainerDied","Data":"af2cea4ea22d6e87adf933cac365f472ce32fba678989540bfe84081e26feef3"}
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.859820 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5smsq" event={"ID":"e6530a74-e339-4deb-b947-5fbc702308e3","Type":"ContainerDied","Data":"f52e762368a8a23a5df924cbb824a8d4da39e2a9da2122c0356291a54226dabe"}
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.859839 4624 scope.go:117] "RemoveContainer" containerID="af2cea4ea22d6e87adf933cac365f472ce32fba678989540bfe84081e26feef3"
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.859865 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5smsq"
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.895731 4624 scope.go:117] "RemoveContainer" containerID="6fccef3bb3b9acd8d9ba477e719a16e09ccfa8abeab2edc23f65b625dab64ea0"
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.896939 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5smsq"]
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.907093 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5smsq"]
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.937379 4624 scope.go:117] "RemoveContainer" containerID="0c24c8d1dcef566c6e6583280698dc3a361be16310ec4ea6592cbab8f20e70c5"
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.971998 4624 scope.go:117] "RemoveContainer" containerID="af2cea4ea22d6e87adf933cac365f472ce32fba678989540bfe84081e26feef3"
Oct 08 16:48:54 crc kubenswrapper[4624]: E1008 16:48:54.972601 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af2cea4ea22d6e87adf933cac365f472ce32fba678989540bfe84081e26feef3\": container with ID starting with af2cea4ea22d6e87adf933cac365f472ce32fba678989540bfe84081e26feef3 not found: ID does not exist" containerID="af2cea4ea22d6e87adf933cac365f472ce32fba678989540bfe84081e26feef3"
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.972665 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af2cea4ea22d6e87adf933cac365f472ce32fba678989540bfe84081e26feef3"} err="failed to get container status \"af2cea4ea22d6e87adf933cac365f472ce32fba678989540bfe84081e26feef3\": rpc error: code = NotFound desc = could not find container \"af2cea4ea22d6e87adf933cac365f472ce32fba678989540bfe84081e26feef3\": container with ID starting with af2cea4ea22d6e87adf933cac365f472ce32fba678989540bfe84081e26feef3 not found: ID does not exist"
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.972691 4624 scope.go:117] "RemoveContainer" containerID="6fccef3bb3b9acd8d9ba477e719a16e09ccfa8abeab2edc23f65b625dab64ea0"
Oct 08 16:48:54 crc kubenswrapper[4624]: E1008 16:48:54.973126 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fccef3bb3b9acd8d9ba477e719a16e09ccfa8abeab2edc23f65b625dab64ea0\": container with ID starting with 6fccef3bb3b9acd8d9ba477e719a16e09ccfa8abeab2edc23f65b625dab64ea0 not found: ID does not exist" containerID="6fccef3bb3b9acd8d9ba477e719a16e09ccfa8abeab2edc23f65b625dab64ea0"
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.973183 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fccef3bb3b9acd8d9ba477e719a16e09ccfa8abeab2edc23f65b625dab64ea0"} err="failed to get container status \"6fccef3bb3b9acd8d9ba477e719a16e09ccfa8abeab2edc23f65b625dab64ea0\": rpc error: code = NotFound desc = could not find container \"6fccef3bb3b9acd8d9ba477e719a16e09ccfa8abeab2edc23f65b625dab64ea0\": container with ID starting with 6fccef3bb3b9acd8d9ba477e719a16e09ccfa8abeab2edc23f65b625dab64ea0 not found: ID does not exist"
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.973219 4624 scope.go:117] "RemoveContainer" containerID="0c24c8d1dcef566c6e6583280698dc3a361be16310ec4ea6592cbab8f20e70c5"
Oct 08 16:48:54 crc kubenswrapper[4624]: E1008 16:48:54.973615 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c24c8d1dcef566c6e6583280698dc3a361be16310ec4ea6592cbab8f20e70c5\": container with ID starting with 0c24c8d1dcef566c6e6583280698dc3a361be16310ec4ea6592cbab8f20e70c5 not found: ID does not exist" containerID="0c24c8d1dcef566c6e6583280698dc3a361be16310ec4ea6592cbab8f20e70c5"
Oct 08 16:48:54 crc kubenswrapper[4624]: I1008 16:48:54.973675 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c24c8d1dcef566c6e6583280698dc3a361be16310ec4ea6592cbab8f20e70c5"} err="failed to get container status \"0c24c8d1dcef566c6e6583280698dc3a361be16310ec4ea6592cbab8f20e70c5\": rpc error: code = NotFound desc = could not find container \"0c24c8d1dcef566c6e6583280698dc3a361be16310ec4ea6592cbab8f20e70c5\": container with ID starting with 0c24c8d1dcef566c6e6583280698dc3a361be16310ec4ea6592cbab8f20e70c5 not found: ID does not exist"
Oct 08 16:48:55 crc kubenswrapper[4624]: I1008 16:48:55.479811 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6530a74-e339-4deb-b947-5fbc702308e3" path="/var/lib/kubelet/pods/e6530a74-e339-4deb-b947-5fbc702308e3/volumes"
Oct 08 16:49:05 crc kubenswrapper[4624]: I1008 16:49:05.473967 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f"
Oct 08 16:49:05 crc kubenswrapper[4624]: E1008 16:49:05.475977 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:49:19 crc kubenswrapper[4624]: I1008 16:49:19.466478 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f"
Oct 08 16:49:19 crc kubenswrapper[4624]: E1008 16:49:19.467943 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:49:33 crc kubenswrapper[4624]: I1008 16:49:33.466239 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f"
Oct 08 16:49:34 crc kubenswrapper[4624]: I1008 16:49:34.241734 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"50ee5258160198d5016c84c2f3e1f3e20456097dc0bfff45720a9b16ffefe68b"}
Oct 08 16:52:00 crc kubenswrapper[4624]: I1008 16:52:00.077178 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 16:52:00 crc kubenswrapper[4624]: I1008 16:52:00.077813 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 16:52:30 crc kubenswrapper[4624]: I1008 16:52:30.076401 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 16:52:30 crc kubenswrapper[4624]: I1008 16:52:30.077140 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 16:53:00 crc kubenswrapper[4624]: I1008 16:53:00.076506 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 16:53:00 crc kubenswrapper[4624]: I1008 16:53:00.077264 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 16:53:00 crc kubenswrapper[4624]: I1008 16:53:00.077346 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8"
Oct 08 16:53:00 crc kubenswrapper[4624]: I1008 16:53:00.078529 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50ee5258160198d5016c84c2f3e1f3e20456097dc0bfff45720a9b16ffefe68b"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 08 16:53:00 crc kubenswrapper[4624]: I1008 16:53:00.078632 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://50ee5258160198d5016c84c2f3e1f3e20456097dc0bfff45720a9b16ffefe68b" gracePeriod=600
Oct 08 16:53:00 crc kubenswrapper[4624]: I1008 16:53:00.319130 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="50ee5258160198d5016c84c2f3e1f3e20456097dc0bfff45720a9b16ffefe68b" exitCode=0
Oct 08 16:53:00 crc kubenswrapper[4624]: I1008 16:53:00.319274 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"50ee5258160198d5016c84c2f3e1f3e20456097dc0bfff45720a9b16ffefe68b"}
Oct 08 16:53:00 crc kubenswrapper[4624]: I1008 16:53:00.319713 4624 scope.go:117] "RemoveContainer" containerID="2e42cc68a592312703b47256f895ac6210921f467c04edaca19035c8395da86f"
Oct 08 16:53:01 crc kubenswrapper[4624]: I1008 16:53:01.335410 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f"}
Oct 08 16:53:01 crc kubenswrapper[4624]: I1008 16:53:01.990877 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-59d86bf959-vq2ld" podUID="d7c08e42-5aca-4394-952c-5649ba096a8f" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.497422 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-857d49cc6c-6fh82"]
Oct 08 16:54:46 crc kubenswrapper[4624]: E1008 16:54:46.498366 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6530a74-e339-4deb-b947-5fbc702308e3" containerName="registry-server"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.498382 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6530a74-e339-4deb-b947-5fbc702308e3" containerName="registry-server"
Oct 08 16:54:46 crc kubenswrapper[4624]: E1008 16:54:46.498415 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6530a74-e339-4deb-b947-5fbc702308e3" containerName="extract-content"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.498421 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6530a74-e339-4deb-b947-5fbc702308e3" containerName="extract-content"
Oct 08 16:54:46 crc kubenswrapper[4624]: E1008 16:54:46.498428 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6530a74-e339-4deb-b947-5fbc702308e3" containerName="extract-utilities"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.498434 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6530a74-e339-4deb-b947-5fbc702308e3" containerName="extract-utilities"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.498661 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6530a74-e339-4deb-b947-5fbc702308e3" containerName="registry-server"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.499769 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.520808 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba09f7ef-9520-4917-9fb7-642e8fb51be1-config\") pod \"neutron-857d49cc6c-6fh82\" (UID: \"ba09f7ef-9520-4917-9fb7-642e8fb51be1\") " pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.522039 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba09f7ef-9520-4917-9fb7-642e8fb51be1-ovndb-tls-certs\") pod \"neutron-857d49cc6c-6fh82\" (UID: \"ba09f7ef-9520-4917-9fb7-642e8fb51be1\") " pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.522223 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spzk6\" (UniqueName: \"kubernetes.io/projected/ba09f7ef-9520-4917-9fb7-642e8fb51be1-kube-api-access-spzk6\") pod \"neutron-857d49cc6c-6fh82\" (UID: \"ba09f7ef-9520-4917-9fb7-642e8fb51be1\") " pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.522458 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba09f7ef-9520-4917-9fb7-642e8fb51be1-public-tls-certs\") pod \"neutron-857d49cc6c-6fh82\" (UID: \"ba09f7ef-9520-4917-9fb7-642e8fb51be1\") " pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.522574 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba09f7ef-9520-4917-9fb7-642e8fb51be1-internal-tls-certs\") pod \"neutron-857d49cc6c-6fh82\" (UID: \"ba09f7ef-9520-4917-9fb7-642e8fb51be1\") " pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.522726 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba09f7ef-9520-4917-9fb7-642e8fb51be1-httpd-config\") pod \"neutron-857d49cc6c-6fh82\" (UID: \"ba09f7ef-9520-4917-9fb7-642e8fb51be1\") " pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.522753 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba09f7ef-9520-4917-9fb7-642e8fb51be1-combined-ca-bundle\") pod \"neutron-857d49cc6c-6fh82\" (UID: \"ba09f7ef-9520-4917-9fb7-642e8fb51be1\") " pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.594333 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-857d49cc6c-6fh82"]
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.626283 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba09f7ef-9520-4917-9fb7-642e8fb51be1-public-tls-certs\") pod \"neutron-857d49cc6c-6fh82\" (UID: \"ba09f7ef-9520-4917-9fb7-642e8fb51be1\") " pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.626537 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba09f7ef-9520-4917-9fb7-642e8fb51be1-internal-tls-certs\") pod \"neutron-857d49cc6c-6fh82\" (UID: \"ba09f7ef-9520-4917-9fb7-642e8fb51be1\") " pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.626647 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba09f7ef-9520-4917-9fb7-642e8fb51be1-httpd-config\") pod \"neutron-857d49cc6c-6fh82\" (UID: \"ba09f7ef-9520-4917-9fb7-642e8fb51be1\") " pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.626732 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba09f7ef-9520-4917-9fb7-642e8fb51be1-combined-ca-bundle\") pod \"neutron-857d49cc6c-6fh82\" (UID: \"ba09f7ef-9520-4917-9fb7-642e8fb51be1\") " pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.626863 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba09f7ef-9520-4917-9fb7-642e8fb51be1-config\") pod \"neutron-857d49cc6c-6fh82\" (UID: \"ba09f7ef-9520-4917-9fb7-642e8fb51be1\") " pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.626945 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba09f7ef-9520-4917-9fb7-642e8fb51be1-ovndb-tls-certs\") pod \"neutron-857d49cc6c-6fh82\" (UID: \"ba09f7ef-9520-4917-9fb7-642e8fb51be1\") " pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.627042 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spzk6\" (UniqueName: \"kubernetes.io/projected/ba09f7ef-9520-4917-9fb7-642e8fb51be1-kube-api-access-spzk6\") pod \"neutron-857d49cc6c-6fh82\" (UID: \"ba09f7ef-9520-4917-9fb7-642e8fb51be1\") " pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.634613 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba09f7ef-9520-4917-9fb7-642e8fb51be1-public-tls-certs\") pod \"neutron-857d49cc6c-6fh82\" (UID: \"ba09f7ef-9520-4917-9fb7-642e8fb51be1\") " pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.634648 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba09f7ef-9520-4917-9fb7-642e8fb51be1-combined-ca-bundle\") pod \"neutron-857d49cc6c-6fh82\" (UID: \"ba09f7ef-9520-4917-9fb7-642e8fb51be1\") " pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.638845 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba09f7ef-9520-4917-9fb7-642e8fb51be1-ovndb-tls-certs\") pod \"neutron-857d49cc6c-6fh82\" (UID: \"ba09f7ef-9520-4917-9fb7-642e8fb51be1\") " pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.640521 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba09f7ef-9520-4917-9fb7-642e8fb51be1-internal-tls-certs\") pod \"neutron-857d49cc6c-6fh82\" (UID: \"ba09f7ef-9520-4917-9fb7-642e8fb51be1\") " pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.640652 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba09f7ef-9520-4917-9fb7-642e8fb51be1-config\") pod \"neutron-857d49cc6c-6fh82\" (UID: \"ba09f7ef-9520-4917-9fb7-642e8fb51be1\") " pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.644255 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba09f7ef-9520-4917-9fb7-642e8fb51be1-httpd-config\") pod \"neutron-857d49cc6c-6fh82\" (UID: \"ba09f7ef-9520-4917-9fb7-642e8fb51be1\") " pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.655252 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spzk6\" (UniqueName: \"kubernetes.io/projected/ba09f7ef-9520-4917-9fb7-642e8fb51be1-kube-api-access-spzk6\") pod \"neutron-857d49cc6c-6fh82\" (UID: \"ba09f7ef-9520-4917-9fb7-642e8fb51be1\") " pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:46 crc kubenswrapper[4624]: I1008 16:54:46.831381 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-857d49cc6c-6fh82"
Oct 08 16:54:47 crc kubenswrapper[4624]: I1008 16:54:47.910484 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-857d49cc6c-6fh82"]
Oct 08 16:54:48 crc kubenswrapper[4624]: I1008 16:54:48.240027 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k4qg2"]
Oct 08 16:54:48 crc kubenswrapper[4624]: I1008 16:54:48.242670 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k4qg2"
Oct 08 16:54:48 crc kubenswrapper[4624]: I1008 16:54:48.274655 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k4qg2"]
Oct 08 16:54:48 crc kubenswrapper[4624]: I1008 16:54:48.369138 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9367857-bea8-4ea3-82e4-f8be6b794087-utilities\") pod \"community-operators-k4qg2\" (UID: \"d9367857-bea8-4ea3-82e4-f8be6b794087\") " pod="openshift-marketplace/community-operators-k4qg2"
Oct 08 16:54:48 crc kubenswrapper[4624]: I1008 16:54:48.369235 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9367857-bea8-4ea3-82e4-f8be6b794087-catalog-content\") pod \"community-operators-k4qg2\" (UID: \"d9367857-bea8-4ea3-82e4-f8be6b794087\") " pod="openshift-marketplace/community-operators-k4qg2"
Oct 08 16:54:48 crc kubenswrapper[4624]: I1008 16:54:48.369346 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvgrb\" (UniqueName: \"kubernetes.io/projected/d9367857-bea8-4ea3-82e4-f8be6b794087-kube-api-access-gvgrb\") pod \"community-operators-k4qg2\" (UID: \"d9367857-bea8-4ea3-82e4-f8be6b794087\") " pod="openshift-marketplace/community-operators-k4qg2"
Oct 08 16:54:48 crc kubenswrapper[4624]: I1008 16:54:48.377196 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-857d49cc6c-6fh82" event={"ID":"ba09f7ef-9520-4917-9fb7-642e8fb51be1","Type":"ContainerStarted","Data":"afb67c68331d4bd9535d1f035a37b54637e8606c85b086c20d0f5506cb4a3e5b"}
Oct 08 16:54:48 crc kubenswrapper[4624]: I1008 16:54:48.471235 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvgrb\" (UniqueName: \"kubernetes.io/projected/d9367857-bea8-4ea3-82e4-f8be6b794087-kube-api-access-gvgrb\") pod \"community-operators-k4qg2\" (UID: \"d9367857-bea8-4ea3-82e4-f8be6b794087\") " pod="openshift-marketplace/community-operators-k4qg2"
Oct 08 16:54:48 crc kubenswrapper[4624]: I1008 16:54:48.471356 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9367857-bea8-4ea3-82e4-f8be6b794087-utilities\") pod \"community-operators-k4qg2\" (UID: \"d9367857-bea8-4ea3-82e4-f8be6b794087\") " pod="openshift-marketplace/community-operators-k4qg2"
Oct 08 16:54:48 crc kubenswrapper[4624]: I1008 16:54:48.471426 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9367857-bea8-4ea3-82e4-f8be6b794087-catalog-content\") pod \"community-operators-k4qg2\" (UID: \"d9367857-bea8-4ea3-82e4-f8be6b794087\") " pod="openshift-marketplace/community-operators-k4qg2"
Oct 08 16:54:48 crc kubenswrapper[4624]: I1008 16:54:48.471949 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9367857-bea8-4ea3-82e4-f8be6b794087-catalog-content\") pod \"community-operators-k4qg2\" (UID: \"d9367857-bea8-4ea3-82e4-f8be6b794087\") " pod="openshift-marketplace/community-operators-k4qg2"
Oct 08 16:54:48 crc kubenswrapper[4624]: I1008 16:54:48.472612 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName:
\"kubernetes.io/empty-dir/d9367857-bea8-4ea3-82e4-f8be6b794087-utilities\") pod \"community-operators-k4qg2\" (UID: \"d9367857-bea8-4ea3-82e4-f8be6b794087\") " pod="openshift-marketplace/community-operators-k4qg2" Oct 08 16:54:48 crc kubenswrapper[4624]: I1008 16:54:48.488437 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvgrb\" (UniqueName: \"kubernetes.io/projected/d9367857-bea8-4ea3-82e4-f8be6b794087-kube-api-access-gvgrb\") pod \"community-operators-k4qg2\" (UID: \"d9367857-bea8-4ea3-82e4-f8be6b794087\") " pod="openshift-marketplace/community-operators-k4qg2" Oct 08 16:54:48 crc kubenswrapper[4624]: I1008 16:54:48.566326 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k4qg2" Oct 08 16:54:49 crc kubenswrapper[4624]: I1008 16:54:49.302065 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k4qg2"] Oct 08 16:54:49 crc kubenswrapper[4624]: W1008 16:54:49.322496 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9367857_bea8_4ea3_82e4_f8be6b794087.slice/crio-1866c8b6b3e94042dbef34a457dfc02ab5613fa73b7a808a569462956fd4b2f2 WatchSource:0}: Error finding container 1866c8b6b3e94042dbef34a457dfc02ab5613fa73b7a808a569462956fd4b2f2: Status 404 returned error can't find the container with id 1866c8b6b3e94042dbef34a457dfc02ab5613fa73b7a808a569462956fd4b2f2 Oct 08 16:54:49 crc kubenswrapper[4624]: I1008 16:54:49.419293 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-857d49cc6c-6fh82" event={"ID":"ba09f7ef-9520-4917-9fb7-642e8fb51be1","Type":"ContainerStarted","Data":"70893c14d9ec06c3c81b91d7e1c698d0a260a8beda0bd0f8a9a8c400094d92ea"} Oct 08 16:54:49 crc kubenswrapper[4624]: I1008 16:54:49.422150 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-857d49cc6c-6fh82" event={"ID":"ba09f7ef-9520-4917-9fb7-642e8fb51be1","Type":"ContainerStarted","Data":"ef043ce3430a2d871f4d36c5b05a455f2897ce6431fe88cd302f4a68809d73dc"} Oct 08 16:54:49 crc kubenswrapper[4624]: I1008 16:54:49.424047 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-857d49cc6c-6fh82" Oct 08 16:54:49 crc kubenswrapper[4624]: I1008 16:54:49.426442 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4qg2" event={"ID":"d9367857-bea8-4ea3-82e4-f8be6b794087","Type":"ContainerStarted","Data":"1866c8b6b3e94042dbef34a457dfc02ab5613fa73b7a808a569462956fd4b2f2"} Oct 08 16:54:49 crc kubenswrapper[4624]: I1008 16:54:49.466344 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-857d49cc6c-6fh82" podStartSLOduration=3.466310162 podStartE2EDuration="3.466310162s" podCreationTimestamp="2025-10-08 16:54:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 16:54:49.456991913 +0000 UTC m=+9114.607927000" watchObservedRunningTime="2025-10-08 16:54:49.466310162 +0000 UTC m=+9114.617245239" Oct 08 16:54:50 crc kubenswrapper[4624]: I1008 16:54:50.437490 4624 generic.go:334] "Generic (PLEG): container finished" podID="d9367857-bea8-4ea3-82e4-f8be6b794087" containerID="f0bd8f24a20beeec9c99d30c12074248f4dfa2910e0ee491e46994de668017b2" exitCode=0 Oct 08 16:54:50 crc kubenswrapper[4624]: I1008 16:54:50.437588 4624 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4qg2" event={"ID":"d9367857-bea8-4ea3-82e4-f8be6b794087","Type":"ContainerDied","Data":"f0bd8f24a20beeec9c99d30c12074248f4dfa2910e0ee491e46994de668017b2"} Oct 08 16:54:50 crc kubenswrapper[4624]: I1008 16:54:50.440615 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 16:54:52 crc kubenswrapper[4624]: I1008 16:54:52.464533 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4qg2" event={"ID":"d9367857-bea8-4ea3-82e4-f8be6b794087","Type":"ContainerStarted","Data":"947f57e94275f27cd26c2faef8125e880de59a99fe24feddc784bdaa985f7e59"} Oct 08 16:54:53 crc kubenswrapper[4624]: I1008 16:54:53.480040 4624 generic.go:334] "Generic (PLEG): container finished" podID="d9367857-bea8-4ea3-82e4-f8be6b794087" containerID="947f57e94275f27cd26c2faef8125e880de59a99fe24feddc784bdaa985f7e59" exitCode=0 Oct 08 16:54:53 crc kubenswrapper[4624]: I1008 16:54:53.482941 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4qg2" event={"ID":"d9367857-bea8-4ea3-82e4-f8be6b794087","Type":"ContainerDied","Data":"947f57e94275f27cd26c2faef8125e880de59a99fe24feddc784bdaa985f7e59"} Oct 08 16:54:54 crc kubenswrapper[4624]: I1008 16:54:54.492919 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4qg2" event={"ID":"d9367857-bea8-4ea3-82e4-f8be6b794087","Type":"ContainerStarted","Data":"8ebb3f9e2de6c38c5c6538020f6c4700fd32128b8968df62135597b69daac859"} Oct 08 16:54:54 crc kubenswrapper[4624]: I1008 16:54:54.511227 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k4qg2" podStartSLOduration=3.049714567 podStartE2EDuration="6.511208571s" podCreationTimestamp="2025-10-08 16:54:48 +0000 UTC" firstStartedPulling="2025-10-08 16:54:50.439804491 +0000 UTC m=+9115.590739568" lastFinishedPulling="2025-10-08 16:54:53.901298485 +0000 UTC m=+9119.052233572" observedRunningTime="2025-10-08 16:54:54.510340019 +0000 UTC m=+9119.661275116" watchObservedRunningTime="2025-10-08 16:54:54.511208571 +0000 UTC m=+9119.662143638" Oct 08 16:54:58 crc kubenswrapper[4624]: I1008 16:54:58.567219 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k4qg2" Oct 08 16:54:58 crc kubenswrapper[4624]: I1008 16:54:58.567764 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k4qg2" Oct 08 16:54:59 crc kubenswrapper[4624]: I1008 16:54:59.627558 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-k4qg2" podUID="d9367857-bea8-4ea3-82e4-f8be6b794087" containerName="registry-server" probeResult="failure" output=< Oct 08 16:54:59 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 16:54:59 crc kubenswrapper[4624]: > Oct 08 16:55:00 crc kubenswrapper[4624]: I1008 16:55:00.076282 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 16:55:00 crc kubenswrapper[4624]: I1008 16:55:00.076847 4624 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 16:55:08 crc kubenswrapper[4624]: I1008 16:55:08.631315 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k4qg2" Oct 08 16:55:08 crc kubenswrapper[4624]: I1008 16:55:08.695698 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k4qg2" Oct 08 16:55:08 crc kubenswrapper[4624]: I1008 16:55:08.866239 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k4qg2"] Oct 08 16:55:10 crc kubenswrapper[4624]: I1008 16:55:10.643833 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k4qg2" podUID="d9367857-bea8-4ea3-82e4-f8be6b794087" containerName="registry-server" containerID="cri-o://8ebb3f9e2de6c38c5c6538020f6c4700fd32128b8968df62135597b69daac859" gracePeriod=2 Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.486957 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k4qg2" Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.549262 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvgrb\" (UniqueName: \"kubernetes.io/projected/d9367857-bea8-4ea3-82e4-f8be6b794087-kube-api-access-gvgrb\") pod \"d9367857-bea8-4ea3-82e4-f8be6b794087\" (UID: \"d9367857-bea8-4ea3-82e4-f8be6b794087\") " Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.549534 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9367857-bea8-4ea3-82e4-f8be6b794087-utilities\") pod \"d9367857-bea8-4ea3-82e4-f8be6b794087\" (UID: \"d9367857-bea8-4ea3-82e4-f8be6b794087\") " Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.549610 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9367857-bea8-4ea3-82e4-f8be6b794087-catalog-content\") pod \"d9367857-bea8-4ea3-82e4-f8be6b794087\" (UID: \"d9367857-bea8-4ea3-82e4-f8be6b794087\") " Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.551339 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9367857-bea8-4ea3-82e4-f8be6b794087-utilities" (OuterVolumeSpecName: "utilities") pod "d9367857-bea8-4ea3-82e4-f8be6b794087" (UID: "d9367857-bea8-4ea3-82e4-f8be6b794087"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.605423 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9367857-bea8-4ea3-82e4-f8be6b794087-kube-api-access-gvgrb" (OuterVolumeSpecName: "kube-api-access-gvgrb") pod "d9367857-bea8-4ea3-82e4-f8be6b794087" (UID: "d9367857-bea8-4ea3-82e4-f8be6b794087"). InnerVolumeSpecName "kube-api-access-gvgrb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.624865 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9367857-bea8-4ea3-82e4-f8be6b794087-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9367857-bea8-4ea3-82e4-f8be6b794087" (UID: "d9367857-bea8-4ea3-82e4-f8be6b794087"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.653801 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9367857-bea8-4ea3-82e4-f8be6b794087-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.653839 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9367857-bea8-4ea3-82e4-f8be6b794087-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.653854 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvgrb\" (UniqueName: \"kubernetes.io/projected/d9367857-bea8-4ea3-82e4-f8be6b794087-kube-api-access-gvgrb\") on node \"crc\" DevicePath \"\"" Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.659974 4624 generic.go:334] "Generic (PLEG): container finished" podID="d9367857-bea8-4ea3-82e4-f8be6b794087" containerID="8ebb3f9e2de6c38c5c6538020f6c4700fd32128b8968df62135597b69daac859" exitCode=0 Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.660022 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4qg2" event={"ID":"d9367857-bea8-4ea3-82e4-f8be6b794087","Type":"ContainerDied","Data":"8ebb3f9e2de6c38c5c6538020f6c4700fd32128b8968df62135597b69daac859"} Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.660056 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4qg2" event={"ID":"d9367857-bea8-4ea3-82e4-f8be6b794087","Type":"ContainerDied","Data":"1866c8b6b3e94042dbef34a457dfc02ab5613fa73b7a808a569462956fd4b2f2"} Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.660082 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k4qg2" Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.661085 4624 scope.go:117] "RemoveContainer" containerID="8ebb3f9e2de6c38c5c6538020f6c4700fd32128b8968df62135597b69daac859" Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.699143 4624 scope.go:117] "RemoveContainer" containerID="947f57e94275f27cd26c2faef8125e880de59a99fe24feddc784bdaa985f7e59" Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.700312 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k4qg2"] Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.717340 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k4qg2"] Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.722671 4624 scope.go:117] "RemoveContainer" containerID="f0bd8f24a20beeec9c99d30c12074248f4dfa2910e0ee491e46994de668017b2" Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.769875 4624 scope.go:117] "RemoveContainer" containerID="8ebb3f9e2de6c38c5c6538020f6c4700fd32128b8968df62135597b69daac859" Oct 08 16:55:11 crc kubenswrapper[4624]: E1008 16:55:11.770547 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ebb3f9e2de6c38c5c6538020f6c4700fd32128b8968df62135597b69daac859\": container with ID starting with 8ebb3f9e2de6c38c5c6538020f6c4700fd32128b8968df62135597b69daac859 not found: ID does not exist" containerID="8ebb3f9e2de6c38c5c6538020f6c4700fd32128b8968df62135597b69daac859" Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.770588 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ebb3f9e2de6c38c5c6538020f6c4700fd32128b8968df62135597b69daac859"} err="failed to get container status \"8ebb3f9e2de6c38c5c6538020f6c4700fd32128b8968df62135597b69daac859\": rpc error: code = NotFound desc = could not find container \"8ebb3f9e2de6c38c5c6538020f6c4700fd32128b8968df62135597b69daac859\": container with ID starting with 8ebb3f9e2de6c38c5c6538020f6c4700fd32128b8968df62135597b69daac859 not found: ID does not exist" Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.770618 4624 scope.go:117] "RemoveContainer" containerID="947f57e94275f27cd26c2faef8125e880de59a99fe24feddc784bdaa985f7e59" Oct 08 16:55:11 crc kubenswrapper[4624]: E1008 16:55:11.771085 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"947f57e94275f27cd26c2faef8125e880de59a99fe24feddc784bdaa985f7e59\": container with ID starting with 947f57e94275f27cd26c2faef8125e880de59a99fe24feddc784bdaa985f7e59 not found: ID does not exist" containerID="947f57e94275f27cd26c2faef8125e880de59a99fe24feddc784bdaa985f7e59" Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.771119 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"947f57e94275f27cd26c2faef8125e880de59a99fe24feddc784bdaa985f7e59"} err="failed to get container status \"947f57e94275f27cd26c2faef8125e880de59a99fe24feddc784bdaa985f7e59\": rpc error: code = NotFound desc = could not find container \"947f57e94275f27cd26c2faef8125e880de59a99fe24feddc784bdaa985f7e59\": container with ID starting with 947f57e94275f27cd26c2faef8125e880de59a99fe24feddc784bdaa985f7e59 not found: ID does not exist" Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.771138 4624 scope.go:117] "RemoveContainer" 
containerID="f0bd8f24a20beeec9c99d30c12074248f4dfa2910e0ee491e46994de668017b2" Oct 08 16:55:11 crc kubenswrapper[4624]: E1008 16:55:11.771477 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0bd8f24a20beeec9c99d30c12074248f4dfa2910e0ee491e46994de668017b2\": container with ID starting with f0bd8f24a20beeec9c99d30c12074248f4dfa2910e0ee491e46994de668017b2 not found: ID does not exist" containerID="f0bd8f24a20beeec9c99d30c12074248f4dfa2910e0ee491e46994de668017b2" Oct 08 16:55:11 crc kubenswrapper[4624]: I1008 16:55:11.771576 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0bd8f24a20beeec9c99d30c12074248f4dfa2910e0ee491e46994de668017b2"} err="failed to get container status \"f0bd8f24a20beeec9c99d30c12074248f4dfa2910e0ee491e46994de668017b2\": rpc error: code = NotFound desc = could not find container \"f0bd8f24a20beeec9c99d30c12074248f4dfa2910e0ee491e46994de668017b2\": container with ID starting with f0bd8f24a20beeec9c99d30c12074248f4dfa2910e0ee491e46994de668017b2 not found: ID does not exist" Oct 08 16:55:13 crc kubenswrapper[4624]: I1008 16:55:13.476454 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9367857-bea8-4ea3-82e4-f8be6b794087" path="/var/lib/kubelet/pods/d9367857-bea8-4ea3-82e4-f8be6b794087/volumes" Oct 08 16:55:16 crc kubenswrapper[4624]: I1008 16:55:16.846104 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-857d49cc6c-6fh82" Oct 08 16:55:16 crc kubenswrapper[4624]: I1008 16:55:16.923227 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d887b9d4f-whfwt"] Oct 08 16:55:16 crc kubenswrapper[4624]: I1008 16:55:16.923536 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-d887b9d4f-whfwt" podUID="94badb37-ba00-48fa-a728-26d834b5409c" containerName="neutron-api" containerID="cri-o://c508d08262a4c5eff925233fbeff6b8b13fb443481f2708592247698b56afd70" gracePeriod=30 Oct 08 16:55:16 crc kubenswrapper[4624]: I1008 16:55:16.924150 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-d887b9d4f-whfwt" podUID="94badb37-ba00-48fa-a728-26d834b5409c" containerName="neutron-httpd" containerID="cri-o://d27619c3fec1d7e9dd07d966cc9f81b5d6f7e38ac246e8570bd520a498d1b350" gracePeriod=30 Oct 08 16:55:17 crc kubenswrapper[4624]: I1008 16:55:17.749173 4624 generic.go:334] "Generic (PLEG): container finished" podID="94badb37-ba00-48fa-a728-26d834b5409c" containerID="d27619c3fec1d7e9dd07d966cc9f81b5d6f7e38ac246e8570bd520a498d1b350" exitCode=0 Oct 08 16:55:17 crc kubenswrapper[4624]: I1008 16:55:17.749220 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d887b9d4f-whfwt" event={"ID":"94badb37-ba00-48fa-a728-26d834b5409c","Type":"ContainerDied","Data":"d27619c3fec1d7e9dd07d966cc9f81b5d6f7e38ac246e8570bd520a498d1b350"} Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.736654 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.773112 4624 generic.go:334] "Generic (PLEG): container finished" podID="94badb37-ba00-48fa-a728-26d834b5409c" containerID="c508d08262a4c5eff925233fbeff6b8b13fb443481f2708592247698b56afd70" exitCode=0 Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.773163 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d887b9d4f-whfwt" event={"ID":"94badb37-ba00-48fa-a728-26d834b5409c","Type":"ContainerDied","Data":"c508d08262a4c5eff925233fbeff6b8b13fb443481f2708592247698b56afd70"} Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.773186 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d887b9d4f-whfwt" Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.773195 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d887b9d4f-whfwt" event={"ID":"94badb37-ba00-48fa-a728-26d834b5409c","Type":"ContainerDied","Data":"4331dba694671eb7b541c261c4f6e28127bd08b530ff4cffdf901a4707d05421"} Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.773231 4624 scope.go:117] "RemoveContainer" containerID="d27619c3fec1d7e9dd07d966cc9f81b5d6f7e38ac246e8570bd520a498d1b350" Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.800956 4624 scope.go:117] "RemoveContainer" containerID="c508d08262a4c5eff925233fbeff6b8b13fb443481f2708592247698b56afd70" Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.826898 4624 scope.go:117] "RemoveContainer" containerID="d27619c3fec1d7e9dd07d966cc9f81b5d6f7e38ac246e8570bd520a498d1b350" Oct 08 16:55:19 crc kubenswrapper[4624]: E1008 16:55:19.827387 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d27619c3fec1d7e9dd07d966cc9f81b5d6f7e38ac246e8570bd520a498d1b350\": container with ID starting with d27619c3fec1d7e9dd07d966cc9f81b5d6f7e38ac246e8570bd520a498d1b350 not found: ID does not exist" containerID="d27619c3fec1d7e9dd07d966cc9f81b5d6f7e38ac246e8570bd520a498d1b350" Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.827455 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d27619c3fec1d7e9dd07d966cc9f81b5d6f7e38ac246e8570bd520a498d1b350"} err="failed to get container status \"d27619c3fec1d7e9dd07d966cc9f81b5d6f7e38ac246e8570bd520a498d1b350\": rpc error: code = NotFound desc = could not find container \"d27619c3fec1d7e9dd07d966cc9f81b5d6f7e38ac246e8570bd520a498d1b350\": container with ID starting with d27619c3fec1d7e9dd07d966cc9f81b5d6f7e38ac246e8570bd520a498d1b350 not found: ID does not exist" Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.827491 4624 scope.go:117] "RemoveContainer" containerID="c508d08262a4c5eff925233fbeff6b8b13fb443481f2708592247698b56afd70" Oct 08 16:55:19 crc kubenswrapper[4624]: E1008 16:55:19.828083 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c508d08262a4c5eff925233fbeff6b8b13fb443481f2708592247698b56afd70\": container with ID starting with c508d08262a4c5eff925233fbeff6b8b13fb443481f2708592247698b56afd70 not found: ID does not exist" containerID="c508d08262a4c5eff925233fbeff6b8b13fb443481f2708592247698b56afd70" Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.828120 4624 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c508d08262a4c5eff925233fbeff6b8b13fb443481f2708592247698b56afd70"} err="failed to get container status \"c508d08262a4c5eff925233fbeff6b8b13fb443481f2708592247698b56afd70\": rpc error: code = NotFound desc = could not find container \"c508d08262a4c5eff925233fbeff6b8b13fb443481f2708592247698b56afd70\": container with ID starting with c508d08262a4c5eff925233fbeff6b8b13fb443481f2708592247698b56afd70 not found: ID does not exist" Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.845392 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-config\") pod \"94badb37-ba00-48fa-a728-26d834b5409c\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.845453 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-internal-tls-certs\") pod \"94badb37-ba00-48fa-a728-26d834b5409c\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.845492 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-public-tls-certs\") pod \"94badb37-ba00-48fa-a728-26d834b5409c\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.845546 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-combined-ca-bundle\") pod \"94badb37-ba00-48fa-a728-26d834b5409c\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.845785 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-ovndb-tls-certs\") pod \"94badb37-ba00-48fa-a728-26d834b5409c\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.845906 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-httpd-config\") pod \"94badb37-ba00-48fa-a728-26d834b5409c\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.846011 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2mnf\" (UniqueName: \"kubernetes.io/projected/94badb37-ba00-48fa-a728-26d834b5409c-kube-api-access-m2mnf\") pod \"94badb37-ba00-48fa-a728-26d834b5409c\" (UID: \"94badb37-ba00-48fa-a728-26d834b5409c\") " Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.853388 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "94badb37-ba00-48fa-a728-26d834b5409c" (UID: "94badb37-ba00-48fa-a728-26d834b5409c"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.862209 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94badb37-ba00-48fa-a728-26d834b5409c-kube-api-access-m2mnf" (OuterVolumeSpecName: "kube-api-access-m2mnf") pod "94badb37-ba00-48fa-a728-26d834b5409c" (UID: "94badb37-ba00-48fa-a728-26d834b5409c"). InnerVolumeSpecName "kube-api-access-m2mnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.912724 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-config" (OuterVolumeSpecName: "config") pod "94badb37-ba00-48fa-a728-26d834b5409c" (UID: "94badb37-ba00-48fa-a728-26d834b5409c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.920584 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94badb37-ba00-48fa-a728-26d834b5409c" (UID: "94badb37-ba00-48fa-a728-26d834b5409c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.929358 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "94badb37-ba00-48fa-a728-26d834b5409c" (UID: "94badb37-ba00-48fa-a728-26d834b5409c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.934920 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "94badb37-ba00-48fa-a728-26d834b5409c" (UID: "94badb37-ba00-48fa-a728-26d834b5409c"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.949048 4624 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.949081 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2mnf\" (UniqueName: \"kubernetes.io/projected/94badb37-ba00-48fa-a728-26d834b5409c-kube-api-access-m2mnf\") on node \"crc\" DevicePath \"\"" Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.949094 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-config\") on node \"crc\" DevicePath \"\"" Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.949107 4624 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.949116 4624 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.949127 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 16:55:19 crc kubenswrapper[4624]: I1008 16:55:19.959490 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "94badb37-ba00-48fa-a728-26d834b5409c" (UID: "94badb37-ba00-48fa-a728-26d834b5409c"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 16:55:20 crc kubenswrapper[4624]: I1008 16:55:20.050817 4624 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/94badb37-ba00-48fa-a728-26d834b5409c-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 16:55:20 crc kubenswrapper[4624]: I1008 16:55:20.109136 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d887b9d4f-whfwt"] Oct 08 16:55:20 crc kubenswrapper[4624]: I1008 16:55:20.117077 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d887b9d4f-whfwt"] Oct 08 16:55:21 crc kubenswrapper[4624]: I1008 16:55:21.478542 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94badb37-ba00-48fa-a728-26d834b5409c" path="/var/lib/kubelet/pods/94badb37-ba00-48fa-a728-26d834b5409c/volumes" Oct 08 16:55:30 crc kubenswrapper[4624]: I1008 16:55:30.076276 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 16:55:30 crc kubenswrapper[4624]: I1008 16:55:30.076922 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 16:56:00 crc kubenswrapper[4624]: I1008 16:56:00.077113 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 16:56:00 crc kubenswrapper[4624]: I1008 16:56:00.077707 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 16:56:00 crc kubenswrapper[4624]: I1008 16:56:00.077772 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 16:56:00 crc kubenswrapper[4624]: I1008 16:56:00.078868 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 16:56:00 crc kubenswrapper[4624]: I1008 16:56:00.078936 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" gracePeriod=600 Oct 08 16:56:00 crc kubenswrapper[4624]: E1008 16:56:00.201791 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:56:01 crc kubenswrapper[4624]: I1008 16:56:01.198580 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" exitCode=0 Oct 08 16:56:01 crc kubenswrapper[4624]: I1008 16:56:01.198725 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f"} Oct 08 16:56:01 crc kubenswrapper[4624]: I1008 16:56:01.198965 4624 scope.go:117] "RemoveContainer" containerID="50ee5258160198d5016c84c2f3e1f3e20456097dc0bfff45720a9b16ffefe68b" Oct 08 16:56:01 crc kubenswrapper[4624]: I1008 16:56:01.200075 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 16:56:01 crc kubenswrapper[4624]: E1008 16:56:01.200530 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:56:15 crc kubenswrapper[4624]: I1008 16:56:15.474448 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 16:56:15 crc kubenswrapper[4624]: E1008 16:56:15.476482 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:56:27 crc kubenswrapper[4624]: I1008 16:56:27.465923 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 16:56:27 crc kubenswrapper[4624]: E1008 16:56:27.467118 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:56:39 crc kubenswrapper[4624]: I1008 16:56:39.471843 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 16:56:39 crc kubenswrapper[4624]: E1008 16:56:39.475012 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:56:54 crc kubenswrapper[4624]: I1008 16:56:54.466371 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 16:56:54 crc kubenswrapper[4624]: E1008 16:56:54.468493 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:57:04 crc kubenswrapper[4624]: E1008 16:57:04.309563 4624 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.154:42818->38.102.83.154:39627: write tcp 38.102.83.154:42818->38.102.83.154:39627: write: broken pipe Oct 08 16:57:08 crc kubenswrapper[4624]: I1008 16:57:08.466034 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 16:57:08 crc kubenswrapper[4624]: E1008 16:57:08.466951 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:57:19 crc kubenswrapper[4624]: I1008 16:57:19.466724 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 16:57:19 crc kubenswrapper[4624]: E1008 16:57:19.468938 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:57:33 crc kubenswrapper[4624]: I1008 16:57:33.466047 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 16:57:33 crc kubenswrapper[4624]: E1008 16:57:33.470805 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:57:47 crc kubenswrapper[4624]: I1008 16:57:47.467283 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 16:57:47 crc kubenswrapper[4624]: E1008 16:57:47.469413 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:58:00 crc kubenswrapper[4624]: I1008 16:58:00.465475 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 16:58:00 crc kubenswrapper[4624]: E1008 16:58:00.466513 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:58:01 crc kubenswrapper[4624]: I1008 16:58:01.565592 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tsjjx"] Oct 08 16:58:01 crc kubenswrapper[4624]: E1008 16:58:01.566161 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94badb37-ba00-48fa-a728-26d834b5409c" containerName="neutron-api" Oct 08 16:58:01 crc kubenswrapper[4624]: I1008 16:58:01.566175 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="94badb37-ba00-48fa-a728-26d834b5409c" containerName="neutron-api" Oct 08 16:58:01 crc kubenswrapper[4624]: E1008 16:58:01.566185 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9367857-bea8-4ea3-82e4-f8be6b794087" containerName="extract-utilities" Oct 08 16:58:01 crc kubenswrapper[4624]: I1008 16:58:01.566192 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9367857-bea8-4ea3-82e4-f8be6b794087" containerName="extract-utilities" Oct 08 16:58:01 crc kubenswrapper[4624]: E1008 16:58:01.566206 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9367857-bea8-4ea3-82e4-f8be6b794087" containerName="extract-content" Oct 08 16:58:01 crc kubenswrapper[4624]: I1008 16:58:01.566215 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9367857-bea8-4ea3-82e4-f8be6b794087" containerName="extract-content" Oct 08 16:58:01 crc kubenswrapper[4624]: E1008 16:58:01.566227 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9367857-bea8-4ea3-82e4-f8be6b794087" containerName="registry-server" Oct 08 16:58:01 crc kubenswrapper[4624]: I1008 16:58:01.566235 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9367857-bea8-4ea3-82e4-f8be6b794087" containerName="registry-server" Oct 08 16:58:01 crc kubenswrapper[4624]: E1008 16:58:01.566246 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94badb37-ba00-48fa-a728-26d834b5409c" containerName="neutron-httpd" Oct 08 16:58:01 crc kubenswrapper[4624]: I1008 16:58:01.566254 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="94badb37-ba00-48fa-a728-26d834b5409c" containerName="neutron-httpd" Oct 08 16:58:01 crc kubenswrapper[4624]: I1008 16:58:01.566462 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9367857-bea8-4ea3-82e4-f8be6b794087" containerName="registry-server" Oct 08 16:58:01 crc kubenswrapper[4624]: I1008 16:58:01.566480 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="94badb37-ba00-48fa-a728-26d834b5409c" 
containerName="neutron-httpd" Oct 08 16:58:01 crc kubenswrapper[4624]: I1008 16:58:01.566495 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="94badb37-ba00-48fa-a728-26d834b5409c" containerName="neutron-api" Oct 08 16:58:01 crc kubenswrapper[4624]: I1008 16:58:01.593661 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tsjjx" Oct 08 16:58:01 crc kubenswrapper[4624]: I1008 16:58:01.601564 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tsjjx"] Oct 08 16:58:01 crc kubenswrapper[4624]: I1008 16:58:01.652162 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48cde6b1-19c6-41f2-9af3-c83cf683909f-catalog-content\") pod \"redhat-marketplace-tsjjx\" (UID: \"48cde6b1-19c6-41f2-9af3-c83cf683909f\") " pod="openshift-marketplace/redhat-marketplace-tsjjx" Oct 08 16:58:01 crc kubenswrapper[4624]: I1008 16:58:01.652622 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b464\" (UniqueName: \"kubernetes.io/projected/48cde6b1-19c6-41f2-9af3-c83cf683909f-kube-api-access-5b464\") pod \"redhat-marketplace-tsjjx\" (UID: \"48cde6b1-19c6-41f2-9af3-c83cf683909f\") " pod="openshift-marketplace/redhat-marketplace-tsjjx" Oct 08 16:58:01 crc kubenswrapper[4624]: I1008 16:58:01.652721 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48cde6b1-19c6-41f2-9af3-c83cf683909f-utilities\") pod \"redhat-marketplace-tsjjx\" (UID: \"48cde6b1-19c6-41f2-9af3-c83cf683909f\") " pod="openshift-marketplace/redhat-marketplace-tsjjx" Oct 08 16:58:01 crc kubenswrapper[4624]: I1008 16:58:01.755088 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48cde6b1-19c6-41f2-9af3-c83cf683909f-catalog-content\") pod \"redhat-marketplace-tsjjx\" (UID: \"48cde6b1-19c6-41f2-9af3-c83cf683909f\") " pod="openshift-marketplace/redhat-marketplace-tsjjx" Oct 08 16:58:01 crc kubenswrapper[4624]: I1008 16:58:01.755207 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b464\" (UniqueName: \"kubernetes.io/projected/48cde6b1-19c6-41f2-9af3-c83cf683909f-kube-api-access-5b464\") pod \"redhat-marketplace-tsjjx\" (UID: \"48cde6b1-19c6-41f2-9af3-c83cf683909f\") " pod="openshift-marketplace/redhat-marketplace-tsjjx" Oct 08 16:58:01 crc kubenswrapper[4624]: I1008 16:58:01.755299 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48cde6b1-19c6-41f2-9af3-c83cf683909f-utilities\") pod \"redhat-marketplace-tsjjx\" (UID: \"48cde6b1-19c6-41f2-9af3-c83cf683909f\") " pod="openshift-marketplace/redhat-marketplace-tsjjx" Oct 08 16:58:01 crc kubenswrapper[4624]: I1008 16:58:01.755852 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48cde6b1-19c6-41f2-9af3-c83cf683909f-catalog-content\") pod \"redhat-marketplace-tsjjx\" (UID: \"48cde6b1-19c6-41f2-9af3-c83cf683909f\") " pod="openshift-marketplace/redhat-marketplace-tsjjx" Oct 08 16:58:01 crc kubenswrapper[4624]: I1008 16:58:01.756080 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48cde6b1-19c6-41f2-9af3-c83cf683909f-utilities\") pod \"redhat-marketplace-tsjjx\" (UID: \"48cde6b1-19c6-41f2-9af3-c83cf683909f\") " pod="openshift-marketplace/redhat-marketplace-tsjjx" Oct 08 16:58:01 crc kubenswrapper[4624]: I1008 16:58:01.778221 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b464\" (UniqueName: \"kubernetes.io/projected/48cde6b1-19c6-41f2-9af3-c83cf683909f-kube-api-access-5b464\") pod \"redhat-marketplace-tsjjx\" (UID: \"48cde6b1-19c6-41f2-9af3-c83cf683909f\") " pod="openshift-marketplace/redhat-marketplace-tsjjx" Oct 08 16:58:01 crc kubenswrapper[4624]: I1008 16:58:01.937149 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tsjjx" Oct 08 16:58:02 crc kubenswrapper[4624]: I1008 16:58:02.662482 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tsjjx"] Oct 08 16:58:03 crc kubenswrapper[4624]: I1008 16:58:03.444592 4624 generic.go:334] "Generic (PLEG): container finished" podID="48cde6b1-19c6-41f2-9af3-c83cf683909f" containerID="1b3582496af242a04dc312fa55fb829c4bfe6b1d8d9c5ad05a4803a0364b4648" exitCode=0 Oct 08 16:58:03 crc kubenswrapper[4624]: I1008 16:58:03.444708 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tsjjx" event={"ID":"48cde6b1-19c6-41f2-9af3-c83cf683909f","Type":"ContainerDied","Data":"1b3582496af242a04dc312fa55fb829c4bfe6b1d8d9c5ad05a4803a0364b4648"} Oct 08 16:58:03 crc kubenswrapper[4624]: I1008 16:58:03.445226 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tsjjx" event={"ID":"48cde6b1-19c6-41f2-9af3-c83cf683909f","Type":"ContainerStarted","Data":"a91ebea5aabd45c9c4632c04ee37f59cb57dd514b0b0fc726eb81a02c6b30f0f"} Oct 08 16:58:04 crc kubenswrapper[4624]: I1008 16:58:04.459011 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tsjjx" event={"ID":"48cde6b1-19c6-41f2-9af3-c83cf683909f","Type":"ContainerStarted","Data":"45baf4f17b1701d048e2861356fb66da29d60fea3a99d6faddf014691187cf70"} Oct 08 16:58:05 crc kubenswrapper[4624]: I1008 16:58:05.493594 4624 generic.go:334] "Generic (PLEG): container finished" podID="48cde6b1-19c6-41f2-9af3-c83cf683909f" containerID="45baf4f17b1701d048e2861356fb66da29d60fea3a99d6faddf014691187cf70" exitCode=0 Oct 08 16:58:05 crc kubenswrapper[4624]: I1008 16:58:05.515604 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tsjjx" event={"ID":"48cde6b1-19c6-41f2-9af3-c83cf683909f","Type":"ContainerDied","Data":"45baf4f17b1701d048e2861356fb66da29d60fea3a99d6faddf014691187cf70"} Oct 08 16:58:06 crc kubenswrapper[4624]: I1008 16:58:06.510961 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tsjjx" event={"ID":"48cde6b1-19c6-41f2-9af3-c83cf683909f","Type":"ContainerStarted","Data":"515166e51142e45bb528b14c047234f6a911228c502e2c9f300aabe6671db53f"} Oct 08 16:58:06 crc kubenswrapper[4624]: I1008 16:58:06.543770 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tsjjx" podStartSLOduration=3.049101864 podStartE2EDuration="5.543744281s" podCreationTimestamp="2025-10-08 16:58:01 +0000 UTC" firstStartedPulling="2025-10-08 16:58:03.446975148 +0000 UTC m=+9308.597910225" lastFinishedPulling="2025-10-08 
16:58:05.941617565 +0000 UTC m=+9311.092552642" observedRunningTime="2025-10-08 16:58:06.534484263 +0000 UTC m=+9311.685419380" watchObservedRunningTime="2025-10-08 16:58:06.543744281 +0000 UTC m=+9311.694679358" Oct 08 16:58:11 crc kubenswrapper[4624]: I1008 16:58:11.937724 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tsjjx" Oct 08 16:58:11 crc kubenswrapper[4624]: I1008 16:58:11.938309 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tsjjx" Oct 08 16:58:11 crc kubenswrapper[4624]: I1008 16:58:11.986358 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tsjjx" Oct 08 16:58:12 crc kubenswrapper[4624]: I1008 16:58:12.465736 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 16:58:12 crc kubenswrapper[4624]: E1008 16:58:12.466002 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:58:12 crc kubenswrapper[4624]: I1008 16:58:12.626548 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tsjjx" Oct 08 16:58:12 crc kubenswrapper[4624]: I1008 16:58:12.678469 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tsjjx"] Oct 08 16:58:14 crc kubenswrapper[4624]: I1008 16:58:14.597494 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tsjjx" podUID="48cde6b1-19c6-41f2-9af3-c83cf683909f" containerName="registry-server" containerID="cri-o://515166e51142e45bb528b14c047234f6a911228c502e2c9f300aabe6671db53f" gracePeriod=2 Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.129841 4624 util.go:48] "No ready sandbox for pod can be found. 
Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.298246 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b464\" (UniqueName: \"kubernetes.io/projected/48cde6b1-19c6-41f2-9af3-c83cf683909f-kube-api-access-5b464\") pod \"48cde6b1-19c6-41f2-9af3-c83cf683909f\" (UID: \"48cde6b1-19c6-41f2-9af3-c83cf683909f\") "
Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.298593 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48cde6b1-19c6-41f2-9af3-c83cf683909f-catalog-content\") pod \"48cde6b1-19c6-41f2-9af3-c83cf683909f\" (UID: \"48cde6b1-19c6-41f2-9af3-c83cf683909f\") "
Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.298781 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48cde6b1-19c6-41f2-9af3-c83cf683909f-utilities\") pod \"48cde6b1-19c6-41f2-9af3-c83cf683909f\" (UID: \"48cde6b1-19c6-41f2-9af3-c83cf683909f\") "
Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.300449 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48cde6b1-19c6-41f2-9af3-c83cf683909f-utilities" (OuterVolumeSpecName: "utilities") pod "48cde6b1-19c6-41f2-9af3-c83cf683909f" (UID: "48cde6b1-19c6-41f2-9af3-c83cf683909f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.308060 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48cde6b1-19c6-41f2-9af3-c83cf683909f-kube-api-access-5b464" (OuterVolumeSpecName: "kube-api-access-5b464") pod "48cde6b1-19c6-41f2-9af3-c83cf683909f" (UID: "48cde6b1-19c6-41f2-9af3-c83cf683909f"). InnerVolumeSpecName "kube-api-access-5b464". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.317878 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48cde6b1-19c6-41f2-9af3-c83cf683909f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48cde6b1-19c6-41f2-9af3-c83cf683909f" (UID: "48cde6b1-19c6-41f2-9af3-c83cf683909f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.401281 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b464\" (UniqueName: \"kubernetes.io/projected/48cde6b1-19c6-41f2-9af3-c83cf683909f-kube-api-access-5b464\") on node \"crc\" DevicePath \"\"" Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.401335 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48cde6b1-19c6-41f2-9af3-c83cf683909f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.401350 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48cde6b1-19c6-41f2-9af3-c83cf683909f-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.611982 4624 generic.go:334] "Generic (PLEG): container finished" podID="48cde6b1-19c6-41f2-9af3-c83cf683909f" containerID="515166e51142e45bb528b14c047234f6a911228c502e2c9f300aabe6671db53f" exitCode=0 Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.612041 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tsjjx" event={"ID":"48cde6b1-19c6-41f2-9af3-c83cf683909f","Type":"ContainerDied","Data":"515166e51142e45bb528b14c047234f6a911228c502e2c9f300aabe6671db53f"} Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.612824 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tsjjx" event={"ID":"48cde6b1-19c6-41f2-9af3-c83cf683909f","Type":"ContainerDied","Data":"a91ebea5aabd45c9c4632c04ee37f59cb57dd514b0b0fc726eb81a02c6b30f0f"} Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.612857 4624 scope.go:117] "RemoveContainer" containerID="515166e51142e45bb528b14c047234f6a911228c502e2c9f300aabe6671db53f" Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.612106 4624 util.go:48] "No ready sandbox for pod can be found. 
Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.655728 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tsjjx"]
Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.670280 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tsjjx"]
Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.682918 4624 scope.go:117] "RemoveContainer" containerID="45baf4f17b1701d048e2861356fb66da29d60fea3a99d6faddf014691187cf70"
Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.713279 4624 scope.go:117] "RemoveContainer" containerID="1b3582496af242a04dc312fa55fb829c4bfe6b1d8d9c5ad05a4803a0364b4648"
Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.763022 4624 scope.go:117] "RemoveContainer" containerID="515166e51142e45bb528b14c047234f6a911228c502e2c9f300aabe6671db53f"
Oct 08 16:58:15 crc kubenswrapper[4624]: E1008 16:58:15.764007 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"515166e51142e45bb528b14c047234f6a911228c502e2c9f300aabe6671db53f\": container with ID starting with 515166e51142e45bb528b14c047234f6a911228c502e2c9f300aabe6671db53f not found: ID does not exist" containerID="515166e51142e45bb528b14c047234f6a911228c502e2c9f300aabe6671db53f"
Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.764044 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"515166e51142e45bb528b14c047234f6a911228c502e2c9f300aabe6671db53f"} err="failed to get container status \"515166e51142e45bb528b14c047234f6a911228c502e2c9f300aabe6671db53f\": rpc error: code = NotFound desc = could not find container \"515166e51142e45bb528b14c047234f6a911228c502e2c9f300aabe6671db53f\": container with ID starting with 515166e51142e45bb528b14c047234f6a911228c502e2c9f300aabe6671db53f not found: ID does not exist"
Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.764068 4624 scope.go:117] "RemoveContainer" containerID="45baf4f17b1701d048e2861356fb66da29d60fea3a99d6faddf014691187cf70"
Oct 08 16:58:15 crc kubenswrapper[4624]: E1008 16:58:15.764490 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45baf4f17b1701d048e2861356fb66da29d60fea3a99d6faddf014691187cf70\": container with ID starting with 45baf4f17b1701d048e2861356fb66da29d60fea3a99d6faddf014691187cf70 not found: ID does not exist" containerID="45baf4f17b1701d048e2861356fb66da29d60fea3a99d6faddf014691187cf70"
Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.764514 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45baf4f17b1701d048e2861356fb66da29d60fea3a99d6faddf014691187cf70"} err="failed to get container status \"45baf4f17b1701d048e2861356fb66da29d60fea3a99d6faddf014691187cf70\": rpc error: code = NotFound desc = could not find container \"45baf4f17b1701d048e2861356fb66da29d60fea3a99d6faddf014691187cf70\": container with ID starting with 45baf4f17b1701d048e2861356fb66da29d60fea3a99d6faddf014691187cf70 not found: ID does not exist"
Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.764529 4624 scope.go:117] "RemoveContainer" containerID="1b3582496af242a04dc312fa55fb829c4bfe6b1d8d9c5ad05a4803a0364b4648"
Oct 08 16:58:15 crc kubenswrapper[4624]: E1008 16:58:15.764995 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b3582496af242a04dc312fa55fb829c4bfe6b1d8d9c5ad05a4803a0364b4648\": container with ID starting with 1b3582496af242a04dc312fa55fb829c4bfe6b1d8d9c5ad05a4803a0364b4648 not found: ID does not exist" containerID="1b3582496af242a04dc312fa55fb829c4bfe6b1d8d9c5ad05a4803a0364b4648"
failed" err="rpc error: code = NotFound desc = could not find container \"1b3582496af242a04dc312fa55fb829c4bfe6b1d8d9c5ad05a4803a0364b4648\": container with ID starting with 1b3582496af242a04dc312fa55fb829c4bfe6b1d8d9c5ad05a4803a0364b4648 not found: ID does not exist" containerID="1b3582496af242a04dc312fa55fb829c4bfe6b1d8d9c5ad05a4803a0364b4648" Oct 08 16:58:15 crc kubenswrapper[4624]: I1008 16:58:15.765046 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b3582496af242a04dc312fa55fb829c4bfe6b1d8d9c5ad05a4803a0364b4648"} err="failed to get container status \"1b3582496af242a04dc312fa55fb829c4bfe6b1d8d9c5ad05a4803a0364b4648\": rpc error: code = NotFound desc = could not find container \"1b3582496af242a04dc312fa55fb829c4bfe6b1d8d9c5ad05a4803a0364b4648\": container with ID starting with 1b3582496af242a04dc312fa55fb829c4bfe6b1d8d9c5ad05a4803a0364b4648 not found: ID does not exist" Oct 08 16:58:17 crc kubenswrapper[4624]: I1008 16:58:17.482511 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48cde6b1-19c6-41f2-9af3-c83cf683909f" path="/var/lib/kubelet/pods/48cde6b1-19c6-41f2-9af3-c83cf683909f/volumes" Oct 08 16:58:27 crc kubenswrapper[4624]: I1008 16:58:27.469536 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 16:58:27 crc kubenswrapper[4624]: E1008 16:58:27.470297 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:58:42 crc kubenswrapper[4624]: I1008 16:58:42.467115 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 16:58:42 crc kubenswrapper[4624]: E1008 16:58:42.468392 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:58:54 crc kubenswrapper[4624]: I1008 16:58:54.018573 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hg6dg"] Oct 08 16:58:54 crc kubenswrapper[4624]: E1008 16:58:54.019446 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48cde6b1-19c6-41f2-9af3-c83cf683909f" containerName="extract-utilities" Oct 08 16:58:54 crc kubenswrapper[4624]: I1008 16:58:54.019459 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="48cde6b1-19c6-41f2-9af3-c83cf683909f" containerName="extract-utilities" Oct 08 16:58:54 crc kubenswrapper[4624]: E1008 16:58:54.019482 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48cde6b1-19c6-41f2-9af3-c83cf683909f" containerName="extract-content" Oct 08 16:58:54 crc kubenswrapper[4624]: I1008 16:58:54.019491 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="48cde6b1-19c6-41f2-9af3-c83cf683909f" containerName="extract-content" Oct 08 16:58:54 crc kubenswrapper[4624]: E1008 
Oct 08 16:58:54 crc kubenswrapper[4624]: I1008 16:58:54.019509 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="48cde6b1-19c6-41f2-9af3-c83cf683909f" containerName="registry-server"
Oct 08 16:58:54 crc kubenswrapper[4624]: I1008 16:58:54.019769 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="48cde6b1-19c6-41f2-9af3-c83cf683909f" containerName="registry-server"
Oct 08 16:58:54 crc kubenswrapper[4624]: I1008 16:58:54.021245 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hg6dg"
Oct 08 16:58:54 crc kubenswrapper[4624]: I1008 16:58:54.049969 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hg6dg"]
Oct 08 16:58:54 crc kubenswrapper[4624]: I1008 16:58:54.097836 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12b06a98-e86b-4321-9bf4-7cb10cfaf0cf-utilities\") pod \"redhat-operators-hg6dg\" (UID: \"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf\") " pod="openshift-marketplace/redhat-operators-hg6dg"
Oct 08 16:58:54 crc kubenswrapper[4624]: I1008 16:58:54.097962 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12b06a98-e86b-4321-9bf4-7cb10cfaf0cf-catalog-content\") pod \"redhat-operators-hg6dg\" (UID: \"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf\") " pod="openshift-marketplace/redhat-operators-hg6dg"
Oct 08 16:58:54 crc kubenswrapper[4624]: I1008 16:58:54.097998 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk5ms\" (UniqueName: \"kubernetes.io/projected/12b06a98-e86b-4321-9bf4-7cb10cfaf0cf-kube-api-access-zk5ms\") pod \"redhat-operators-hg6dg\" (UID: \"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf\") " pod="openshift-marketplace/redhat-operators-hg6dg"
Oct 08 16:58:54 crc kubenswrapper[4624]: I1008 16:58:54.199852 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12b06a98-e86b-4321-9bf4-7cb10cfaf0cf-utilities\") pod \"redhat-operators-hg6dg\" (UID: \"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf\") " pod="openshift-marketplace/redhat-operators-hg6dg"
Oct 08 16:58:54 crc kubenswrapper[4624]: I1008 16:58:54.199911 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12b06a98-e86b-4321-9bf4-7cb10cfaf0cf-catalog-content\") pod \"redhat-operators-hg6dg\" (UID: \"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf\") " pod="openshift-marketplace/redhat-operators-hg6dg"
Oct 08 16:58:54 crc kubenswrapper[4624]: I1008 16:58:54.199937 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk5ms\" (UniqueName: \"kubernetes.io/projected/12b06a98-e86b-4321-9bf4-7cb10cfaf0cf-kube-api-access-zk5ms\") pod \"redhat-operators-hg6dg\" (UID: \"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf\") " pod="openshift-marketplace/redhat-operators-hg6dg"
Oct 08 16:58:54 crc kubenswrapper[4624]: I1008 16:58:54.200746 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12b06a98-e86b-4321-9bf4-7cb10cfaf0cf-utilities\") pod \"redhat-operators-hg6dg\" (UID: \"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf\") " pod="openshift-marketplace/redhat-operators-hg6dg"
pod \"redhat-operators-hg6dg\" (UID: \"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf\") " pod="openshift-marketplace/redhat-operators-hg6dg" Oct 08 16:58:54 crc kubenswrapper[4624]: I1008 16:58:54.201015 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12b06a98-e86b-4321-9bf4-7cb10cfaf0cf-catalog-content\") pod \"redhat-operators-hg6dg\" (UID: \"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf\") " pod="openshift-marketplace/redhat-operators-hg6dg" Oct 08 16:58:54 crc kubenswrapper[4624]: I1008 16:58:54.230756 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk5ms\" (UniqueName: \"kubernetes.io/projected/12b06a98-e86b-4321-9bf4-7cb10cfaf0cf-kube-api-access-zk5ms\") pod \"redhat-operators-hg6dg\" (UID: \"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf\") " pod="openshift-marketplace/redhat-operators-hg6dg" Oct 08 16:58:54 crc kubenswrapper[4624]: I1008 16:58:54.347869 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hg6dg" Oct 08 16:58:54 crc kubenswrapper[4624]: I1008 16:58:54.468279 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 16:58:54 crc kubenswrapper[4624]: E1008 16:58:54.468550 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:58:54 crc kubenswrapper[4624]: I1008 16:58:54.956828 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hg6dg"] Oct 08 16:58:55 crc kubenswrapper[4624]: I1008 16:58:55.028309 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hg6dg" event={"ID":"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf","Type":"ContainerStarted","Data":"24b248d8a8f0cf42fe9c569161271b49e8c8a390f70a1896a98a21f4447717ec"} Oct 08 16:58:56 crc kubenswrapper[4624]: I1008 16:58:56.038821 4624 generic.go:334] "Generic (PLEG): container finished" podID="12b06a98-e86b-4321-9bf4-7cb10cfaf0cf" containerID="41558e5f698e9b29342ba0109b726bd6fc326b29db2298bc7ab3320ba1897ca8" exitCode=0 Oct 08 16:58:56 crc kubenswrapper[4624]: I1008 16:58:56.038873 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hg6dg" event={"ID":"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf","Type":"ContainerDied","Data":"41558e5f698e9b29342ba0109b726bd6fc326b29db2298bc7ab3320ba1897ca8"} Oct 08 16:58:58 crc kubenswrapper[4624]: I1008 16:58:58.062788 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hg6dg" event={"ID":"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf","Type":"ContainerStarted","Data":"13143fed23ffa71ce36cb421382e9831134c318c62b9d0a604799a25b52a01f3"} Oct 08 16:59:02 crc kubenswrapper[4624]: I1008 16:59:02.111461 4624 generic.go:334] "Generic (PLEG): container finished" podID="12b06a98-e86b-4321-9bf4-7cb10cfaf0cf" containerID="13143fed23ffa71ce36cb421382e9831134c318c62b9d0a604799a25b52a01f3" exitCode=0 Oct 08 16:59:02 crc kubenswrapper[4624]: I1008 16:59:02.111556 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-hg6dg" event={"ID":"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf","Type":"ContainerDied","Data":"13143fed23ffa71ce36cb421382e9831134c318c62b9d0a604799a25b52a01f3"} Oct 08 16:59:03 crc kubenswrapper[4624]: I1008 16:59:03.124113 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hg6dg" event={"ID":"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf","Type":"ContainerStarted","Data":"d1c7a98f8ea2be6f87eec38d29803ed72fcb0241edc2bc6e02bef904f8349b2a"} Oct 08 16:59:03 crc kubenswrapper[4624]: I1008 16:59:03.154987 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hg6dg" podStartSLOduration=3.664022269 podStartE2EDuration="10.154963089s" podCreationTimestamp="2025-10-08 16:58:53 +0000 UTC" firstStartedPulling="2025-10-08 16:58:56.042368291 +0000 UTC m=+9361.193303378" lastFinishedPulling="2025-10-08 16:59:02.533309121 +0000 UTC m=+9367.684244198" observedRunningTime="2025-10-08 16:59:03.152316311 +0000 UTC m=+9368.303251478" watchObservedRunningTime="2025-10-08 16:59:03.154963089 +0000 UTC m=+9368.305898166" Oct 08 16:59:04 crc kubenswrapper[4624]: I1008 16:59:04.348167 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hg6dg" Oct 08 16:59:04 crc kubenswrapper[4624]: I1008 16:59:04.348236 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hg6dg" Oct 08 16:59:05 crc kubenswrapper[4624]: I1008 16:59:05.405443 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hg6dg" podUID="12b06a98-e86b-4321-9bf4-7cb10cfaf0cf" containerName="registry-server" probeResult="failure" output=< Oct 08 16:59:05 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 16:59:05 crc kubenswrapper[4624]: > Oct 08 16:59:09 crc kubenswrapper[4624]: I1008 16:59:09.466959 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 16:59:09 crc kubenswrapper[4624]: E1008 16:59:09.476499 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:59:15 crc kubenswrapper[4624]: I1008 16:59:15.450423 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hg6dg" podUID="12b06a98-e86b-4321-9bf4-7cb10cfaf0cf" containerName="registry-server" probeResult="failure" output=< Oct 08 16:59:15 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 16:59:15 crc kubenswrapper[4624]: > Oct 08 16:59:21 crc kubenswrapper[4624]: I1008 16:59:21.465741 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 16:59:21 crc kubenswrapper[4624]: E1008 16:59:21.466438 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Oct 08 16:59:25 crc kubenswrapper[4624]: I1008 16:59:25.404928 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hg6dg" podUID="12b06a98-e86b-4321-9bf4-7cb10cfaf0cf" containerName="registry-server" probeResult="failure" output=<
Oct 08 16:59:25 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 16:59:25 crc kubenswrapper[4624]: >
Oct 08 16:59:34 crc kubenswrapper[4624]: I1008 16:59:34.466760 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f"
Oct 08 16:59:34 crc kubenswrapper[4624]: E1008 16:59:34.467508 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 16:59:35 crc kubenswrapper[4624]: I1008 16:59:35.401957 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hg6dg" podUID="12b06a98-e86b-4321-9bf4-7cb10cfaf0cf" containerName="registry-server" probeResult="failure" output=<
Oct 08 16:59:35 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 16:59:35 crc kubenswrapper[4624]: >
Oct 08 16:59:41 crc kubenswrapper[4624]: I1008 16:59:41.456447 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xkngr"]
Oct 08 16:59:41 crc kubenswrapper[4624]: I1008 16:59:41.459851 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xkngr"
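
The startup probe that keeps failing above is a 1-second check against gRPC port 50051, the port a catalog registry-server listens on; the "timeout: failed to connect service \":50051\" within 1s" lines are the probe command's own output. An out-of-band equivalent of that reachability check (the pod IP below is a placeholder, and this tests plain TCP connectivity only, not gRPC health):

    import socket

    def registry_reachable(host: str, port: int = 50051, timeout: float = 1.0) -> bool:
        # Mirrors the probe's 1s budget seen in the log output above.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(registry_reachable("10.0.2.100"))  # hypothetical pod IP
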
Oct 08 16:59:41 crc kubenswrapper[4624]: I1008 16:59:41.484540 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xkngr"]
Oct 08 16:59:41 crc kubenswrapper[4624]: I1008 16:59:41.635557 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec894cdd-6896-4c48-a533-9d57da994554-catalog-content\") pod \"certified-operators-xkngr\" (UID: \"ec894cdd-6896-4c48-a533-9d57da994554\") " pod="openshift-marketplace/certified-operators-xkngr"
Oct 08 16:59:41 crc kubenswrapper[4624]: I1008 16:59:41.636162 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec894cdd-6896-4c48-a533-9d57da994554-utilities\") pod \"certified-operators-xkngr\" (UID: \"ec894cdd-6896-4c48-a533-9d57da994554\") " pod="openshift-marketplace/certified-operators-xkngr"
Oct 08 16:59:41 crc kubenswrapper[4624]: I1008 16:59:41.636251 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx96g\" (UniqueName: \"kubernetes.io/projected/ec894cdd-6896-4c48-a533-9d57da994554-kube-api-access-lx96g\") pod \"certified-operators-xkngr\" (UID: \"ec894cdd-6896-4c48-a533-9d57da994554\") " pod="openshift-marketplace/certified-operators-xkngr"
Oct 08 16:59:41 crc kubenswrapper[4624]: I1008 16:59:41.738682 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec894cdd-6896-4c48-a533-9d57da994554-catalog-content\") pod \"certified-operators-xkngr\" (UID: \"ec894cdd-6896-4c48-a533-9d57da994554\") " pod="openshift-marketplace/certified-operators-xkngr"
Oct 08 16:59:41 crc kubenswrapper[4624]: I1008 16:59:41.738818 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec894cdd-6896-4c48-a533-9d57da994554-utilities\") pod \"certified-operators-xkngr\" (UID: \"ec894cdd-6896-4c48-a533-9d57da994554\") " pod="openshift-marketplace/certified-operators-xkngr"
Oct 08 16:59:41 crc kubenswrapper[4624]: I1008 16:59:41.738866 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx96g\" (UniqueName: \"kubernetes.io/projected/ec894cdd-6896-4c48-a533-9d57da994554-kube-api-access-lx96g\") pod \"certified-operators-xkngr\" (UID: \"ec894cdd-6896-4c48-a533-9d57da994554\") " pod="openshift-marketplace/certified-operators-xkngr"
Oct 08 16:59:41 crc kubenswrapper[4624]: I1008 16:59:41.742180 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec894cdd-6896-4c48-a533-9d57da994554-catalog-content\") pod \"certified-operators-xkngr\" (UID: \"ec894cdd-6896-4c48-a533-9d57da994554\") " pod="openshift-marketplace/certified-operators-xkngr"
Oct 08 16:59:41 crc kubenswrapper[4624]: I1008 16:59:41.742631 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec894cdd-6896-4c48-a533-9d57da994554-utilities\") pod \"certified-operators-xkngr\" (UID: \"ec894cdd-6896-4c48-a533-9d57da994554\") " pod="openshift-marketplace/certified-operators-xkngr"
Oct 08 16:59:41 crc kubenswrapper[4624]: I1008 16:59:41.783237 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx96g\" (UniqueName: \"kubernetes.io/projected/ec894cdd-6896-4c48-a533-9d57da994554-kube-api-access-lx96g\") pod \"certified-operators-xkngr\" (UID: \"ec894cdd-6896-4c48-a533-9d57da994554\") " pod="openshift-marketplace/certified-operators-xkngr"
"MountVolume.SetUp succeeded for volume \"kube-api-access-lx96g\" (UniqueName: \"kubernetes.io/projected/ec894cdd-6896-4c48-a533-9d57da994554-kube-api-access-lx96g\") pod \"certified-operators-xkngr\" (UID: \"ec894cdd-6896-4c48-a533-9d57da994554\") " pod="openshift-marketplace/certified-operators-xkngr" Oct 08 16:59:41 crc kubenswrapper[4624]: I1008 16:59:41.792267 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xkngr" Oct 08 16:59:43 crc kubenswrapper[4624]: I1008 16:59:43.193761 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xkngr"] Oct 08 16:59:43 crc kubenswrapper[4624]: I1008 16:59:43.523504 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkngr" event={"ID":"ec894cdd-6896-4c48-a533-9d57da994554","Type":"ContainerStarted","Data":"27f5a70f6370127ee79bd3be0b47fc42b4eebea7b25e2a34c49c6be73b9d1d0d"} Oct 08 16:59:44 crc kubenswrapper[4624]: I1008 16:59:44.538004 4624 generic.go:334] "Generic (PLEG): container finished" podID="ec894cdd-6896-4c48-a533-9d57da994554" containerID="b940560c6cdc9545c2bfe545c260af93680e5520911af819d367ca81ff8f09fb" exitCode=0 Oct 08 16:59:44 crc kubenswrapper[4624]: I1008 16:59:44.538169 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkngr" event={"ID":"ec894cdd-6896-4c48-a533-9d57da994554","Type":"ContainerDied","Data":"b940560c6cdc9545c2bfe545c260af93680e5520911af819d367ca81ff8f09fb"} Oct 08 16:59:44 crc kubenswrapper[4624]: I1008 16:59:44.612403 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hg6dg" Oct 08 16:59:44 crc kubenswrapper[4624]: I1008 16:59:44.668894 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hg6dg" Oct 08 16:59:46 crc kubenswrapper[4624]: I1008 16:59:46.245003 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hg6dg"] Oct 08 16:59:46 crc kubenswrapper[4624]: I1008 16:59:46.466578 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 16:59:46 crc kubenswrapper[4624]: E1008 16:59:46.467341 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 16:59:46 crc kubenswrapper[4624]: I1008 16:59:46.558434 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkngr" event={"ID":"ec894cdd-6896-4c48-a533-9d57da994554","Type":"ContainerStarted","Data":"0b30c7115d354ff42a5bb7c71ed5f9ea299b0b335db8abbefddd29b7a731583e"} Oct 08 16:59:46 crc kubenswrapper[4624]: I1008 16:59:46.558602 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hg6dg" podUID="12b06a98-e86b-4321-9bf4-7cb10cfaf0cf" containerName="registry-server" containerID="cri-o://d1c7a98f8ea2be6f87eec38d29803ed72fcb0241edc2bc6e02bef904f8349b2a" gracePeriod=2 Oct 08 16:59:47 crc kubenswrapper[4624]: I1008 16:59:47.578808 4624 
Oct 08 16:59:47 crc kubenswrapper[4624]: I1008 16:59:47.579009 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hg6dg" event={"ID":"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf","Type":"ContainerDied","Data":"d1c7a98f8ea2be6f87eec38d29803ed72fcb0241edc2bc6e02bef904f8349b2a"}
Oct 08 16:59:47 crc kubenswrapper[4624]: I1008 16:59:47.947406 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hg6dg"
Oct 08 16:59:48 crc kubenswrapper[4624]: I1008 16:59:48.087647 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk5ms\" (UniqueName: \"kubernetes.io/projected/12b06a98-e86b-4321-9bf4-7cb10cfaf0cf-kube-api-access-zk5ms\") pod \"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf\" (UID: \"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf\") "
Oct 08 16:59:48 crc kubenswrapper[4624]: I1008 16:59:48.087759 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12b06a98-e86b-4321-9bf4-7cb10cfaf0cf-catalog-content\") pod \"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf\" (UID: \"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf\") "
Oct 08 16:59:48 crc kubenswrapper[4624]: I1008 16:59:48.087886 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12b06a98-e86b-4321-9bf4-7cb10cfaf0cf-utilities\") pod \"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf\" (UID: \"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf\") "
Oct 08 16:59:48 crc kubenswrapper[4624]: I1008 16:59:48.090790 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12b06a98-e86b-4321-9bf4-7cb10cfaf0cf-utilities" (OuterVolumeSpecName: "utilities") pod "12b06a98-e86b-4321-9bf4-7cb10cfaf0cf" (UID: "12b06a98-e86b-4321-9bf4-7cb10cfaf0cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 16:59:48 crc kubenswrapper[4624]: I1008 16:59:48.112865 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12b06a98-e86b-4321-9bf4-7cb10cfaf0cf-kube-api-access-zk5ms" (OuterVolumeSpecName: "kube-api-access-zk5ms") pod "12b06a98-e86b-4321-9bf4-7cb10cfaf0cf" (UID: "12b06a98-e86b-4321-9bf4-7cb10cfaf0cf"). InnerVolumeSpecName "kube-api-access-zk5ms". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 16:59:48 crc kubenswrapper[4624]: I1008 16:59:48.172098 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12b06a98-e86b-4321-9bf4-7cb10cfaf0cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12b06a98-e86b-4321-9bf4-7cb10cfaf0cf" (UID: "12b06a98-e86b-4321-9bf4-7cb10cfaf0cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 16:59:48 crc kubenswrapper[4624]: I1008 16:59:48.190172 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12b06a98-e86b-4321-9bf4-7cb10cfaf0cf-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 16:59:48 crc kubenswrapper[4624]: I1008 16:59:48.190218 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk5ms\" (UniqueName: \"kubernetes.io/projected/12b06a98-e86b-4321-9bf4-7cb10cfaf0cf-kube-api-access-zk5ms\") on node \"crc\" DevicePath \"\""
Oct 08 16:59:48 crc kubenswrapper[4624]: I1008 16:59:48.190232 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12b06a98-e86b-4321-9bf4-7cb10cfaf0cf-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 16:59:48 crc kubenswrapper[4624]: I1008 16:59:48.591840 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hg6dg" event={"ID":"12b06a98-e86b-4321-9bf4-7cb10cfaf0cf","Type":"ContainerDied","Data":"24b248d8a8f0cf42fe9c569161271b49e8c8a390f70a1896a98a21f4447717ec"}
Oct 08 16:59:48 crc kubenswrapper[4624]: I1008 16:59:48.593317 4624 scope.go:117] "RemoveContainer" containerID="d1c7a98f8ea2be6f87eec38d29803ed72fcb0241edc2bc6e02bef904f8349b2a"
Oct 08 16:59:48 crc kubenswrapper[4624]: I1008 16:59:48.591976 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hg6dg"
Oct 08 16:59:48 crc kubenswrapper[4624]: I1008 16:59:48.593997 4624 generic.go:334] "Generic (PLEG): container finished" podID="ec894cdd-6896-4c48-a533-9d57da994554" containerID="0b30c7115d354ff42a5bb7c71ed5f9ea299b0b335db8abbefddd29b7a731583e" exitCode=0
Oct 08 16:59:48 crc kubenswrapper[4624]: I1008 16:59:48.594040 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkngr" event={"ID":"ec894cdd-6896-4c48-a533-9d57da994554","Type":"ContainerDied","Data":"0b30c7115d354ff42a5bb7c71ed5f9ea299b0b335db8abbefddd29b7a731583e"}
Oct 08 16:59:48 crc kubenswrapper[4624]: I1008 16:59:48.646751 4624 scope.go:117] "RemoveContainer" containerID="13143fed23ffa71ce36cb421382e9831134c318c62b9d0a604799a25b52a01f3"
Oct 08 16:59:48 crc kubenswrapper[4624]: I1008 16:59:48.659310 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hg6dg"]
Oct 08 16:59:48 crc kubenswrapper[4624]: I1008 16:59:48.668706 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hg6dg"]
Oct 08 16:59:48 crc kubenswrapper[4624]: I1008 16:59:48.678079 4624 scope.go:117] "RemoveContainer" containerID="41558e5f698e9b29342ba0109b726bd6fc326b29db2298bc7ab3320ba1897ca8"
Oct 08 16:59:49 crc kubenswrapper[4624]: I1008 16:59:49.479977 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12b06a98-e86b-4321-9bf4-7cb10cfaf0cf" path="/var/lib/kubelet/pods/12b06a98-e86b-4321-9bf4-7cb10cfaf0cf/volumes"
Oct 08 16:59:49 crc kubenswrapper[4624]: I1008 16:59:49.605659 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkngr" event={"ID":"ec894cdd-6896-4c48-a533-9d57da994554","Type":"ContainerStarted","Data":"0b33447982c640b183e7b6be9c6adf39d2417132e0e8589f86967d066b16c470"}
Oct 08 16:59:51 crc kubenswrapper[4624]: I1008 16:59:51.793300 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xkngr"
pod="openshift-marketplace/certified-operators-xkngr" Oct 08 16:59:51 crc kubenswrapper[4624]: I1008 16:59:51.793671 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xkngr" Oct 08 16:59:52 crc kubenswrapper[4624]: I1008 16:59:52.848209 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-xkngr" podUID="ec894cdd-6896-4c48-a533-9d57da994554" containerName="registry-server" probeResult="failure" output=< Oct 08 16:59:52 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 16:59:52 crc kubenswrapper[4624]: > Oct 08 17:00:00 crc kubenswrapper[4624]: I1008 17:00:00.314104 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xkngr" podStartSLOduration=14.78776627 podStartE2EDuration="19.314077438s" podCreationTimestamp="2025-10-08 16:59:41 +0000 UTC" firstStartedPulling="2025-10-08 16:59:44.543151168 +0000 UTC m=+9409.694086255" lastFinishedPulling="2025-10-08 16:59:49.069462356 +0000 UTC m=+9414.220397423" observedRunningTime="2025-10-08 16:59:49.627913091 +0000 UTC m=+9414.778848168" watchObservedRunningTime="2025-10-08 17:00:00.314077438 +0000 UTC m=+9425.465012515" Oct 08 17:00:00 crc kubenswrapper[4624]: I1008 17:00:00.318313 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332380-hgxnc"] Oct 08 17:00:00 crc kubenswrapper[4624]: E1008 17:00:00.320002 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b06a98-e86b-4321-9bf4-7cb10cfaf0cf" containerName="registry-server" Oct 08 17:00:00 crc kubenswrapper[4624]: I1008 17:00:00.320034 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b06a98-e86b-4321-9bf4-7cb10cfaf0cf" containerName="registry-server" Oct 08 17:00:00 crc kubenswrapper[4624]: E1008 17:00:00.320058 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b06a98-e86b-4321-9bf4-7cb10cfaf0cf" containerName="extract-utilities" Oct 08 17:00:00 crc kubenswrapper[4624]: I1008 17:00:00.320067 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b06a98-e86b-4321-9bf4-7cb10cfaf0cf" containerName="extract-utilities" Oct 08 17:00:00 crc kubenswrapper[4624]: E1008 17:00:00.320094 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b06a98-e86b-4321-9bf4-7cb10cfaf0cf" containerName="extract-content" Oct 08 17:00:00 crc kubenswrapper[4624]: I1008 17:00:00.320102 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b06a98-e86b-4321-9bf4-7cb10cfaf0cf" containerName="extract-content" Oct 08 17:00:00 crc kubenswrapper[4624]: I1008 17:00:00.320415 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="12b06a98-e86b-4321-9bf4-7cb10cfaf0cf" containerName="registry-server" Oct 08 17:00:00 crc kubenswrapper[4624]: I1008 17:00:00.321292 4624 util.go:30] "No sandbox for pod can be found. 
Oct 08 17:00:00 crc kubenswrapper[4624]: I1008 17:00:00.337124 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332380-hgxnc"]
Oct 08 17:00:00 crc kubenswrapper[4624]: I1008 17:00:00.342460 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 08 17:00:00 crc kubenswrapper[4624]: I1008 17:00:00.342485 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Oct 08 17:00:00 crc kubenswrapper[4624]: I1008 17:00:00.452974 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64861e3b-016a-459f-b558-f5dad661b951-config-volume\") pod \"collect-profiles-29332380-hgxnc\" (UID: \"64861e3b-016a-459f-b558-f5dad661b951\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332380-hgxnc"
Oct 08 17:00:00 crc kubenswrapper[4624]: I1008 17:00:00.453179 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpnpw\" (UniqueName: \"kubernetes.io/projected/64861e3b-016a-459f-b558-f5dad661b951-kube-api-access-jpnpw\") pod \"collect-profiles-29332380-hgxnc\" (UID: \"64861e3b-016a-459f-b558-f5dad661b951\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332380-hgxnc"
Oct 08 17:00:00 crc kubenswrapper[4624]: I1008 17:00:00.453306 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64861e3b-016a-459f-b558-f5dad661b951-secret-volume\") pod \"collect-profiles-29332380-hgxnc\" (UID: \"64861e3b-016a-459f-b558-f5dad661b951\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332380-hgxnc"
Oct 08 17:00:00 crc kubenswrapper[4624]: I1008 17:00:00.466128 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f"
Oct 08 17:00:00 crc kubenswrapper[4624]: E1008 17:00:00.466427 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 17:00:00 crc kubenswrapper[4624]: I1008 17:00:00.555750 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64861e3b-016a-459f-b558-f5dad661b951-secret-volume\") pod \"collect-profiles-29332380-hgxnc\" (UID: \"64861e3b-016a-459f-b558-f5dad661b951\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332380-hgxnc"
Oct 08 17:00:00 crc kubenswrapper[4624]: I1008 17:00:00.555885 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64861e3b-016a-459f-b558-f5dad661b951-config-volume\") pod \"collect-profiles-29332380-hgxnc\" (UID: \"64861e3b-016a-459f-b558-f5dad661b951\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332380-hgxnc"
Oct 08 17:00:00 crc kubenswrapper[4624]: I1008 17:00:00.556022 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpnpw\" (UniqueName: \"kubernetes.io/projected/64861e3b-016a-459f-b558-f5dad661b951-kube-api-access-jpnpw\") pod \"collect-profiles-29332380-hgxnc\" (UID: \"64861e3b-016a-459f-b558-f5dad661b951\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332380-hgxnc"
Oct 08 17:00:00 crc kubenswrapper[4624]: I1008 17:00:00.557756 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64861e3b-016a-459f-b558-f5dad661b951-config-volume\") pod \"collect-profiles-29332380-hgxnc\" (UID: \"64861e3b-016a-459f-b558-f5dad661b951\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332380-hgxnc"
Oct 08 17:00:00 crc kubenswrapper[4624]: I1008 17:00:00.825174 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpnpw\" (UniqueName: \"kubernetes.io/projected/64861e3b-016a-459f-b558-f5dad661b951-kube-api-access-jpnpw\") pod \"collect-profiles-29332380-hgxnc\" (UID: \"64861e3b-016a-459f-b558-f5dad661b951\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332380-hgxnc"
Oct 08 17:00:00 crc kubenswrapper[4624]: I1008 17:00:00.829464 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64861e3b-016a-459f-b558-f5dad661b951-secret-volume\") pod \"collect-profiles-29332380-hgxnc\" (UID: \"64861e3b-016a-459f-b558-f5dad661b951\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332380-hgxnc"
Oct 08 17:00:00 crc kubenswrapper[4624]: I1008 17:00:00.955333 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332380-hgxnc"
Oct 08 17:00:01 crc kubenswrapper[4624]: I1008 17:00:01.550759 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332380-hgxnc"]
Oct 08 17:00:01 crc kubenswrapper[4624]: I1008 17:00:01.744212 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332380-hgxnc" event={"ID":"64861e3b-016a-459f-b558-f5dad661b951","Type":"ContainerStarted","Data":"bf25db96612b87e7b9d0f6e68aba2e59afde857161045d15bb59b995ccb2757e"}
Oct 08 17:00:01 crc kubenswrapper[4624]: I1008 17:00:01.853317 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xkngr"
Oct 08 17:00:01 crc kubenswrapper[4624]: I1008 17:00:01.920413 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xkngr"
Oct 08 17:00:02 crc kubenswrapper[4624]: I1008 17:00:02.093869 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xkngr"]
Oct 08 17:00:02 crc kubenswrapper[4624]: I1008 17:00:02.757563 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332380-hgxnc" event={"ID":"64861e3b-016a-459f-b558-f5dad661b951","Type":"ContainerStarted","Data":"3684859e67624a65d62e4fe87188e3d40538f63514af481b88548142741695e0"}
Oct 08 17:00:02 crc kubenswrapper[4624]: I1008 17:00:02.787790 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29332380-hgxnc" podStartSLOduration=2.787719514 podStartE2EDuration="2.787719514s" podCreationTimestamp="2025-10-08 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 17:00:02.783868235 +0000 UTC m=+9427.934803312" watchObservedRunningTime="2025-10-08 17:00:02.787719514 +0000 UTC m=+9427.938654591"
Oct 08 17:00:03 crc kubenswrapper[4624]: I1008 17:00:03.773437 4624 generic.go:334] "Generic (PLEG): container finished" podID="64861e3b-016a-459f-b558-f5dad661b951" containerID="3684859e67624a65d62e4fe87188e3d40538f63514af481b88548142741695e0" exitCode=0
Oct 08 17:00:03 crc kubenswrapper[4624]: I1008 17:00:03.773531 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332380-hgxnc" event={"ID":"64861e3b-016a-459f-b558-f5dad661b951","Type":"ContainerDied","Data":"3684859e67624a65d62e4fe87188e3d40538f63514af481b88548142741695e0"}
Oct 08 17:00:03 crc kubenswrapper[4624]: I1008 17:00:03.774063 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xkngr" podUID="ec894cdd-6896-4c48-a533-9d57da994554" containerName="registry-server" containerID="cri-o://0b33447982c640b183e7b6be9c6adf39d2417132e0e8589f86967d066b16c470" gracePeriod=2
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.377580 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xkngr"
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.482406 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx96g\" (UniqueName: \"kubernetes.io/projected/ec894cdd-6896-4c48-a533-9d57da994554-kube-api-access-lx96g\") pod \"ec894cdd-6896-4c48-a533-9d57da994554\" (UID: \"ec894cdd-6896-4c48-a533-9d57da994554\") "
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.482503 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec894cdd-6896-4c48-a533-9d57da994554-utilities\") pod \"ec894cdd-6896-4c48-a533-9d57da994554\" (UID: \"ec894cdd-6896-4c48-a533-9d57da994554\") "
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.482547 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec894cdd-6896-4c48-a533-9d57da994554-catalog-content\") pod \"ec894cdd-6896-4c48-a533-9d57da994554\" (UID: \"ec894cdd-6896-4c48-a533-9d57da994554\") "
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.483083 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec894cdd-6896-4c48-a533-9d57da994554-utilities" (OuterVolumeSpecName: "utilities") pod "ec894cdd-6896-4c48-a533-9d57da994554" (UID: "ec894cdd-6896-4c48-a533-9d57da994554"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.493512 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec894cdd-6896-4c48-a533-9d57da994554-kube-api-access-lx96g" (OuterVolumeSpecName: "kube-api-access-lx96g") pod "ec894cdd-6896-4c48-a533-9d57da994554" (UID: "ec894cdd-6896-4c48-a533-9d57da994554"). InnerVolumeSpecName "kube-api-access-lx96g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.533724 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec894cdd-6896-4c48-a533-9d57da994554-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec894cdd-6896-4c48-a533-9d57da994554" (UID: "ec894cdd-6896-4c48-a533-9d57da994554"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.585650 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx96g\" (UniqueName: \"kubernetes.io/projected/ec894cdd-6896-4c48-a533-9d57da994554-kube-api-access-lx96g\") on node \"crc\" DevicePath \"\""
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.585703 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec894cdd-6896-4c48-a533-9d57da994554-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.585730 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec894cdd-6896-4c48-a533-9d57da994554-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.787503 4624 generic.go:334] "Generic (PLEG): container finished" podID="ec894cdd-6896-4c48-a533-9d57da994554" containerID="0b33447982c640b183e7b6be9c6adf39d2417132e0e8589f86967d066b16c470" exitCode=0
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.787567 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkngr" event={"ID":"ec894cdd-6896-4c48-a533-9d57da994554","Type":"ContainerDied","Data":"0b33447982c640b183e7b6be9c6adf39d2417132e0e8589f86967d066b16c470"}
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.787675 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xkngr"
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.787753 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkngr" event={"ID":"ec894cdd-6896-4c48-a533-9d57da994554","Type":"ContainerDied","Data":"27f5a70f6370127ee79bd3be0b47fc42b4eebea7b25e2a34c49c6be73b9d1d0d"}
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.787780 4624 scope.go:117] "RemoveContainer" containerID="0b33447982c640b183e7b6be9c6adf39d2417132e0e8589f86967d066b16c470"
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.830041 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xkngr"]
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.841208 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xkngr"]
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.852871 4624 scope.go:117] "RemoveContainer" containerID="0b30c7115d354ff42a5bb7c71ed5f9ea299b0b335db8abbefddd29b7a731583e"
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.883520 4624 scope.go:117] "RemoveContainer" containerID="b940560c6cdc9545c2bfe545c260af93680e5520911af819d367ca81ff8f09fb"
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.952117 4624 scope.go:117] "RemoveContainer" containerID="0b33447982c640b183e7b6be9c6adf39d2417132e0e8589f86967d066b16c470"
Oct 08 17:00:04 crc kubenswrapper[4624]: E1008 17:00:04.959492 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b33447982c640b183e7b6be9c6adf39d2417132e0e8589f86967d066b16c470\": container with ID starting with 0b33447982c640b183e7b6be9c6adf39d2417132e0e8589f86967d066b16c470 not found: ID does not exist" containerID="0b33447982c640b183e7b6be9c6adf39d2417132e0e8589f86967d066b16c470"
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.959530 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b33447982c640b183e7b6be9c6adf39d2417132e0e8589f86967d066b16c470"} err="failed to get container status \"0b33447982c640b183e7b6be9c6adf39d2417132e0e8589f86967d066b16c470\": rpc error: code = NotFound desc = could not find container \"0b33447982c640b183e7b6be9c6adf39d2417132e0e8589f86967d066b16c470\": container with ID starting with 0b33447982c640b183e7b6be9c6adf39d2417132e0e8589f86967d066b16c470 not found: ID does not exist"
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.959560 4624 scope.go:117] "RemoveContainer" containerID="0b30c7115d354ff42a5bb7c71ed5f9ea299b0b335db8abbefddd29b7a731583e"
Oct 08 17:00:04 crc kubenswrapper[4624]: E1008 17:00:04.959905 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b30c7115d354ff42a5bb7c71ed5f9ea299b0b335db8abbefddd29b7a731583e\": container with ID starting with 0b30c7115d354ff42a5bb7c71ed5f9ea299b0b335db8abbefddd29b7a731583e not found: ID does not exist" containerID="0b30c7115d354ff42a5bb7c71ed5f9ea299b0b335db8abbefddd29b7a731583e"
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.959925 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b30c7115d354ff42a5bb7c71ed5f9ea299b0b335db8abbefddd29b7a731583e"} err="failed to get container status \"0b30c7115d354ff42a5bb7c71ed5f9ea299b0b335db8abbefddd29b7a731583e\": rpc error: code = NotFound desc = could not find container \"0b30c7115d354ff42a5bb7c71ed5f9ea299b0b335db8abbefddd29b7a731583e\": container with ID starting with 0b30c7115d354ff42a5bb7c71ed5f9ea299b0b335db8abbefddd29b7a731583e not found: ID does not exist"
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.959939 4624 scope.go:117] "RemoveContainer" containerID="b940560c6cdc9545c2bfe545c260af93680e5520911af819d367ca81ff8f09fb"
Oct 08 17:00:04 crc kubenswrapper[4624]: E1008 17:00:04.960389 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b940560c6cdc9545c2bfe545c260af93680e5520911af819d367ca81ff8f09fb\": container with ID starting with b940560c6cdc9545c2bfe545c260af93680e5520911af819d367ca81ff8f09fb not found: ID does not exist" containerID="b940560c6cdc9545c2bfe545c260af93680e5520911af819d367ca81ff8f09fb"
Oct 08 17:00:04 crc kubenswrapper[4624]: I1008 17:00:04.960406 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b940560c6cdc9545c2bfe545c260af93680e5520911af819d367ca81ff8f09fb"} err="failed to get container status \"b940560c6cdc9545c2bfe545c260af93680e5520911af819d367ca81ff8f09fb\": rpc error: code = NotFound desc = could not find container \"b940560c6cdc9545c2bfe545c260af93680e5520911af819d367ca81ff8f09fb\": container with ID starting with b940560c6cdc9545c2bfe545c260af93680e5520911af819d367ca81ff8f09fb not found: ID does not exist"
Oct 08 17:00:05 crc kubenswrapper[4624]: I1008 17:00:05.251440 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332380-hgxnc"
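
This NotFound burst repeats the pattern seen for the redhat-marketplace-tsjjx containers at 16:58:15: by the time RemoveContainer asks the runtime for a final status, CRI-O has already deleted the container along with its pod, so the lookup can only return NotFound, which the kubelet logs and then drops. Treating NotFound as success is what makes the deletion safe to repeat; a sketch of that idempotent-delete pattern (names hypothetical, not kubelet source):

    class NotFoundError(Exception):
        """Stands in for the CRI 'code = NotFound' errors in the entries above."""

    def delete_container(runtime, container_id: str) -> None:
        # Idempotent delete: a container that is already gone needs no
        # further work, so NotFound is recorded and swallowed, not retried.
        try:
            runtime.remove_container(container_id)  # hypothetical client call
        except NotFoundError:
            pass
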
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332380-hgxnc" Oct 08 17:00:05 crc kubenswrapper[4624]: I1008 17:00:05.402487 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64861e3b-016a-459f-b558-f5dad661b951-secret-volume\") pod \"64861e3b-016a-459f-b558-f5dad661b951\" (UID: \"64861e3b-016a-459f-b558-f5dad661b951\") " Oct 08 17:00:05 crc kubenswrapper[4624]: I1008 17:00:05.402819 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64861e3b-016a-459f-b558-f5dad661b951-config-volume\") pod \"64861e3b-016a-459f-b558-f5dad661b951\" (UID: \"64861e3b-016a-459f-b558-f5dad661b951\") " Oct 08 17:00:05 crc kubenswrapper[4624]: I1008 17:00:05.402892 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpnpw\" (UniqueName: \"kubernetes.io/projected/64861e3b-016a-459f-b558-f5dad661b951-kube-api-access-jpnpw\") pod \"64861e3b-016a-459f-b558-f5dad661b951\" (UID: \"64861e3b-016a-459f-b558-f5dad661b951\") " Oct 08 17:00:05 crc kubenswrapper[4624]: I1008 17:00:05.403998 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64861e3b-016a-459f-b558-f5dad661b951-config-volume" (OuterVolumeSpecName: "config-volume") pod "64861e3b-016a-459f-b558-f5dad661b951" (UID: "64861e3b-016a-459f-b558-f5dad661b951"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 17:00:05 crc kubenswrapper[4624]: I1008 17:00:05.408783 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64861e3b-016a-459f-b558-f5dad661b951-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "64861e3b-016a-459f-b558-f5dad661b951" (UID: "64861e3b-016a-459f-b558-f5dad661b951"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 17:00:05 crc kubenswrapper[4624]: I1008 17:00:05.408808 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64861e3b-016a-459f-b558-f5dad661b951-kube-api-access-jpnpw" (OuterVolumeSpecName: "kube-api-access-jpnpw") pod "64861e3b-016a-459f-b558-f5dad661b951" (UID: "64861e3b-016a-459f-b558-f5dad661b951"). InnerVolumeSpecName "kube-api-access-jpnpw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 17:00:05 crc kubenswrapper[4624]: I1008 17:00:05.479829 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec894cdd-6896-4c48-a533-9d57da994554" path="/var/lib/kubelet/pods/ec894cdd-6896-4c48-a533-9d57da994554/volumes" Oct 08 17:00:05 crc kubenswrapper[4624]: I1008 17:00:05.506169 4624 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64861e3b-016a-459f-b558-f5dad661b951-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 17:00:05 crc kubenswrapper[4624]: I1008 17:00:05.506201 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpnpw\" (UniqueName: \"kubernetes.io/projected/64861e3b-016a-459f-b558-f5dad661b951-kube-api-access-jpnpw\") on node \"crc\" DevicePath \"\"" Oct 08 17:00:05 crc kubenswrapper[4624]: I1008 17:00:05.506216 4624 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64861e3b-016a-459f-b558-f5dad661b951-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 17:00:05 crc kubenswrapper[4624]: I1008 17:00:05.797796 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332380-hgxnc" event={"ID":"64861e3b-016a-459f-b558-f5dad661b951","Type":"ContainerDied","Data":"bf25db96612b87e7b9d0f6e68aba2e59afde857161045d15bb59b995ccb2757e"} Oct 08 17:00:05 crc kubenswrapper[4624]: I1008 17:00:05.798150 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332380-hgxnc" Oct 08 17:00:05 crc kubenswrapper[4624]: I1008 17:00:05.797987 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf25db96612b87e7b9d0f6e68aba2e59afde857161045d15bb59b995ccb2757e" Oct 08 17:00:05 crc kubenswrapper[4624]: I1008 17:00:05.899125 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz"] Oct 08 17:00:05 crc kubenswrapper[4624]: I1008 17:00:05.907284 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332335-k8hxz"] Oct 08 17:00:07 crc kubenswrapper[4624]: I1008 17:00:07.482103 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3d66eab-4886-4f59-aacb-c126f01f0f05" path="/var/lib/kubelet/pods/c3d66eab-4886-4f59-aacb-c126f01f0f05/volumes" Oct 08 17:00:13 crc kubenswrapper[4624]: I1008 17:00:13.467150 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 17:00:13 crc kubenswrapper[4624]: E1008 17:00:13.467950 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:00:24 crc kubenswrapper[4624]: I1008 17:00:24.466181 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 17:00:24 crc kubenswrapper[4624]: E1008 17:00:24.466895 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:00:36 crc kubenswrapper[4624]: I1008 17:00:36.466251 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 17:00:36 crc kubenswrapper[4624]: E1008 17:00:36.467137 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:00:47 crc kubenswrapper[4624]: I1008 17:00:47.483754 4624 scope.go:117] "RemoveContainer" containerID="3879c7cdb413a19a7e85cdda953794ce55db87e863b96b79aead34aba9174f2b" Oct 08 17:00:48 crc kubenswrapper[4624]: I1008 17:00:48.466113 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 17:00:48 crc kubenswrapper[4624]: E1008 17:00:48.466698 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:01:00 crc kubenswrapper[4624]: I1008 17:01:00.163185 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29332381-bs9mx"] Oct 08 17:01:00 crc kubenswrapper[4624]: E1008 17:01:00.164697 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec894cdd-6896-4c48-a533-9d57da994554" containerName="registry-server" Oct 08 17:01:00 crc kubenswrapper[4624]: I1008 17:01:00.164718 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec894cdd-6896-4c48-a533-9d57da994554" containerName="registry-server" Oct 08 17:01:00 crc kubenswrapper[4624]: E1008 17:01:00.164764 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec894cdd-6896-4c48-a533-9d57da994554" containerName="extract-utilities" Oct 08 17:01:00 crc kubenswrapper[4624]: I1008 17:01:00.164772 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec894cdd-6896-4c48-a533-9d57da994554" containerName="extract-utilities" Oct 08 17:01:00 crc kubenswrapper[4624]: E1008 17:01:00.164798 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec894cdd-6896-4c48-a533-9d57da994554" containerName="extract-content" Oct 08 17:01:00 crc kubenswrapper[4624]: I1008 17:01:00.164806 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec894cdd-6896-4c48-a533-9d57da994554" containerName="extract-content" Oct 08 17:01:00 crc kubenswrapper[4624]: E1008 17:01:00.164826 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64861e3b-016a-459f-b558-f5dad661b951" containerName="collect-profiles" Oct 08 17:01:00 crc kubenswrapper[4624]: I1008 17:01:00.164833 4624 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="64861e3b-016a-459f-b558-f5dad661b951" containerName="collect-profiles" Oct 08 17:01:00 crc kubenswrapper[4624]: I1008 17:01:00.165076 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="64861e3b-016a-459f-b558-f5dad661b951" containerName="collect-profiles" Oct 08 17:01:00 crc kubenswrapper[4624]: I1008 17:01:00.165102 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec894cdd-6896-4c48-a533-9d57da994554" containerName="registry-server" Oct 08 17:01:00 crc kubenswrapper[4624]: I1008 17:01:00.166022 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29332381-bs9mx" Oct 08 17:01:00 crc kubenswrapper[4624]: I1008 17:01:00.173838 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29332381-bs9mx"] Oct 08 17:01:00 crc kubenswrapper[4624]: I1008 17:01:00.259189 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b977380c-013b-4784-81b2-1387f688506c-fernet-keys\") pod \"keystone-cron-29332381-bs9mx\" (UID: \"b977380c-013b-4784-81b2-1387f688506c\") " pod="openstack/keystone-cron-29332381-bs9mx" Oct 08 17:01:00 crc kubenswrapper[4624]: I1008 17:01:00.259239 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w6cc\" (UniqueName: \"kubernetes.io/projected/b977380c-013b-4784-81b2-1387f688506c-kube-api-access-5w6cc\") pod \"keystone-cron-29332381-bs9mx\" (UID: \"b977380c-013b-4784-81b2-1387f688506c\") " pod="openstack/keystone-cron-29332381-bs9mx" Oct 08 17:01:00 crc kubenswrapper[4624]: I1008 17:01:00.259324 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b977380c-013b-4784-81b2-1387f688506c-config-data\") pod \"keystone-cron-29332381-bs9mx\" (UID: \"b977380c-013b-4784-81b2-1387f688506c\") " pod="openstack/keystone-cron-29332381-bs9mx" Oct 08 17:01:00 crc kubenswrapper[4624]: I1008 17:01:00.259422 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b977380c-013b-4784-81b2-1387f688506c-combined-ca-bundle\") pod \"keystone-cron-29332381-bs9mx\" (UID: \"b977380c-013b-4784-81b2-1387f688506c\") " pod="openstack/keystone-cron-29332381-bs9mx" Oct 08 17:01:00 crc kubenswrapper[4624]: I1008 17:01:00.361441 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b977380c-013b-4784-81b2-1387f688506c-combined-ca-bundle\") pod \"keystone-cron-29332381-bs9mx\" (UID: \"b977380c-013b-4784-81b2-1387f688506c\") " pod="openstack/keystone-cron-29332381-bs9mx" Oct 08 17:01:00 crc kubenswrapper[4624]: I1008 17:01:00.361580 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b977380c-013b-4784-81b2-1387f688506c-fernet-keys\") pod \"keystone-cron-29332381-bs9mx\" (UID: \"b977380c-013b-4784-81b2-1387f688506c\") " pod="openstack/keystone-cron-29332381-bs9mx" Oct 08 17:01:00 crc kubenswrapper[4624]: I1008 17:01:00.361605 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w6cc\" (UniqueName: \"kubernetes.io/projected/b977380c-013b-4784-81b2-1387f688506c-kube-api-access-5w6cc\") pod \"keystone-cron-29332381-bs9mx\" (UID: 
\"b977380c-013b-4784-81b2-1387f688506c\") " pod="openstack/keystone-cron-29332381-bs9mx" Oct 08 17:01:00 crc kubenswrapper[4624]: I1008 17:01:00.361685 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b977380c-013b-4784-81b2-1387f688506c-config-data\") pod \"keystone-cron-29332381-bs9mx\" (UID: \"b977380c-013b-4784-81b2-1387f688506c\") " pod="openstack/keystone-cron-29332381-bs9mx" Oct 08 17:01:00 crc kubenswrapper[4624]: I1008 17:01:00.372791 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b977380c-013b-4784-81b2-1387f688506c-fernet-keys\") pod \"keystone-cron-29332381-bs9mx\" (UID: \"b977380c-013b-4784-81b2-1387f688506c\") " pod="openstack/keystone-cron-29332381-bs9mx" Oct 08 17:01:00 crc kubenswrapper[4624]: I1008 17:01:00.373238 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b977380c-013b-4784-81b2-1387f688506c-config-data\") pod \"keystone-cron-29332381-bs9mx\" (UID: \"b977380c-013b-4784-81b2-1387f688506c\") " pod="openstack/keystone-cron-29332381-bs9mx" Oct 08 17:01:00 crc kubenswrapper[4624]: I1008 17:01:00.374699 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b977380c-013b-4784-81b2-1387f688506c-combined-ca-bundle\") pod \"keystone-cron-29332381-bs9mx\" (UID: \"b977380c-013b-4784-81b2-1387f688506c\") " pod="openstack/keystone-cron-29332381-bs9mx" Oct 08 17:01:00 crc kubenswrapper[4624]: I1008 17:01:00.384169 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w6cc\" (UniqueName: \"kubernetes.io/projected/b977380c-013b-4784-81b2-1387f688506c-kube-api-access-5w6cc\") pod \"keystone-cron-29332381-bs9mx\" (UID: \"b977380c-013b-4784-81b2-1387f688506c\") " pod="openstack/keystone-cron-29332381-bs9mx" Oct 08 17:01:00 crc kubenswrapper[4624]: I1008 17:01:00.491146 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29332381-bs9mx" Oct 08 17:01:01 crc kubenswrapper[4624]: I1008 17:01:01.033813 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29332381-bs9mx"] Oct 08 17:01:01 crc kubenswrapper[4624]: I1008 17:01:01.365667 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29332381-bs9mx" event={"ID":"b977380c-013b-4784-81b2-1387f688506c","Type":"ContainerStarted","Data":"e9626fb7eee8f97b67f038475a448f46c725dab3de6161321f74b8fad0bbf72e"} Oct 08 17:01:01 crc kubenswrapper[4624]: I1008 17:01:01.366031 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29332381-bs9mx" event={"ID":"b977380c-013b-4784-81b2-1387f688506c","Type":"ContainerStarted","Data":"f2db22a880dfcee24a06a7daebe4233d43b8d492c03d163504ee13aabdfcb096"} Oct 08 17:01:01 crc kubenswrapper[4624]: I1008 17:01:01.387958 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29332381-bs9mx" podStartSLOduration=1.387937803 podStartE2EDuration="1.387937803s" podCreationTimestamp="2025-10-08 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 17:01:01.386009774 +0000 UTC m=+9486.536944851" watchObservedRunningTime="2025-10-08 17:01:01.387937803 +0000 UTC m=+9486.538872880" Oct 08 17:01:02 crc kubenswrapper[4624]: I1008 17:01:02.466351 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 17:01:03 crc kubenswrapper[4624]: I1008 17:01:03.408436 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"3a9cb310f1879f8063a35f50330f934d2b43b308f1d24b8f44d7b0d927d2b709"} Oct 08 17:01:06 crc kubenswrapper[4624]: I1008 17:01:06.439452 4624 generic.go:334] "Generic (PLEG): container finished" podID="b977380c-013b-4784-81b2-1387f688506c" containerID="e9626fb7eee8f97b67f038475a448f46c725dab3de6161321f74b8fad0bbf72e" exitCode=0 Oct 08 17:01:06 crc kubenswrapper[4624]: I1008 17:01:06.439967 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29332381-bs9mx" event={"ID":"b977380c-013b-4784-81b2-1387f688506c","Type":"ContainerDied","Data":"e9626fb7eee8f97b67f038475a448f46c725dab3de6161321f74b8fad0bbf72e"} Oct 08 17:01:07 crc kubenswrapper[4624]: I1008 17:01:07.980161 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29332381-bs9mx" Oct 08 17:01:08 crc kubenswrapper[4624]: I1008 17:01:08.080335 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b977380c-013b-4784-81b2-1387f688506c-fernet-keys\") pod \"b977380c-013b-4784-81b2-1387f688506c\" (UID: \"b977380c-013b-4784-81b2-1387f688506c\") " Oct 08 17:01:08 crc kubenswrapper[4624]: I1008 17:01:08.080693 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b977380c-013b-4784-81b2-1387f688506c-config-data\") pod \"b977380c-013b-4784-81b2-1387f688506c\" (UID: \"b977380c-013b-4784-81b2-1387f688506c\") " Oct 08 17:01:08 crc kubenswrapper[4624]: I1008 17:01:08.081004 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w6cc\" (UniqueName: \"kubernetes.io/projected/b977380c-013b-4784-81b2-1387f688506c-kube-api-access-5w6cc\") pod \"b977380c-013b-4784-81b2-1387f688506c\" (UID: \"b977380c-013b-4784-81b2-1387f688506c\") " Oct 08 17:01:08 crc kubenswrapper[4624]: I1008 17:01:08.081104 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b977380c-013b-4784-81b2-1387f688506c-combined-ca-bundle\") pod \"b977380c-013b-4784-81b2-1387f688506c\" (UID: \"b977380c-013b-4784-81b2-1387f688506c\") " Oct 08 17:01:08 crc kubenswrapper[4624]: I1008 17:01:08.104581 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b977380c-013b-4784-81b2-1387f688506c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b977380c-013b-4784-81b2-1387f688506c" (UID: "b977380c-013b-4784-81b2-1387f688506c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 17:01:08 crc kubenswrapper[4624]: I1008 17:01:08.109878 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b977380c-013b-4784-81b2-1387f688506c-kube-api-access-5w6cc" (OuterVolumeSpecName: "kube-api-access-5w6cc") pod "b977380c-013b-4784-81b2-1387f688506c" (UID: "b977380c-013b-4784-81b2-1387f688506c"). InnerVolumeSpecName "kube-api-access-5w6cc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 17:01:08 crc kubenswrapper[4624]: I1008 17:01:08.132437 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b977380c-013b-4784-81b2-1387f688506c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b977380c-013b-4784-81b2-1387f688506c" (UID: "b977380c-013b-4784-81b2-1387f688506c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 17:01:08 crc kubenswrapper[4624]: I1008 17:01:08.159565 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b977380c-013b-4784-81b2-1387f688506c-config-data" (OuterVolumeSpecName: "config-data") pod "b977380c-013b-4784-81b2-1387f688506c" (UID: "b977380c-013b-4784-81b2-1387f688506c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 17:01:08 crc kubenswrapper[4624]: I1008 17:01:08.184424 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b977380c-013b-4784-81b2-1387f688506c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 17:01:08 crc kubenswrapper[4624]: I1008 17:01:08.184471 4624 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b977380c-013b-4784-81b2-1387f688506c-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 08 17:01:08 crc kubenswrapper[4624]: I1008 17:01:08.184483 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b977380c-013b-4784-81b2-1387f688506c-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 17:01:08 crc kubenswrapper[4624]: I1008 17:01:08.184498 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w6cc\" (UniqueName: \"kubernetes.io/projected/b977380c-013b-4784-81b2-1387f688506c-kube-api-access-5w6cc\") on node \"crc\" DevicePath \"\"" Oct 08 17:01:08 crc kubenswrapper[4624]: I1008 17:01:08.463676 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29332381-bs9mx" event={"ID":"b977380c-013b-4784-81b2-1387f688506c","Type":"ContainerDied","Data":"f2db22a880dfcee24a06a7daebe4233d43b8d492c03d163504ee13aabdfcb096"} Oct 08 17:01:08 crc kubenswrapper[4624]: I1008 17:01:08.463744 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2db22a880dfcee24a06a7daebe4233d43b8d492c03d163504ee13aabdfcb096" Oct 08 17:01:08 crc kubenswrapper[4624]: I1008 17:01:08.463762 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29332381-bs9mx" Oct 08 17:03:30 crc kubenswrapper[4624]: I1008 17:03:30.076256 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 17:03:30 crc kubenswrapper[4624]: I1008 17:03:30.076979 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 17:04:00 crc kubenswrapper[4624]: I1008 17:04:00.076347 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 17:04:00 crc kubenswrapper[4624]: I1008 17:04:00.077058 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 17:04:16 crc kubenswrapper[4624]: I1008 17:04:16.992203 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-59d86bf959-vq2ld" podUID="d7c08e42-5aca-4394-952c-5649ba096a8f" 
containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Oct 08 17:04:30 crc kubenswrapper[4624]: I1008 17:04:30.076541 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 17:04:30 crc kubenswrapper[4624]: I1008 17:04:30.077108 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 17:04:30 crc kubenswrapper[4624]: I1008 17:04:30.077205 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 17:04:30 crc kubenswrapper[4624]: I1008 17:04:30.078126 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3a9cb310f1879f8063a35f50330f934d2b43b308f1d24b8f44d7b0d927d2b709"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 17:04:30 crc kubenswrapper[4624]: I1008 17:04:30.078184 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://3a9cb310f1879f8063a35f50330f934d2b43b308f1d24b8f44d7b0d927d2b709" gracePeriod=600 Oct 08 17:04:30 crc kubenswrapper[4624]: I1008 17:04:30.591891 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="3a9cb310f1879f8063a35f50330f934d2b43b308f1d24b8f44d7b0d927d2b709" exitCode=0 Oct 08 17:04:30 crc kubenswrapper[4624]: I1008 17:04:30.591977 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"3a9cb310f1879f8063a35f50330f934d2b43b308f1d24b8f44d7b0d927d2b709"} Oct 08 17:04:30 crc kubenswrapper[4624]: I1008 17:04:30.592251 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2"} Oct 08 17:04:30 crc kubenswrapper[4624]: I1008 17:04:30.592275 4624 scope.go:117] "RemoveContainer" containerID="a3632ebdc9b7009845e9c5b7b6b02ef6bc3941025ec95918b542b8d25d7dae1f" Oct 08 17:04:59 crc kubenswrapper[4624]: I1008 17:04:59.569279 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g5qzm"] Oct 08 17:04:59 crc kubenswrapper[4624]: E1008 17:04:59.570153 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b977380c-013b-4784-81b2-1387f688506c" containerName="keystone-cron" Oct 08 17:04:59 crc kubenswrapper[4624]: I1008 17:04:59.570166 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="b977380c-013b-4784-81b2-1387f688506c" containerName="keystone-cron" Oct 
08 17:04:59 crc kubenswrapper[4624]: I1008 17:04:59.570374 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="b977380c-013b-4784-81b2-1387f688506c" containerName="keystone-cron" Oct 08 17:04:59 crc kubenswrapper[4624]: I1008 17:04:59.571836 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g5qzm" Oct 08 17:04:59 crc kubenswrapper[4624]: I1008 17:04:59.584531 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g5qzm"] Oct 08 17:04:59 crc kubenswrapper[4624]: I1008 17:04:59.645843 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b383164-74c2-4f02-ba23-e585e0fc57bd-utilities\") pod \"community-operators-g5qzm\" (UID: \"8b383164-74c2-4f02-ba23-e585e0fc57bd\") " pod="openshift-marketplace/community-operators-g5qzm" Oct 08 17:04:59 crc kubenswrapper[4624]: I1008 17:04:59.646155 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b383164-74c2-4f02-ba23-e585e0fc57bd-catalog-content\") pod \"community-operators-g5qzm\" (UID: \"8b383164-74c2-4f02-ba23-e585e0fc57bd\") " pod="openshift-marketplace/community-operators-g5qzm" Oct 08 17:04:59 crc kubenswrapper[4624]: I1008 17:04:59.646419 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd6p9\" (UniqueName: \"kubernetes.io/projected/8b383164-74c2-4f02-ba23-e585e0fc57bd-kube-api-access-pd6p9\") pod \"community-operators-g5qzm\" (UID: \"8b383164-74c2-4f02-ba23-e585e0fc57bd\") " pod="openshift-marketplace/community-operators-g5qzm" Oct 08 17:04:59 crc kubenswrapper[4624]: I1008 17:04:59.748597 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b383164-74c2-4f02-ba23-e585e0fc57bd-utilities\") pod \"community-operators-g5qzm\" (UID: \"8b383164-74c2-4f02-ba23-e585e0fc57bd\") " pod="openshift-marketplace/community-operators-g5qzm" Oct 08 17:04:59 crc kubenswrapper[4624]: I1008 17:04:59.748688 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b383164-74c2-4f02-ba23-e585e0fc57bd-catalog-content\") pod \"community-operators-g5qzm\" (UID: \"8b383164-74c2-4f02-ba23-e585e0fc57bd\") " pod="openshift-marketplace/community-operators-g5qzm" Oct 08 17:04:59 crc kubenswrapper[4624]: I1008 17:04:59.748889 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd6p9\" (UniqueName: \"kubernetes.io/projected/8b383164-74c2-4f02-ba23-e585e0fc57bd-kube-api-access-pd6p9\") pod \"community-operators-g5qzm\" (UID: \"8b383164-74c2-4f02-ba23-e585e0fc57bd\") " pod="openshift-marketplace/community-operators-g5qzm" Oct 08 17:04:59 crc kubenswrapper[4624]: I1008 17:04:59.749238 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b383164-74c2-4f02-ba23-e585e0fc57bd-utilities\") pod \"community-operators-g5qzm\" (UID: \"8b383164-74c2-4f02-ba23-e585e0fc57bd\") " pod="openshift-marketplace/community-operators-g5qzm" Oct 08 17:04:59 crc kubenswrapper[4624]: I1008 17:04:59.749841 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8b383164-74c2-4f02-ba23-e585e0fc57bd-catalog-content\") pod \"community-operators-g5qzm\" (UID: \"8b383164-74c2-4f02-ba23-e585e0fc57bd\") " pod="openshift-marketplace/community-operators-g5qzm" Oct 08 17:04:59 crc kubenswrapper[4624]: I1008 17:04:59.771994 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd6p9\" (UniqueName: \"kubernetes.io/projected/8b383164-74c2-4f02-ba23-e585e0fc57bd-kube-api-access-pd6p9\") pod \"community-operators-g5qzm\" (UID: \"8b383164-74c2-4f02-ba23-e585e0fc57bd\") " pod="openshift-marketplace/community-operators-g5qzm" Oct 08 17:04:59 crc kubenswrapper[4624]: I1008 17:04:59.897015 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g5qzm" Oct 08 17:05:00 crc kubenswrapper[4624]: I1008 17:05:00.463414 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g5qzm"] Oct 08 17:05:00 crc kubenswrapper[4624]: I1008 17:05:00.968072 4624 generic.go:334] "Generic (PLEG): container finished" podID="8b383164-74c2-4f02-ba23-e585e0fc57bd" containerID="d2d9d505b0a3af0358cb97c8a0f6ddbe011b74226a136d943b5ec775091f5b90" exitCode=0 Oct 08 17:05:00 crc kubenswrapper[4624]: I1008 17:05:00.968177 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5qzm" event={"ID":"8b383164-74c2-4f02-ba23-e585e0fc57bd","Type":"ContainerDied","Data":"d2d9d505b0a3af0358cb97c8a0f6ddbe011b74226a136d943b5ec775091f5b90"} Oct 08 17:05:00 crc kubenswrapper[4624]: I1008 17:05:00.968364 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5qzm" event={"ID":"8b383164-74c2-4f02-ba23-e585e0fc57bd","Type":"ContainerStarted","Data":"fde163a0e0f7bde0ed1e7eb886632518db8812db731fd60b8029179301cc4de4"} Oct 08 17:05:00 crc kubenswrapper[4624]: I1008 17:05:00.971549 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 17:05:01 crc kubenswrapper[4624]: I1008 17:05:01.980352 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5qzm" event={"ID":"8b383164-74c2-4f02-ba23-e585e0fc57bd","Type":"ContainerStarted","Data":"f9089e03b486bff48eff254047aeedf38feb5a20d59fce5be5abb53e36dfb615"} Oct 08 17:05:04 crc kubenswrapper[4624]: I1008 17:05:04.004706 4624 generic.go:334] "Generic (PLEG): container finished" podID="8b383164-74c2-4f02-ba23-e585e0fc57bd" containerID="f9089e03b486bff48eff254047aeedf38feb5a20d59fce5be5abb53e36dfb615" exitCode=0 Oct 08 17:05:04 crc kubenswrapper[4624]: I1008 17:05:04.004778 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5qzm" event={"ID":"8b383164-74c2-4f02-ba23-e585e0fc57bd","Type":"ContainerDied","Data":"f9089e03b486bff48eff254047aeedf38feb5a20d59fce5be5abb53e36dfb615"} Oct 08 17:05:05 crc kubenswrapper[4624]: I1008 17:05:05.017621 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5qzm" event={"ID":"8b383164-74c2-4f02-ba23-e585e0fc57bd","Type":"ContainerStarted","Data":"f13d2a237530962a8585bb51a04a61e523b66be86a9ef835959b81d00bb7d12d"} Oct 08 17:05:05 crc kubenswrapper[4624]: I1008 17:05:05.068756 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g5qzm" podStartSLOduration=2.589992224 podStartE2EDuration="6.068728711s" 
podCreationTimestamp="2025-10-08 17:04:59 +0000 UTC" firstStartedPulling="2025-10-08 17:05:00.970070017 +0000 UTC m=+9726.121005094" lastFinishedPulling="2025-10-08 17:05:04.448806504 +0000 UTC m=+9729.599741581" observedRunningTime="2025-10-08 17:05:05.067414818 +0000 UTC m=+9730.218349915" watchObservedRunningTime="2025-10-08 17:05:05.068728711 +0000 UTC m=+9730.219663788" Oct 08 17:05:09 crc kubenswrapper[4624]: I1008 17:05:09.897242 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g5qzm" Oct 08 17:05:09 crc kubenswrapper[4624]: I1008 17:05:09.897852 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g5qzm" Oct 08 17:05:09 crc kubenswrapper[4624]: I1008 17:05:09.955629 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g5qzm" Oct 08 17:05:10 crc kubenswrapper[4624]: I1008 17:05:10.127311 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g5qzm" Oct 08 17:05:10 crc kubenswrapper[4624]: I1008 17:05:10.195477 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g5qzm"] Oct 08 17:05:12 crc kubenswrapper[4624]: I1008 17:05:12.086243 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g5qzm" podUID="8b383164-74c2-4f02-ba23-e585e0fc57bd" containerName="registry-server" containerID="cri-o://f13d2a237530962a8585bb51a04a61e523b66be86a9ef835959b81d00bb7d12d" gracePeriod=2 Oct 08 17:05:12 crc kubenswrapper[4624]: I1008 17:05:12.770944 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g5qzm" Oct 08 17:05:12 crc kubenswrapper[4624]: I1008 17:05:12.861541 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd6p9\" (UniqueName: \"kubernetes.io/projected/8b383164-74c2-4f02-ba23-e585e0fc57bd-kube-api-access-pd6p9\") pod \"8b383164-74c2-4f02-ba23-e585e0fc57bd\" (UID: \"8b383164-74c2-4f02-ba23-e585e0fc57bd\") " Oct 08 17:05:12 crc kubenswrapper[4624]: I1008 17:05:12.861629 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b383164-74c2-4f02-ba23-e585e0fc57bd-utilities\") pod \"8b383164-74c2-4f02-ba23-e585e0fc57bd\" (UID: \"8b383164-74c2-4f02-ba23-e585e0fc57bd\") " Oct 08 17:05:12 crc kubenswrapper[4624]: I1008 17:05:12.861716 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b383164-74c2-4f02-ba23-e585e0fc57bd-catalog-content\") pod \"8b383164-74c2-4f02-ba23-e585e0fc57bd\" (UID: \"8b383164-74c2-4f02-ba23-e585e0fc57bd\") " Oct 08 17:05:12 crc kubenswrapper[4624]: I1008 17:05:12.862722 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b383164-74c2-4f02-ba23-e585e0fc57bd-utilities" (OuterVolumeSpecName: "utilities") pod "8b383164-74c2-4f02-ba23-e585e0fc57bd" (UID: "8b383164-74c2-4f02-ba23-e585e0fc57bd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:05:12 crc kubenswrapper[4624]: I1008 17:05:12.875906 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b383164-74c2-4f02-ba23-e585e0fc57bd-kube-api-access-pd6p9" (OuterVolumeSpecName: "kube-api-access-pd6p9") pod "8b383164-74c2-4f02-ba23-e585e0fc57bd" (UID: "8b383164-74c2-4f02-ba23-e585e0fc57bd"). InnerVolumeSpecName "kube-api-access-pd6p9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 17:05:12 crc kubenswrapper[4624]: I1008 17:05:12.919842 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b383164-74c2-4f02-ba23-e585e0fc57bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b383164-74c2-4f02-ba23-e585e0fc57bd" (UID: "8b383164-74c2-4f02-ba23-e585e0fc57bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:05:12 crc kubenswrapper[4624]: I1008 17:05:12.963323 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd6p9\" (UniqueName: \"kubernetes.io/projected/8b383164-74c2-4f02-ba23-e585e0fc57bd-kube-api-access-pd6p9\") on node \"crc\" DevicePath \"\"" Oct 08 17:05:12 crc kubenswrapper[4624]: I1008 17:05:12.963362 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b383164-74c2-4f02-ba23-e585e0fc57bd-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 17:05:12 crc kubenswrapper[4624]: I1008 17:05:12.963375 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b383164-74c2-4f02-ba23-e585e0fc57bd-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 17:05:13 crc kubenswrapper[4624]: I1008 17:05:13.095564 4624 generic.go:334] "Generic (PLEG): container finished" podID="8b383164-74c2-4f02-ba23-e585e0fc57bd" containerID="f13d2a237530962a8585bb51a04a61e523b66be86a9ef835959b81d00bb7d12d" exitCode=0 Oct 08 17:05:13 crc kubenswrapper[4624]: I1008 17:05:13.095626 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5qzm" event={"ID":"8b383164-74c2-4f02-ba23-e585e0fc57bd","Type":"ContainerDied","Data":"f13d2a237530962a8585bb51a04a61e523b66be86a9ef835959b81d00bb7d12d"} Oct 08 17:05:13 crc kubenswrapper[4624]: I1008 17:05:13.095658 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g5qzm" Oct 08 17:05:13 crc kubenswrapper[4624]: I1008 17:05:13.095682 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5qzm" event={"ID":"8b383164-74c2-4f02-ba23-e585e0fc57bd","Type":"ContainerDied","Data":"fde163a0e0f7bde0ed1e7eb886632518db8812db731fd60b8029179301cc4de4"} Oct 08 17:05:13 crc kubenswrapper[4624]: I1008 17:05:13.095709 4624 scope.go:117] "RemoveContainer" containerID="f13d2a237530962a8585bb51a04a61e523b66be86a9ef835959b81d00bb7d12d" Oct 08 17:05:13 crc kubenswrapper[4624]: I1008 17:05:13.125118 4624 scope.go:117] "RemoveContainer" containerID="f9089e03b486bff48eff254047aeedf38feb5a20d59fce5be5abb53e36dfb615" Oct 08 17:05:13 crc kubenswrapper[4624]: I1008 17:05:13.133084 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g5qzm"] Oct 08 17:05:13 crc kubenswrapper[4624]: I1008 17:05:13.148272 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g5qzm"] Oct 08 17:05:13 crc kubenswrapper[4624]: I1008 17:05:13.167007 4624 scope.go:117] "RemoveContainer" containerID="d2d9d505b0a3af0358cb97c8a0f6ddbe011b74226a136d943b5ec775091f5b90" Oct 08 17:05:13 crc kubenswrapper[4624]: I1008 17:05:13.206594 4624 scope.go:117] "RemoveContainer" containerID="f13d2a237530962a8585bb51a04a61e523b66be86a9ef835959b81d00bb7d12d" Oct 08 17:05:13 crc kubenswrapper[4624]: E1008 17:05:13.207526 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f13d2a237530962a8585bb51a04a61e523b66be86a9ef835959b81d00bb7d12d\": container with ID starting with f13d2a237530962a8585bb51a04a61e523b66be86a9ef835959b81d00bb7d12d not found: ID does not exist" containerID="f13d2a237530962a8585bb51a04a61e523b66be86a9ef835959b81d00bb7d12d" Oct 08 17:05:13 crc kubenswrapper[4624]: I1008 17:05:13.207583 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f13d2a237530962a8585bb51a04a61e523b66be86a9ef835959b81d00bb7d12d"} err="failed to get container status \"f13d2a237530962a8585bb51a04a61e523b66be86a9ef835959b81d00bb7d12d\": rpc error: code = NotFound desc = could not find container \"f13d2a237530962a8585bb51a04a61e523b66be86a9ef835959b81d00bb7d12d\": container with ID starting with f13d2a237530962a8585bb51a04a61e523b66be86a9ef835959b81d00bb7d12d not found: ID does not exist" Oct 08 17:05:13 crc kubenswrapper[4624]: I1008 17:05:13.207618 4624 scope.go:117] "RemoveContainer" containerID="f9089e03b486bff48eff254047aeedf38feb5a20d59fce5be5abb53e36dfb615" Oct 08 17:05:13 crc kubenswrapper[4624]: E1008 17:05:13.208028 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9089e03b486bff48eff254047aeedf38feb5a20d59fce5be5abb53e36dfb615\": container with ID starting with f9089e03b486bff48eff254047aeedf38feb5a20d59fce5be5abb53e36dfb615 not found: ID does not exist" containerID="f9089e03b486bff48eff254047aeedf38feb5a20d59fce5be5abb53e36dfb615" Oct 08 17:05:13 crc kubenswrapper[4624]: I1008 17:05:13.208113 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9089e03b486bff48eff254047aeedf38feb5a20d59fce5be5abb53e36dfb615"} err="failed to get container status \"f9089e03b486bff48eff254047aeedf38feb5a20d59fce5be5abb53e36dfb615\": rpc error: code = NotFound desc = could not find 
container \"f9089e03b486bff48eff254047aeedf38feb5a20d59fce5be5abb53e36dfb615\": container with ID starting with f9089e03b486bff48eff254047aeedf38feb5a20d59fce5be5abb53e36dfb615 not found: ID does not exist" Oct 08 17:05:13 crc kubenswrapper[4624]: I1008 17:05:13.208183 4624 scope.go:117] "RemoveContainer" containerID="d2d9d505b0a3af0358cb97c8a0f6ddbe011b74226a136d943b5ec775091f5b90" Oct 08 17:05:13 crc kubenswrapper[4624]: E1008 17:05:13.208431 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2d9d505b0a3af0358cb97c8a0f6ddbe011b74226a136d943b5ec775091f5b90\": container with ID starting with d2d9d505b0a3af0358cb97c8a0f6ddbe011b74226a136d943b5ec775091f5b90 not found: ID does not exist" containerID="d2d9d505b0a3af0358cb97c8a0f6ddbe011b74226a136d943b5ec775091f5b90" Oct 08 17:05:13 crc kubenswrapper[4624]: I1008 17:05:13.208460 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2d9d505b0a3af0358cb97c8a0f6ddbe011b74226a136d943b5ec775091f5b90"} err="failed to get container status \"d2d9d505b0a3af0358cb97c8a0f6ddbe011b74226a136d943b5ec775091f5b90\": rpc error: code = NotFound desc = could not find container \"d2d9d505b0a3af0358cb97c8a0f6ddbe011b74226a136d943b5ec775091f5b90\": container with ID starting with d2d9d505b0a3af0358cb97c8a0f6ddbe011b74226a136d943b5ec775091f5b90 not found: ID does not exist" Oct 08 17:05:13 crc kubenswrapper[4624]: I1008 17:05:13.477781 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b383164-74c2-4f02-ba23-e585e0fc57bd" path="/var/lib/kubelet/pods/8b383164-74c2-4f02-ba23-e585e0fc57bd/volumes" Oct 08 17:06:30 crc kubenswrapper[4624]: I1008 17:06:30.076696 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 17:06:30 crc kubenswrapper[4624]: I1008 17:06:30.077814 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 17:07:00 crc kubenswrapper[4624]: I1008 17:07:00.077047 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 17:07:00 crc kubenswrapper[4624]: I1008 17:07:00.077870 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 17:07:30 crc kubenswrapper[4624]: I1008 17:07:30.076996 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 
17:07:30 crc kubenswrapper[4624]: I1008 17:07:30.077594 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 17:07:30 crc kubenswrapper[4624]: I1008 17:07:30.077696 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 17:07:30 crc kubenswrapper[4624]: I1008 17:07:30.078471 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 17:07:30 crc kubenswrapper[4624]: I1008 17:07:30.078548 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" gracePeriod=600 Oct 08 17:07:30 crc kubenswrapper[4624]: E1008 17:07:30.208137 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:07:30 crc kubenswrapper[4624]: I1008 17:07:30.536629 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" exitCode=0 Oct 08 17:07:30 crc kubenswrapper[4624]: I1008 17:07:30.536677 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2"} Oct 08 17:07:30 crc kubenswrapper[4624]: I1008 17:07:30.536755 4624 scope.go:117] "RemoveContainer" containerID="3a9cb310f1879f8063a35f50330f934d2b43b308f1d24b8f44d7b0d927d2b709" Oct 08 17:07:30 crc kubenswrapper[4624]: I1008 17:07:30.537536 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:07:30 crc kubenswrapper[4624]: E1008 17:07:30.537950 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:07:43 crc kubenswrapper[4624]: I1008 17:07:43.465881 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:07:43 crc 
kubenswrapper[4624]: E1008 17:07:43.466625 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:07:57 crc kubenswrapper[4624]: I1008 17:07:57.466918 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:07:57 crc kubenswrapper[4624]: E1008 17:07:57.468131 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:08:08 crc kubenswrapper[4624]: I1008 17:08:08.466520 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:08:08 crc kubenswrapper[4624]: E1008 17:08:08.467343 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:08:20 crc kubenswrapper[4624]: I1008 17:08:20.467024 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:08:20 crc kubenswrapper[4624]: E1008 17:08:20.469857 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:08:35 crc kubenswrapper[4624]: I1008 17:08:35.492359 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:08:35 crc kubenswrapper[4624]: E1008 17:08:35.497945 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:08:49 crc kubenswrapper[4624]: I1008 17:08:49.466335 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:08:49 crc kubenswrapper[4624]: E1008 17:08:49.469626 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:09:00 crc kubenswrapper[4624]: I1008 17:09:00.466334 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:09:00 crc kubenswrapper[4624]: E1008 17:09:00.467696 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:09:15 crc kubenswrapper[4624]: I1008 17:09:15.473351 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:09:15 crc kubenswrapper[4624]: E1008 17:09:15.474160 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:09:29 crc kubenswrapper[4624]: I1008 17:09:29.467046 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:09:29 crc kubenswrapper[4624]: E1008 17:09:29.467834 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:09:33 crc kubenswrapper[4624]: I1008 17:09:33.891591 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nc2pt"] Oct 08 17:09:33 crc kubenswrapper[4624]: E1008 17:09:33.892628 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b383164-74c2-4f02-ba23-e585e0fc57bd" containerName="extract-content" Oct 08 17:09:33 crc kubenswrapper[4624]: I1008 17:09:33.892668 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b383164-74c2-4f02-ba23-e585e0fc57bd" containerName="extract-content" Oct 08 17:09:33 crc kubenswrapper[4624]: E1008 17:09:33.892726 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b383164-74c2-4f02-ba23-e585e0fc57bd" containerName="extract-utilities" Oct 08 17:09:33 crc kubenswrapper[4624]: I1008 17:09:33.892734 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b383164-74c2-4f02-ba23-e585e0fc57bd" containerName="extract-utilities" Oct 08 17:09:33 crc kubenswrapper[4624]: E1008 17:09:33.892742 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b383164-74c2-4f02-ba23-e585e0fc57bd" containerName="registry-server" Oct 08 17:09:33 crc kubenswrapper[4624]: I1008 17:09:33.892748 4624 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8b383164-74c2-4f02-ba23-e585e0fc57bd" containerName="registry-server" Oct 08 17:09:33 crc kubenswrapper[4624]: I1008 17:09:33.892934 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b383164-74c2-4f02-ba23-e585e0fc57bd" containerName="registry-server" Oct 08 17:09:33 crc kubenswrapper[4624]: I1008 17:09:33.894950 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nc2pt" Oct 08 17:09:34 crc kubenswrapper[4624]: I1008 17:09:34.028718 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nc2pt"] Oct 08 17:09:34 crc kubenswrapper[4624]: I1008 17:09:34.081927 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxzjg\" (UniqueName: \"kubernetes.io/projected/410f3322-d369-420b-b6cc-d2c4f0b802f1-kube-api-access-vxzjg\") pod \"redhat-operators-nc2pt\" (UID: \"410f3322-d369-420b-b6cc-d2c4f0b802f1\") " pod="openshift-marketplace/redhat-operators-nc2pt" Oct 08 17:09:34 crc kubenswrapper[4624]: I1008 17:09:34.082279 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410f3322-d369-420b-b6cc-d2c4f0b802f1-catalog-content\") pod \"redhat-operators-nc2pt\" (UID: \"410f3322-d369-420b-b6cc-d2c4f0b802f1\") " pod="openshift-marketplace/redhat-operators-nc2pt" Oct 08 17:09:34 crc kubenswrapper[4624]: I1008 17:09:34.082391 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410f3322-d369-420b-b6cc-d2c4f0b802f1-utilities\") pod \"redhat-operators-nc2pt\" (UID: \"410f3322-d369-420b-b6cc-d2c4f0b802f1\") " pod="openshift-marketplace/redhat-operators-nc2pt" Oct 08 17:09:34 crc kubenswrapper[4624]: I1008 17:09:34.184156 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxzjg\" (UniqueName: \"kubernetes.io/projected/410f3322-d369-420b-b6cc-d2c4f0b802f1-kube-api-access-vxzjg\") pod \"redhat-operators-nc2pt\" (UID: \"410f3322-d369-420b-b6cc-d2c4f0b802f1\") " pod="openshift-marketplace/redhat-operators-nc2pt" Oct 08 17:09:34 crc kubenswrapper[4624]: I1008 17:09:34.184226 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410f3322-d369-420b-b6cc-d2c4f0b802f1-catalog-content\") pod \"redhat-operators-nc2pt\" (UID: \"410f3322-d369-420b-b6cc-d2c4f0b802f1\") " pod="openshift-marketplace/redhat-operators-nc2pt" Oct 08 17:09:34 crc kubenswrapper[4624]: I1008 17:09:34.184294 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410f3322-d369-420b-b6cc-d2c4f0b802f1-utilities\") pod \"redhat-operators-nc2pt\" (UID: \"410f3322-d369-420b-b6cc-d2c4f0b802f1\") " pod="openshift-marketplace/redhat-operators-nc2pt" Oct 08 17:09:34 crc kubenswrapper[4624]: I1008 17:09:34.184756 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410f3322-d369-420b-b6cc-d2c4f0b802f1-utilities\") pod \"redhat-operators-nc2pt\" (UID: \"410f3322-d369-420b-b6cc-d2c4f0b802f1\") " pod="openshift-marketplace/redhat-operators-nc2pt" Oct 08 17:09:34 crc kubenswrapper[4624]: I1008 17:09:34.185206 4624 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410f3322-d369-420b-b6cc-d2c4f0b802f1-catalog-content\") pod \"redhat-operators-nc2pt\" (UID: \"410f3322-d369-420b-b6cc-d2c4f0b802f1\") " pod="openshift-marketplace/redhat-operators-nc2pt" Oct 08 17:09:34 crc kubenswrapper[4624]: I1008 17:09:34.224412 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxzjg\" (UniqueName: \"kubernetes.io/projected/410f3322-d369-420b-b6cc-d2c4f0b802f1-kube-api-access-vxzjg\") pod \"redhat-operators-nc2pt\" (UID: \"410f3322-d369-420b-b6cc-d2c4f0b802f1\") " pod="openshift-marketplace/redhat-operators-nc2pt" Oct 08 17:09:34 crc kubenswrapper[4624]: I1008 17:09:34.227577 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nc2pt" Oct 08 17:09:34 crc kubenswrapper[4624]: I1008 17:09:34.858124 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nc2pt"] Oct 08 17:09:34 crc kubenswrapper[4624]: I1008 17:09:34.882305 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc2pt" event={"ID":"410f3322-d369-420b-b6cc-d2c4f0b802f1","Type":"ContainerStarted","Data":"148df9683db71775eea5cff2268ecda74103297a3e2089024c0933cd4b0287af"} Oct 08 17:09:35 crc kubenswrapper[4624]: I1008 17:09:35.898526 4624 generic.go:334] "Generic (PLEG): container finished" podID="410f3322-d369-420b-b6cc-d2c4f0b802f1" containerID="91bfbcfd6e3234d194b583cd3d791b3180e491fcb197a1bc065510dec16fc187" exitCode=0 Oct 08 17:09:35 crc kubenswrapper[4624]: I1008 17:09:35.898748 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc2pt" event={"ID":"410f3322-d369-420b-b6cc-d2c4f0b802f1","Type":"ContainerDied","Data":"91bfbcfd6e3234d194b583cd3d791b3180e491fcb197a1bc065510dec16fc187"} Oct 08 17:09:37 crc kubenswrapper[4624]: I1008 17:09:37.926100 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc2pt" event={"ID":"410f3322-d369-420b-b6cc-d2c4f0b802f1","Type":"ContainerStarted","Data":"c54c68ff1285f38f559db46bfcc4e3d2fc5405478136ca2fbbe7373399fd3022"} Oct 08 17:09:40 crc kubenswrapper[4624]: I1008 17:09:40.466336 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:09:40 crc kubenswrapper[4624]: E1008 17:09:40.466888 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:09:43 crc kubenswrapper[4624]: I1008 17:09:43.988115 4624 generic.go:334] "Generic (PLEG): container finished" podID="410f3322-d369-420b-b6cc-d2c4f0b802f1" containerID="c54c68ff1285f38f559db46bfcc4e3d2fc5405478136ca2fbbe7373399fd3022" exitCode=0 Oct 08 17:09:43 crc kubenswrapper[4624]: I1008 17:09:43.988224 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc2pt" event={"ID":"410f3322-d369-420b-b6cc-d2c4f0b802f1","Type":"ContainerDied","Data":"c54c68ff1285f38f559db46bfcc4e3d2fc5405478136ca2fbbe7373399fd3022"} Oct 08 17:09:45 crc 
kubenswrapper[4624]: I1008 17:09:45.001547 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc2pt" event={"ID":"410f3322-d369-420b-b6cc-d2c4f0b802f1","Type":"ContainerStarted","Data":"3fd92ba74a2edbf23c480211652c64b06d1b6c91ca73142154ce365e98f10997"} Oct 08 17:09:45 crc kubenswrapper[4624]: I1008 17:09:45.027515 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nc2pt" podStartSLOduration=3.4780347799999998 podStartE2EDuration="12.02748942s" podCreationTimestamp="2025-10-08 17:09:33 +0000 UTC" firstStartedPulling="2025-10-08 17:09:35.903260099 +0000 UTC m=+10001.054195216" lastFinishedPulling="2025-10-08 17:09:44.452714779 +0000 UTC m=+10009.603649856" observedRunningTime="2025-10-08 17:09:45.018851728 +0000 UTC m=+10010.169786805" watchObservedRunningTime="2025-10-08 17:09:45.02748942 +0000 UTC m=+10010.178424497" Oct 08 17:09:54 crc kubenswrapper[4624]: I1008 17:09:54.227817 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nc2pt" Oct 08 17:09:54 crc kubenswrapper[4624]: I1008 17:09:54.229603 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nc2pt" Oct 08 17:09:55 crc kubenswrapper[4624]: I1008 17:09:55.277935 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nc2pt" podUID="410f3322-d369-420b-b6cc-d2c4f0b802f1" containerName="registry-server" probeResult="failure" output=< Oct 08 17:09:55 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 17:09:55 crc kubenswrapper[4624]: > Oct 08 17:09:55 crc kubenswrapper[4624]: I1008 17:09:55.477671 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:09:55 crc kubenswrapper[4624]: E1008 17:09:55.478089 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:10:05 crc kubenswrapper[4624]: I1008 17:10:05.687482 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nc2pt" podUID="410f3322-d369-420b-b6cc-d2c4f0b802f1" containerName="registry-server" probeResult="failure" output=< Oct 08 17:10:05 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 17:10:05 crc kubenswrapper[4624]: > Oct 08 17:10:06 crc kubenswrapper[4624]: I1008 17:10:06.466492 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:10:06 crc kubenswrapper[4624]: E1008 17:10:06.467046 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:10:06 crc kubenswrapper[4624]: I1008 17:10:06.654881 
4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4wkhl"] Oct 08 17:10:06 crc kubenswrapper[4624]: I1008 17:10:06.660504 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wkhl" Oct 08 17:10:06 crc kubenswrapper[4624]: I1008 17:10:06.673537 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4wkhl"] Oct 08 17:10:06 crc kubenswrapper[4624]: I1008 17:10:06.830353 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b72f805-81f6-4db9-b61d-b247b434ec18-catalog-content\") pod \"certified-operators-4wkhl\" (UID: \"8b72f805-81f6-4db9-b61d-b247b434ec18\") " pod="openshift-marketplace/certified-operators-4wkhl" Oct 08 17:10:06 crc kubenswrapper[4624]: I1008 17:10:06.830536 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b72f805-81f6-4db9-b61d-b247b434ec18-utilities\") pod \"certified-operators-4wkhl\" (UID: \"8b72f805-81f6-4db9-b61d-b247b434ec18\") " pod="openshift-marketplace/certified-operators-4wkhl" Oct 08 17:10:06 crc kubenswrapper[4624]: I1008 17:10:06.830684 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bffhh\" (UniqueName: \"kubernetes.io/projected/8b72f805-81f6-4db9-b61d-b247b434ec18-kube-api-access-bffhh\") pod \"certified-operators-4wkhl\" (UID: \"8b72f805-81f6-4db9-b61d-b247b434ec18\") " pod="openshift-marketplace/certified-operators-4wkhl" Oct 08 17:10:06 crc kubenswrapper[4624]: I1008 17:10:06.932872 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b72f805-81f6-4db9-b61d-b247b434ec18-catalog-content\") pod \"certified-operators-4wkhl\" (UID: \"8b72f805-81f6-4db9-b61d-b247b434ec18\") " pod="openshift-marketplace/certified-operators-4wkhl" Oct 08 17:10:06 crc kubenswrapper[4624]: I1008 17:10:06.932985 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b72f805-81f6-4db9-b61d-b247b434ec18-utilities\") pod \"certified-operators-4wkhl\" (UID: \"8b72f805-81f6-4db9-b61d-b247b434ec18\") " pod="openshift-marketplace/certified-operators-4wkhl" Oct 08 17:10:06 crc kubenswrapper[4624]: I1008 17:10:06.933103 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bffhh\" (UniqueName: \"kubernetes.io/projected/8b72f805-81f6-4db9-b61d-b247b434ec18-kube-api-access-bffhh\") pod \"certified-operators-4wkhl\" (UID: \"8b72f805-81f6-4db9-b61d-b247b434ec18\") " pod="openshift-marketplace/certified-operators-4wkhl" Oct 08 17:10:06 crc kubenswrapper[4624]: I1008 17:10:06.933474 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b72f805-81f6-4db9-b61d-b247b434ec18-catalog-content\") pod \"certified-operators-4wkhl\" (UID: \"8b72f805-81f6-4db9-b61d-b247b434ec18\") " pod="openshift-marketplace/certified-operators-4wkhl" Oct 08 17:10:06 crc kubenswrapper[4624]: I1008 17:10:06.933493 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b72f805-81f6-4db9-b61d-b247b434ec18-utilities\") pod 
\"certified-operators-4wkhl\" (UID: \"8b72f805-81f6-4db9-b61d-b247b434ec18\") " pod="openshift-marketplace/certified-operators-4wkhl" Oct 08 17:10:06 crc kubenswrapper[4624]: I1008 17:10:06.954415 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bffhh\" (UniqueName: \"kubernetes.io/projected/8b72f805-81f6-4db9-b61d-b247b434ec18-kube-api-access-bffhh\") pod \"certified-operators-4wkhl\" (UID: \"8b72f805-81f6-4db9-b61d-b247b434ec18\") " pod="openshift-marketplace/certified-operators-4wkhl" Oct 08 17:10:06 crc kubenswrapper[4624]: I1008 17:10:06.985899 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wkhl" Oct 08 17:10:07 crc kubenswrapper[4624]: I1008 17:10:07.695278 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4wkhl"] Oct 08 17:10:07 crc kubenswrapper[4624]: W1008 17:10:07.700795 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b72f805_81f6_4db9_b61d_b247b434ec18.slice/crio-85e76035252584e4853428d496745e7502cf3939ea4c6d46ffd0326a5a246bcb WatchSource:0}: Error finding container 85e76035252584e4853428d496745e7502cf3939ea4c6d46ffd0326a5a246bcb: Status 404 returned error can't find the container with id 85e76035252584e4853428d496745e7502cf3939ea4c6d46ffd0326a5a246bcb Oct 08 17:10:08 crc kubenswrapper[4624]: I1008 17:10:08.226414 4624 generic.go:334] "Generic (PLEG): container finished" podID="8b72f805-81f6-4db9-b61d-b247b434ec18" containerID="79d8e6064ddb147a9d907caf60117e39a4d008df6ec687bc701eeb6f4ab7def2" exitCode=0 Oct 08 17:10:08 crc kubenswrapper[4624]: I1008 17:10:08.226502 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wkhl" event={"ID":"8b72f805-81f6-4db9-b61d-b247b434ec18","Type":"ContainerDied","Data":"79d8e6064ddb147a9d907caf60117e39a4d008df6ec687bc701eeb6f4ab7def2"} Oct 08 17:10:08 crc kubenswrapper[4624]: I1008 17:10:08.226885 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wkhl" event={"ID":"8b72f805-81f6-4db9-b61d-b247b434ec18","Type":"ContainerStarted","Data":"85e76035252584e4853428d496745e7502cf3939ea4c6d46ffd0326a5a246bcb"} Oct 08 17:10:08 crc kubenswrapper[4624]: I1008 17:10:08.231306 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 17:10:10 crc kubenswrapper[4624]: I1008 17:10:10.246979 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wkhl" event={"ID":"8b72f805-81f6-4db9-b61d-b247b434ec18","Type":"ContainerStarted","Data":"fd3c54bab5bef3f3d70232f6fd4d1bb4c6d1086c0aa31f8211ffb89ed5382c24"} Oct 08 17:10:11 crc kubenswrapper[4624]: I1008 17:10:11.258061 4624 generic.go:334] "Generic (PLEG): container finished" podID="8b72f805-81f6-4db9-b61d-b247b434ec18" containerID="fd3c54bab5bef3f3d70232f6fd4d1bb4c6d1086c0aa31f8211ffb89ed5382c24" exitCode=0 Oct 08 17:10:11 crc kubenswrapper[4624]: I1008 17:10:11.258117 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wkhl" event={"ID":"8b72f805-81f6-4db9-b61d-b247b434ec18","Type":"ContainerDied","Data":"fd3c54bab5bef3f3d70232f6fd4d1bb4c6d1086c0aa31f8211ffb89ed5382c24"} Oct 08 17:10:12 crc kubenswrapper[4624]: I1008 17:10:12.269271 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-4wkhl" event={"ID":"8b72f805-81f6-4db9-b61d-b247b434ec18","Type":"ContainerStarted","Data":"80ba491734614d357850281f0b6d44cce404f56f33af21cc2e36f70289ebaacc"} Oct 08 17:10:12 crc kubenswrapper[4624]: I1008 17:10:12.286961 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4wkhl" podStartSLOduration=2.862678172 podStartE2EDuration="6.286945285s" podCreationTimestamp="2025-10-08 17:10:06 +0000 UTC" firstStartedPulling="2025-10-08 17:10:08.230356748 +0000 UTC m=+10033.381291825" lastFinishedPulling="2025-10-08 17:10:11.654623861 +0000 UTC m=+10036.805558938" observedRunningTime="2025-10-08 17:10:12.286092613 +0000 UTC m=+10037.437027680" watchObservedRunningTime="2025-10-08 17:10:12.286945285 +0000 UTC m=+10037.437880362" Oct 08 17:10:15 crc kubenswrapper[4624]: I1008 17:10:15.303959 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nc2pt" podUID="410f3322-d369-420b-b6cc-d2c4f0b802f1" containerName="registry-server" probeResult="failure" output=< Oct 08 17:10:15 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 17:10:15 crc kubenswrapper[4624]: > Oct 08 17:10:16 crc kubenswrapper[4624]: I1008 17:10:16.986155 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4wkhl" Oct 08 17:10:16 crc kubenswrapper[4624]: I1008 17:10:16.986540 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4wkhl" Oct 08 17:10:17 crc kubenswrapper[4624]: I1008 17:10:17.466080 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:10:17 crc kubenswrapper[4624]: E1008 17:10:17.466433 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:10:18 crc kubenswrapper[4624]: I1008 17:10:18.061405 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4wkhl" podUID="8b72f805-81f6-4db9-b61d-b247b434ec18" containerName="registry-server" probeResult="failure" output=< Oct 08 17:10:18 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 17:10:18 crc kubenswrapper[4624]: > Oct 08 17:10:24 crc kubenswrapper[4624]: I1008 17:10:24.292248 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nc2pt" Oct 08 17:10:24 crc kubenswrapper[4624]: I1008 17:10:24.355977 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nc2pt" Oct 08 17:10:24 crc kubenswrapper[4624]: I1008 17:10:24.536412 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nc2pt"] Oct 08 17:10:25 crc kubenswrapper[4624]: I1008 17:10:25.393891 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nc2pt" podUID="410f3322-d369-420b-b6cc-d2c4f0b802f1" containerName="registry-server" 
containerID="cri-o://3fd92ba74a2edbf23c480211652c64b06d1b6c91ca73142154ce365e98f10997" gracePeriod=2 Oct 08 17:10:25 crc kubenswrapper[4624]: E1008 17:10:25.763523 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod410f3322_d369_420b_b6cc_d2c4f0b802f1.slice/crio-3fd92ba74a2edbf23c480211652c64b06d1b6c91ca73142154ce365e98f10997.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod410f3322_d369_420b_b6cc_d2c4f0b802f1.slice/crio-conmon-3fd92ba74a2edbf23c480211652c64b06d1b6c91ca73142154ce365e98f10997.scope\": RecentStats: unable to find data in memory cache]" Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.380625 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nc2pt" Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.411059 4624 generic.go:334] "Generic (PLEG): container finished" podID="410f3322-d369-420b-b6cc-d2c4f0b802f1" containerID="3fd92ba74a2edbf23c480211652c64b06d1b6c91ca73142154ce365e98f10997" exitCode=0 Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.411125 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc2pt" event={"ID":"410f3322-d369-420b-b6cc-d2c4f0b802f1","Type":"ContainerDied","Data":"3fd92ba74a2edbf23c480211652c64b06d1b6c91ca73142154ce365e98f10997"} Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.411176 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc2pt" event={"ID":"410f3322-d369-420b-b6cc-d2c4f0b802f1","Type":"ContainerDied","Data":"148df9683db71775eea5cff2268ecda74103297a3e2089024c0933cd4b0287af"} Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.411196 4624 scope.go:117] "RemoveContainer" containerID="3fd92ba74a2edbf23c480211652c64b06d1b6c91ca73142154ce365e98f10997" Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.411750 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nc2pt" Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.456762 4624 scope.go:117] "RemoveContainer" containerID="c54c68ff1285f38f559db46bfcc4e3d2fc5405478136ca2fbbe7373399fd3022" Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.467787 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410f3322-d369-420b-b6cc-d2c4f0b802f1-utilities\") pod \"410f3322-d369-420b-b6cc-d2c4f0b802f1\" (UID: \"410f3322-d369-420b-b6cc-d2c4f0b802f1\") " Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.467834 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410f3322-d369-420b-b6cc-d2c4f0b802f1-catalog-content\") pod \"410f3322-d369-420b-b6cc-d2c4f0b802f1\" (UID: \"410f3322-d369-420b-b6cc-d2c4f0b802f1\") " Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.467923 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxzjg\" (UniqueName: \"kubernetes.io/projected/410f3322-d369-420b-b6cc-d2c4f0b802f1-kube-api-access-vxzjg\") pod \"410f3322-d369-420b-b6cc-d2c4f0b802f1\" (UID: \"410f3322-d369-420b-b6cc-d2c4f0b802f1\") " Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.469824 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/410f3322-d369-420b-b6cc-d2c4f0b802f1-utilities" (OuterVolumeSpecName: "utilities") pod "410f3322-d369-420b-b6cc-d2c4f0b802f1" (UID: "410f3322-d369-420b-b6cc-d2c4f0b802f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.511384 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/410f3322-d369-420b-b6cc-d2c4f0b802f1-kube-api-access-vxzjg" (OuterVolumeSpecName: "kube-api-access-vxzjg") pod "410f3322-d369-420b-b6cc-d2c4f0b802f1" (UID: "410f3322-d369-420b-b6cc-d2c4f0b802f1"). InnerVolumeSpecName "kube-api-access-vxzjg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.521680 4624 scope.go:117] "RemoveContainer" containerID="91bfbcfd6e3234d194b583cd3d791b3180e491fcb197a1bc065510dec16fc187" Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.571815 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410f3322-d369-420b-b6cc-d2c4f0b802f1-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.571856 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxzjg\" (UniqueName: \"kubernetes.io/projected/410f3322-d369-420b-b6cc-d2c4f0b802f1-kube-api-access-vxzjg\") on node \"crc\" DevicePath \"\"" Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.597789 4624 scope.go:117] "RemoveContainer" containerID="3fd92ba74a2edbf23c480211652c64b06d1b6c91ca73142154ce365e98f10997" Oct 08 17:10:26 crc kubenswrapper[4624]: E1008 17:10:26.598518 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fd92ba74a2edbf23c480211652c64b06d1b6c91ca73142154ce365e98f10997\": container with ID starting with 3fd92ba74a2edbf23c480211652c64b06d1b6c91ca73142154ce365e98f10997 not found: ID does not exist" containerID="3fd92ba74a2edbf23c480211652c64b06d1b6c91ca73142154ce365e98f10997" Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.598561 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fd92ba74a2edbf23c480211652c64b06d1b6c91ca73142154ce365e98f10997"} err="failed to get container status \"3fd92ba74a2edbf23c480211652c64b06d1b6c91ca73142154ce365e98f10997\": rpc error: code = NotFound desc = could not find container \"3fd92ba74a2edbf23c480211652c64b06d1b6c91ca73142154ce365e98f10997\": container with ID starting with 3fd92ba74a2edbf23c480211652c64b06d1b6c91ca73142154ce365e98f10997 not found: ID does not exist" Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.598585 4624 scope.go:117] "RemoveContainer" containerID="c54c68ff1285f38f559db46bfcc4e3d2fc5405478136ca2fbbe7373399fd3022" Oct 08 17:10:26 crc kubenswrapper[4624]: E1008 17:10:26.599233 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c54c68ff1285f38f559db46bfcc4e3d2fc5405478136ca2fbbe7373399fd3022\": container with ID starting with c54c68ff1285f38f559db46bfcc4e3d2fc5405478136ca2fbbe7373399fd3022 not found: ID does not exist" containerID="c54c68ff1285f38f559db46bfcc4e3d2fc5405478136ca2fbbe7373399fd3022" Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.599282 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c54c68ff1285f38f559db46bfcc4e3d2fc5405478136ca2fbbe7373399fd3022"} err="failed to get container status \"c54c68ff1285f38f559db46bfcc4e3d2fc5405478136ca2fbbe7373399fd3022\": rpc error: code = NotFound desc = could not find container \"c54c68ff1285f38f559db46bfcc4e3d2fc5405478136ca2fbbe7373399fd3022\": container with ID starting with c54c68ff1285f38f559db46bfcc4e3d2fc5405478136ca2fbbe7373399fd3022 not found: ID does not exist" Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.599314 4624 scope.go:117] "RemoveContainer" containerID="91bfbcfd6e3234d194b583cd3d791b3180e491fcb197a1bc065510dec16fc187" Oct 08 17:10:26 crc kubenswrapper[4624]: E1008 17:10:26.599676 4624 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"91bfbcfd6e3234d194b583cd3d791b3180e491fcb197a1bc065510dec16fc187\": container with ID starting with 91bfbcfd6e3234d194b583cd3d791b3180e491fcb197a1bc065510dec16fc187 not found: ID does not exist" containerID="91bfbcfd6e3234d194b583cd3d791b3180e491fcb197a1bc065510dec16fc187" Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.599722 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91bfbcfd6e3234d194b583cd3d791b3180e491fcb197a1bc065510dec16fc187"} err="failed to get container status \"91bfbcfd6e3234d194b583cd3d791b3180e491fcb197a1bc065510dec16fc187\": rpc error: code = NotFound desc = could not find container \"91bfbcfd6e3234d194b583cd3d791b3180e491fcb197a1bc065510dec16fc187\": container with ID starting with 91bfbcfd6e3234d194b583cd3d791b3180e491fcb197a1bc065510dec16fc187 not found: ID does not exist" Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.611123 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/410f3322-d369-420b-b6cc-d2c4f0b802f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "410f3322-d369-420b-b6cc-d2c4f0b802f1" (UID: "410f3322-d369-420b-b6cc-d2c4f0b802f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.673932 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410f3322-d369-420b-b6cc-d2c4f0b802f1-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.753870 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nc2pt"] Oct 08 17:10:26 crc kubenswrapper[4624]: I1008 17:10:26.761911 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nc2pt"] Oct 08 17:10:27 crc kubenswrapper[4624]: I1008 17:10:27.049792 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4wkhl" Oct 08 17:10:27 crc kubenswrapper[4624]: I1008 17:10:27.109199 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4wkhl" Oct 08 17:10:27 crc kubenswrapper[4624]: I1008 17:10:27.481265 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="410f3322-d369-420b-b6cc-d2c4f0b802f1" path="/var/lib/kubelet/pods/410f3322-d369-420b-b6cc-d2c4f0b802f1/volumes" Oct 08 17:10:29 crc kubenswrapper[4624]: I1008 17:10:29.334392 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4wkhl"] Oct 08 17:10:29 crc kubenswrapper[4624]: I1008 17:10:29.334966 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4wkhl" podUID="8b72f805-81f6-4db9-b61d-b247b434ec18" containerName="registry-server" containerID="cri-o://80ba491734614d357850281f0b6d44cce404f56f33af21cc2e36f70289ebaacc" gracePeriod=2 Oct 08 17:10:29 crc kubenswrapper[4624]: I1008 17:10:29.912296 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4wkhl" Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.045906 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bffhh\" (UniqueName: \"kubernetes.io/projected/8b72f805-81f6-4db9-b61d-b247b434ec18-kube-api-access-bffhh\") pod \"8b72f805-81f6-4db9-b61d-b247b434ec18\" (UID: \"8b72f805-81f6-4db9-b61d-b247b434ec18\") " Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.046059 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b72f805-81f6-4db9-b61d-b247b434ec18-catalog-content\") pod \"8b72f805-81f6-4db9-b61d-b247b434ec18\" (UID: \"8b72f805-81f6-4db9-b61d-b247b434ec18\") " Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.046262 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b72f805-81f6-4db9-b61d-b247b434ec18-utilities\") pod \"8b72f805-81f6-4db9-b61d-b247b434ec18\" (UID: \"8b72f805-81f6-4db9-b61d-b247b434ec18\") " Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.047193 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b72f805-81f6-4db9-b61d-b247b434ec18-utilities" (OuterVolumeSpecName: "utilities") pod "8b72f805-81f6-4db9-b61d-b247b434ec18" (UID: "8b72f805-81f6-4db9-b61d-b247b434ec18"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.056366 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b72f805-81f6-4db9-b61d-b247b434ec18-kube-api-access-bffhh" (OuterVolumeSpecName: "kube-api-access-bffhh") pod "8b72f805-81f6-4db9-b61d-b247b434ec18" (UID: "8b72f805-81f6-4db9-b61d-b247b434ec18"). InnerVolumeSpecName "kube-api-access-bffhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.109474 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b72f805-81f6-4db9-b61d-b247b434ec18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b72f805-81f6-4db9-b61d-b247b434ec18" (UID: "8b72f805-81f6-4db9-b61d-b247b434ec18"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.148534 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b72f805-81f6-4db9-b61d-b247b434ec18-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.148576 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bffhh\" (UniqueName: \"kubernetes.io/projected/8b72f805-81f6-4db9-b61d-b247b434ec18-kube-api-access-bffhh\") on node \"crc\" DevicePath \"\"" Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.148587 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b72f805-81f6-4db9-b61d-b247b434ec18-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.454070 4624 generic.go:334] "Generic (PLEG): container finished" podID="8b72f805-81f6-4db9-b61d-b247b434ec18" containerID="80ba491734614d357850281f0b6d44cce404f56f33af21cc2e36f70289ebaacc" exitCode=0 Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.454114 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wkhl" event={"ID":"8b72f805-81f6-4db9-b61d-b247b434ec18","Type":"ContainerDied","Data":"80ba491734614d357850281f0b6d44cce404f56f33af21cc2e36f70289ebaacc"} Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.454143 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wkhl" event={"ID":"8b72f805-81f6-4db9-b61d-b247b434ec18","Type":"ContainerDied","Data":"85e76035252584e4853428d496745e7502cf3939ea4c6d46ffd0326a5a246bcb"} Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.454159 4624 scope.go:117] "RemoveContainer" containerID="80ba491734614d357850281f0b6d44cce404f56f33af21cc2e36f70289ebaacc" Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.454769 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4wkhl" Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.465893 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:10:30 crc kubenswrapper[4624]: E1008 17:10:30.466236 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.495292 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4wkhl"] Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.504512 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4wkhl"] Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.520716 4624 scope.go:117] "RemoveContainer" containerID="fd3c54bab5bef3f3d70232f6fd4d1bb4c6d1086c0aa31f8211ffb89ed5382c24" Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.551414 4624 scope.go:117] "RemoveContainer" containerID="79d8e6064ddb147a9d907caf60117e39a4d008df6ec687bc701eeb6f4ab7def2" Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.615805 4624 scope.go:117] "RemoveContainer" containerID="80ba491734614d357850281f0b6d44cce404f56f33af21cc2e36f70289ebaacc" Oct 08 17:10:30 crc kubenswrapper[4624]: E1008 17:10:30.616327 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80ba491734614d357850281f0b6d44cce404f56f33af21cc2e36f70289ebaacc\": container with ID starting with 80ba491734614d357850281f0b6d44cce404f56f33af21cc2e36f70289ebaacc not found: ID does not exist" containerID="80ba491734614d357850281f0b6d44cce404f56f33af21cc2e36f70289ebaacc" Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.616366 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ba491734614d357850281f0b6d44cce404f56f33af21cc2e36f70289ebaacc"} err="failed to get container status \"80ba491734614d357850281f0b6d44cce404f56f33af21cc2e36f70289ebaacc\": rpc error: code = NotFound desc = could not find container \"80ba491734614d357850281f0b6d44cce404f56f33af21cc2e36f70289ebaacc\": container with ID starting with 80ba491734614d357850281f0b6d44cce404f56f33af21cc2e36f70289ebaacc not found: ID does not exist" Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.616394 4624 scope.go:117] "RemoveContainer" containerID="fd3c54bab5bef3f3d70232f6fd4d1bb4c6d1086c0aa31f8211ffb89ed5382c24" Oct 08 17:10:30 crc kubenswrapper[4624]: E1008 17:10:30.617105 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd3c54bab5bef3f3d70232f6fd4d1bb4c6d1086c0aa31f8211ffb89ed5382c24\": container with ID starting with fd3c54bab5bef3f3d70232f6fd4d1bb4c6d1086c0aa31f8211ffb89ed5382c24 not found: ID does not exist" containerID="fd3c54bab5bef3f3d70232f6fd4d1bb4c6d1086c0aa31f8211ffb89ed5382c24" Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.617127 4624 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fd3c54bab5bef3f3d70232f6fd4d1bb4c6d1086c0aa31f8211ffb89ed5382c24"} err="failed to get container status \"fd3c54bab5bef3f3d70232f6fd4d1bb4c6d1086c0aa31f8211ffb89ed5382c24\": rpc error: code = NotFound desc = could not find container \"fd3c54bab5bef3f3d70232f6fd4d1bb4c6d1086c0aa31f8211ffb89ed5382c24\": container with ID starting with fd3c54bab5bef3f3d70232f6fd4d1bb4c6d1086c0aa31f8211ffb89ed5382c24 not found: ID does not exist" Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.617160 4624 scope.go:117] "RemoveContainer" containerID="79d8e6064ddb147a9d907caf60117e39a4d008df6ec687bc701eeb6f4ab7def2" Oct 08 17:10:30 crc kubenswrapper[4624]: E1008 17:10:30.617422 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79d8e6064ddb147a9d907caf60117e39a4d008df6ec687bc701eeb6f4ab7def2\": container with ID starting with 79d8e6064ddb147a9d907caf60117e39a4d008df6ec687bc701eeb6f4ab7def2 not found: ID does not exist" containerID="79d8e6064ddb147a9d907caf60117e39a4d008df6ec687bc701eeb6f4ab7def2" Oct 08 17:10:30 crc kubenswrapper[4624]: I1008 17:10:30.617445 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79d8e6064ddb147a9d907caf60117e39a4d008df6ec687bc701eeb6f4ab7def2"} err="failed to get container status \"79d8e6064ddb147a9d907caf60117e39a4d008df6ec687bc701eeb6f4ab7def2\": rpc error: code = NotFound desc = could not find container \"79d8e6064ddb147a9d907caf60117e39a4d008df6ec687bc701eeb6f4ab7def2\": container with ID starting with 79d8e6064ddb147a9d907caf60117e39a4d008df6ec687bc701eeb6f4ab7def2 not found: ID does not exist" Oct 08 17:10:31 crc kubenswrapper[4624]: I1008 17:10:31.481365 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b72f805-81f6-4db9-b61d-b247b434ec18" path="/var/lib/kubelet/pods/8b72f805-81f6-4db9-b61d-b247b434ec18/volumes" Oct 08 17:10:42 crc kubenswrapper[4624]: I1008 17:10:42.466342 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:10:42 crc kubenswrapper[4624]: E1008 17:10:42.467225 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:10:55 crc kubenswrapper[4624]: I1008 17:10:55.478363 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:10:55 crc kubenswrapper[4624]: E1008 17:10:55.480989 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:11:08 crc kubenswrapper[4624]: I1008 17:11:08.466424 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:11:08 crc kubenswrapper[4624]: E1008 17:11:08.467397 4624 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:11:19 crc kubenswrapper[4624]: I1008 17:11:19.465797 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:11:19 crc kubenswrapper[4624]: E1008 17:11:19.466544 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:11:32 crc kubenswrapper[4624]: I1008 17:11:32.466167 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:11:32 crc kubenswrapper[4624]: E1008 17:11:32.467325 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:11:44 crc kubenswrapper[4624]: I1008 17:11:44.465482 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:11:44 crc kubenswrapper[4624]: E1008 17:11:44.466488 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:11:57 crc kubenswrapper[4624]: I1008 17:11:57.467208 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:11:57 crc kubenswrapper[4624]: E1008 17:11:57.468108 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:12:11 crc kubenswrapper[4624]: I1008 17:12:11.467900 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:12:11 crc kubenswrapper[4624]: E1008 17:12:11.468911 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:12:24 crc kubenswrapper[4624]: I1008 17:12:24.467414 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:12:24 crc kubenswrapper[4624]: E1008 17:12:24.468100 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:12:34 crc kubenswrapper[4624]: I1008 17:12:34.946439 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k8blv"] Oct 08 17:12:34 crc kubenswrapper[4624]: E1008 17:12:34.948453 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b72f805-81f6-4db9-b61d-b247b434ec18" containerName="registry-server" Oct 08 17:12:34 crc kubenswrapper[4624]: I1008 17:12:34.948482 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b72f805-81f6-4db9-b61d-b247b434ec18" containerName="registry-server" Oct 08 17:12:34 crc kubenswrapper[4624]: E1008 17:12:34.948536 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b72f805-81f6-4db9-b61d-b247b434ec18" containerName="extract-utilities" Oct 08 17:12:34 crc kubenswrapper[4624]: I1008 17:12:34.948547 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b72f805-81f6-4db9-b61d-b247b434ec18" containerName="extract-utilities" Oct 08 17:12:34 crc kubenswrapper[4624]: E1008 17:12:34.948575 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410f3322-d369-420b-b6cc-d2c4f0b802f1" containerName="extract-content" Oct 08 17:12:34 crc kubenswrapper[4624]: I1008 17:12:34.948585 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="410f3322-d369-420b-b6cc-d2c4f0b802f1" containerName="extract-content" Oct 08 17:12:34 crc kubenswrapper[4624]: E1008 17:12:34.948607 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410f3322-d369-420b-b6cc-d2c4f0b802f1" containerName="registry-server" Oct 08 17:12:34 crc kubenswrapper[4624]: I1008 17:12:34.948617 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="410f3322-d369-420b-b6cc-d2c4f0b802f1" containerName="registry-server" Oct 08 17:12:34 crc kubenswrapper[4624]: E1008 17:12:34.948656 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b72f805-81f6-4db9-b61d-b247b434ec18" containerName="extract-content" Oct 08 17:12:34 crc kubenswrapper[4624]: I1008 17:12:34.948669 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b72f805-81f6-4db9-b61d-b247b434ec18" containerName="extract-content" Oct 08 17:12:34 crc kubenswrapper[4624]: E1008 17:12:34.948700 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410f3322-d369-420b-b6cc-d2c4f0b802f1" containerName="extract-utilities" Oct 08 17:12:34 crc kubenswrapper[4624]: I1008 17:12:34.948709 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="410f3322-d369-420b-b6cc-d2c4f0b802f1" containerName="extract-utilities" Oct 08 17:12:34 crc kubenswrapper[4624]: I1008 17:12:34.949381 4624 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8b72f805-81f6-4db9-b61d-b247b434ec18" containerName="registry-server"
Oct 08 17:12:34 crc kubenswrapper[4624]: I1008 17:12:34.949437 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="410f3322-d369-420b-b6cc-d2c4f0b802f1" containerName="registry-server"
Oct 08 17:12:34 crc kubenswrapper[4624]: I1008 17:12:34.957469 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k8blv"
Oct 08 17:12:35 crc kubenswrapper[4624]: I1008 17:12:35.031259 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c67e3e9c-b20a-413c-9835-35057b2872eb-utilities\") pod \"redhat-marketplace-k8blv\" (UID: \"c67e3e9c-b20a-413c-9835-35057b2872eb\") " pod="openshift-marketplace/redhat-marketplace-k8blv"
Oct 08 17:12:35 crc kubenswrapper[4624]: I1008 17:12:35.033182 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c67e3e9c-b20a-413c-9835-35057b2872eb-catalog-content\") pod \"redhat-marketplace-k8blv\" (UID: \"c67e3e9c-b20a-413c-9835-35057b2872eb\") " pod="openshift-marketplace/redhat-marketplace-k8blv"
Oct 08 17:12:35 crc kubenswrapper[4624]: I1008 17:12:35.033366 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7x6d\" (UniqueName: \"kubernetes.io/projected/c67e3e9c-b20a-413c-9835-35057b2872eb-kube-api-access-z7x6d\") pod \"redhat-marketplace-k8blv\" (UID: \"c67e3e9c-b20a-413c-9835-35057b2872eb\") " pod="openshift-marketplace/redhat-marketplace-k8blv"
Oct 08 17:12:35 crc kubenswrapper[4624]: I1008 17:12:35.034047 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8blv"]
Oct 08 17:12:35 crc kubenswrapper[4624]: I1008 17:12:35.135807 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c67e3e9c-b20a-413c-9835-35057b2872eb-utilities\") pod \"redhat-marketplace-k8blv\" (UID: \"c67e3e9c-b20a-413c-9835-35057b2872eb\") " pod="openshift-marketplace/redhat-marketplace-k8blv"
Oct 08 17:12:35 crc kubenswrapper[4624]: I1008 17:12:35.135880 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c67e3e9c-b20a-413c-9835-35057b2872eb-catalog-content\") pod \"redhat-marketplace-k8blv\" (UID: \"c67e3e9c-b20a-413c-9835-35057b2872eb\") " pod="openshift-marketplace/redhat-marketplace-k8blv"
Oct 08 17:12:35 crc kubenswrapper[4624]: I1008 17:12:35.135944 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7x6d\" (UniqueName: \"kubernetes.io/projected/c67e3e9c-b20a-413c-9835-35057b2872eb-kube-api-access-z7x6d\") pod \"redhat-marketplace-k8blv\" (UID: \"c67e3e9c-b20a-413c-9835-35057b2872eb\") " pod="openshift-marketplace/redhat-marketplace-k8blv"
Oct 08 17:12:35 crc kubenswrapper[4624]: I1008 17:12:35.136766 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c67e3e9c-b20a-413c-9835-35057b2872eb-utilities\") pod \"redhat-marketplace-k8blv\" (UID: \"c67e3e9c-b20a-413c-9835-35057b2872eb\") " pod="openshift-marketplace/redhat-marketplace-k8blv"
Oct 08 17:12:35 crc kubenswrapper[4624]: I1008 17:12:35.136990 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c67e3e9c-b20a-413c-9835-35057b2872eb-catalog-content\") pod \"redhat-marketplace-k8blv\" (UID: \"c67e3e9c-b20a-413c-9835-35057b2872eb\") " pod="openshift-marketplace/redhat-marketplace-k8blv"
Oct 08 17:12:35 crc kubenswrapper[4624]: I1008 17:12:35.170525 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7x6d\" (UniqueName: \"kubernetes.io/projected/c67e3e9c-b20a-413c-9835-35057b2872eb-kube-api-access-z7x6d\") pod \"redhat-marketplace-k8blv\" (UID: \"c67e3e9c-b20a-413c-9835-35057b2872eb\") " pod="openshift-marketplace/redhat-marketplace-k8blv"
Oct 08 17:12:35 crc kubenswrapper[4624]: I1008 17:12:35.282765 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k8blv"
Oct 08 17:12:35 crc kubenswrapper[4624]: I1008 17:12:35.918098 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8blv"]
Oct 08 17:12:36 crc kubenswrapper[4624]: I1008 17:12:36.900700 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8blv" event={"ID":"c67e3e9c-b20a-413c-9835-35057b2872eb","Type":"ContainerStarted","Data":"1256d714a4780b4bde0f24fc177f96e5d80a6882f771cc02e6aad6723df73339"}
Oct 08 17:12:36 crc kubenswrapper[4624]: I1008 17:12:36.901308 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8blv" event={"ID":"c67e3e9c-b20a-413c-9835-35057b2872eb","Type":"ContainerStarted","Data":"36d68c2f38cb4977edf66c47164592834e493868e8994f2028dbb0058b7b97d8"}
Oct 08 17:12:37 crc kubenswrapper[4624]: I1008 17:12:37.919932 4624 generic.go:334] "Generic (PLEG): container finished" podID="c67e3e9c-b20a-413c-9835-35057b2872eb" containerID="1256d714a4780b4bde0f24fc177f96e5d80a6882f771cc02e6aad6723df73339" exitCode=0
Oct 08 17:12:37 crc kubenswrapper[4624]: I1008 17:12:37.920007 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8blv" event={"ID":"c67e3e9c-b20a-413c-9835-35057b2872eb","Type":"ContainerDied","Data":"1256d714a4780b4bde0f24fc177f96e5d80a6882f771cc02e6aad6723df73339"}
Oct 08 17:12:39 crc kubenswrapper[4624]: I1008 17:12:39.468467 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2"
Oct 08 17:12:39 crc kubenswrapper[4624]: I1008 17:12:39.956093 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8blv" event={"ID":"c67e3e9c-b20a-413c-9835-35057b2872eb","Type":"ContainerStarted","Data":"b607cbf522ec5403172c7010ee62cadefa315a908460df3e14dc1613a811883d"}
Oct 08 17:12:39 crc kubenswrapper[4624]: I1008 17:12:39.961050 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"87349138b6a186a60530d29c54a92ba43c8f24ce21ebf028638b018e714656d7"}
Oct 08 17:12:40 crc kubenswrapper[4624]: I1008 17:12:40.969946 4624 generic.go:334] "Generic (PLEG): container finished" podID="c67e3e9c-b20a-413c-9835-35057b2872eb" containerID="b607cbf522ec5403172c7010ee62cadefa315a908460df3e14dc1613a811883d" exitCode=0
Oct 08 17:12:40 crc kubenswrapper[4624]: I1008 17:12:40.970005 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8blv" event={"ID":"c67e3e9c-b20a-413c-9835-35057b2872eb","Type":"ContainerDied","Data":"b607cbf522ec5403172c7010ee62cadefa315a908460df3e14dc1613a811883d"}
Oct 08 17:12:42 crc kubenswrapper[4624]: I1008 17:12:42.990659 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8blv" event={"ID":"c67e3e9c-b20a-413c-9835-35057b2872eb","Type":"ContainerStarted","Data":"3c28b1c8de134c5f388af35fe8a2d5c24e036dc35810dd99d9ae9c2f75574efe"}
Oct 08 17:12:43 crc kubenswrapper[4624]: I1008 17:12:43.019384 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k8blv" podStartSLOduration=4.443604887 podStartE2EDuration="9.019346842s" podCreationTimestamp="2025-10-08 17:12:34 +0000 UTC" firstStartedPulling="2025-10-08 17:12:37.925332822 +0000 UTC m=+10183.076267939" lastFinishedPulling="2025-10-08 17:12:42.501074807 +0000 UTC m=+10187.652009894" observedRunningTime="2025-10-08 17:12:43.010455604 +0000 UTC m=+10188.161390701" watchObservedRunningTime="2025-10-08 17:12:43.019346842 +0000 UTC m=+10188.170281929"
Oct 08 17:12:45 crc kubenswrapper[4624]: I1008 17:12:45.282999 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k8blv"
Oct 08 17:12:45 crc kubenswrapper[4624]: I1008 17:12:45.284657 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k8blv"
Oct 08 17:12:46 crc kubenswrapper[4624]: I1008 17:12:46.071556 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k8blv"
Oct 08 17:12:47 crc kubenswrapper[4624]: I1008 17:12:47.077320 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k8blv"
Oct 08 17:12:48 crc kubenswrapper[4624]: I1008 17:12:48.520006 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8blv"]
Oct 08 17:12:49 crc kubenswrapper[4624]: I1008 17:12:49.043969 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k8blv" podUID="c67e3e9c-b20a-413c-9835-35057b2872eb" containerName="registry-server" containerID="cri-o://3c28b1c8de134c5f388af35fe8a2d5c24e036dc35810dd99d9ae9c2f75574efe" gracePeriod=2
Oct 08 17:12:49 crc kubenswrapper[4624]: I1008 17:12:49.507131 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k8blv"
Oct 08 17:12:49 crc kubenswrapper[4624]: I1008 17:12:49.608653 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c67e3e9c-b20a-413c-9835-35057b2872eb-utilities\") pod \"c67e3e9c-b20a-413c-9835-35057b2872eb\" (UID: \"c67e3e9c-b20a-413c-9835-35057b2872eb\") "
Oct 08 17:12:49 crc kubenswrapper[4624]: I1008 17:12:49.609690 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c67e3e9c-b20a-413c-9835-35057b2872eb-utilities" (OuterVolumeSpecName: "utilities") pod "c67e3e9c-b20a-413c-9835-35057b2872eb" (UID: "c67e3e9c-b20a-413c-9835-35057b2872eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:12:49 crc kubenswrapper[4624]: I1008 17:12:49.609787 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c67e3e9c-b20a-413c-9835-35057b2872eb-catalog-content\") pod \"c67e3e9c-b20a-413c-9835-35057b2872eb\" (UID: \"c67e3e9c-b20a-413c-9835-35057b2872eb\") " Oct 08 17:12:49 crc kubenswrapper[4624]: I1008 17:12:49.609931 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7x6d\" (UniqueName: \"kubernetes.io/projected/c67e3e9c-b20a-413c-9835-35057b2872eb-kube-api-access-z7x6d\") pod \"c67e3e9c-b20a-413c-9835-35057b2872eb\" (UID: \"c67e3e9c-b20a-413c-9835-35057b2872eb\") " Oct 08 17:12:49 crc kubenswrapper[4624]: I1008 17:12:49.611405 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c67e3e9c-b20a-413c-9835-35057b2872eb-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 17:12:49 crc kubenswrapper[4624]: I1008 17:12:49.625793 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c67e3e9c-b20a-413c-9835-35057b2872eb-kube-api-access-z7x6d" (OuterVolumeSpecName: "kube-api-access-z7x6d") pod "c67e3e9c-b20a-413c-9835-35057b2872eb" (UID: "c67e3e9c-b20a-413c-9835-35057b2872eb"). InnerVolumeSpecName "kube-api-access-z7x6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 17:12:49 crc kubenswrapper[4624]: I1008 17:12:49.630837 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c67e3e9c-b20a-413c-9835-35057b2872eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c67e3e9c-b20a-413c-9835-35057b2872eb" (UID: "c67e3e9c-b20a-413c-9835-35057b2872eb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:12:49 crc kubenswrapper[4624]: I1008 17:12:49.714079 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c67e3e9c-b20a-413c-9835-35057b2872eb-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 17:12:49 crc kubenswrapper[4624]: I1008 17:12:49.714123 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7x6d\" (UniqueName: \"kubernetes.io/projected/c67e3e9c-b20a-413c-9835-35057b2872eb-kube-api-access-z7x6d\") on node \"crc\" DevicePath \"\"" Oct 08 17:12:50 crc kubenswrapper[4624]: I1008 17:12:50.055319 4624 generic.go:334] "Generic (PLEG): container finished" podID="c67e3e9c-b20a-413c-9835-35057b2872eb" containerID="3c28b1c8de134c5f388af35fe8a2d5c24e036dc35810dd99d9ae9c2f75574efe" exitCode=0 Oct 08 17:12:50 crc kubenswrapper[4624]: I1008 17:12:50.055625 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8blv" event={"ID":"c67e3e9c-b20a-413c-9835-35057b2872eb","Type":"ContainerDied","Data":"3c28b1c8de134c5f388af35fe8a2d5c24e036dc35810dd99d9ae9c2f75574efe"} Oct 08 17:12:50 crc kubenswrapper[4624]: I1008 17:12:50.055675 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8blv" event={"ID":"c67e3e9c-b20a-413c-9835-35057b2872eb","Type":"ContainerDied","Data":"36d68c2f38cb4977edf66c47164592834e493868e8994f2028dbb0058b7b97d8"} Oct 08 17:12:50 crc kubenswrapper[4624]: I1008 17:12:50.055698 4624 scope.go:117] "RemoveContainer" containerID="3c28b1c8de134c5f388af35fe8a2d5c24e036dc35810dd99d9ae9c2f75574efe" Oct 08 17:12:50 crc kubenswrapper[4624]: I1008 17:12:50.055864 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k8blv" Oct 08 17:12:50 crc kubenswrapper[4624]: I1008 17:12:50.096777 4624 scope.go:117] "RemoveContainer" containerID="b607cbf522ec5403172c7010ee62cadefa315a908460df3e14dc1613a811883d" Oct 08 17:12:50 crc kubenswrapper[4624]: I1008 17:12:50.096920 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8blv"] Oct 08 17:12:50 crc kubenswrapper[4624]: I1008 17:12:50.118014 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8blv"] Oct 08 17:12:50 crc kubenswrapper[4624]: I1008 17:12:50.121179 4624 scope.go:117] "RemoveContainer" containerID="1256d714a4780b4bde0f24fc177f96e5d80a6882f771cc02e6aad6723df73339" Oct 08 17:12:50 crc kubenswrapper[4624]: I1008 17:12:50.178753 4624 scope.go:117] "RemoveContainer" containerID="3c28b1c8de134c5f388af35fe8a2d5c24e036dc35810dd99d9ae9c2f75574efe" Oct 08 17:12:50 crc kubenswrapper[4624]: E1008 17:12:50.179299 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c28b1c8de134c5f388af35fe8a2d5c24e036dc35810dd99d9ae9c2f75574efe\": container with ID starting with 3c28b1c8de134c5f388af35fe8a2d5c24e036dc35810dd99d9ae9c2f75574efe not found: ID does not exist" containerID="3c28b1c8de134c5f388af35fe8a2d5c24e036dc35810dd99d9ae9c2f75574efe" Oct 08 17:12:50 crc kubenswrapper[4624]: I1008 17:12:50.179356 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c28b1c8de134c5f388af35fe8a2d5c24e036dc35810dd99d9ae9c2f75574efe"} err="failed to get container status \"3c28b1c8de134c5f388af35fe8a2d5c24e036dc35810dd99d9ae9c2f75574efe\": rpc error: code = NotFound desc = could not find container \"3c28b1c8de134c5f388af35fe8a2d5c24e036dc35810dd99d9ae9c2f75574efe\": container with ID starting with 3c28b1c8de134c5f388af35fe8a2d5c24e036dc35810dd99d9ae9c2f75574efe not found: ID does not exist" Oct 08 17:12:50 crc kubenswrapper[4624]: I1008 17:12:50.179391 4624 scope.go:117] "RemoveContainer" containerID="b607cbf522ec5403172c7010ee62cadefa315a908460df3e14dc1613a811883d" Oct 08 17:12:50 crc kubenswrapper[4624]: E1008 17:12:50.179765 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b607cbf522ec5403172c7010ee62cadefa315a908460df3e14dc1613a811883d\": container with ID starting with b607cbf522ec5403172c7010ee62cadefa315a908460df3e14dc1613a811883d not found: ID does not exist" containerID="b607cbf522ec5403172c7010ee62cadefa315a908460df3e14dc1613a811883d" Oct 08 17:12:50 crc kubenswrapper[4624]: I1008 17:12:50.179803 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b607cbf522ec5403172c7010ee62cadefa315a908460df3e14dc1613a811883d"} err="failed to get container status \"b607cbf522ec5403172c7010ee62cadefa315a908460df3e14dc1613a811883d\": rpc error: code = NotFound desc = could not find container \"b607cbf522ec5403172c7010ee62cadefa315a908460df3e14dc1613a811883d\": container with ID starting with b607cbf522ec5403172c7010ee62cadefa315a908460df3e14dc1613a811883d not found: ID does not exist" Oct 08 17:12:50 crc kubenswrapper[4624]: I1008 17:12:50.179827 4624 scope.go:117] "RemoveContainer" containerID="1256d714a4780b4bde0f24fc177f96e5d80a6882f771cc02e6aad6723df73339" Oct 08 17:12:50 crc kubenswrapper[4624]: E1008 17:12:50.180145 4624 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1256d714a4780b4bde0f24fc177f96e5d80a6882f771cc02e6aad6723df73339\": container with ID starting with 1256d714a4780b4bde0f24fc177f96e5d80a6882f771cc02e6aad6723df73339 not found: ID does not exist" containerID="1256d714a4780b4bde0f24fc177f96e5d80a6882f771cc02e6aad6723df73339" Oct 08 17:12:50 crc kubenswrapper[4624]: I1008 17:12:50.180172 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1256d714a4780b4bde0f24fc177f96e5d80a6882f771cc02e6aad6723df73339"} err="failed to get container status \"1256d714a4780b4bde0f24fc177f96e5d80a6882f771cc02e6aad6723df73339\": rpc error: code = NotFound desc = could not find container \"1256d714a4780b4bde0f24fc177f96e5d80a6882f771cc02e6aad6723df73339\": container with ID starting with 1256d714a4780b4bde0f24fc177f96e5d80a6882f771cc02e6aad6723df73339 not found: ID does not exist" Oct 08 17:12:51 crc kubenswrapper[4624]: I1008 17:12:51.476109 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c67e3e9c-b20a-413c-9835-35057b2872eb" path="/var/lib/kubelet/pods/c67e3e9c-b20a-413c-9835-35057b2872eb/volumes" Oct 08 17:15:00 crc kubenswrapper[4624]: I1008 17:15:00.076059 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 17:15:00 crc kubenswrapper[4624]: I1008 17:15:00.076524 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 17:15:00 crc kubenswrapper[4624]: I1008 17:15:00.232903 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332395-lkg68"] Oct 08 17:15:00 crc kubenswrapper[4624]: E1008 17:15:00.233502 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c67e3e9c-b20a-413c-9835-35057b2872eb" containerName="registry-server" Oct 08 17:15:00 crc kubenswrapper[4624]: I1008 17:15:00.233536 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c67e3e9c-b20a-413c-9835-35057b2872eb" containerName="registry-server" Oct 08 17:15:00 crc kubenswrapper[4624]: E1008 17:15:00.233587 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c67e3e9c-b20a-413c-9835-35057b2872eb" containerName="extract-content" Oct 08 17:15:00 crc kubenswrapper[4624]: I1008 17:15:00.233599 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c67e3e9c-b20a-413c-9835-35057b2872eb" containerName="extract-content" Oct 08 17:15:00 crc kubenswrapper[4624]: E1008 17:15:00.233619 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c67e3e9c-b20a-413c-9835-35057b2872eb" containerName="extract-utilities" Oct 08 17:15:00 crc kubenswrapper[4624]: I1008 17:15:00.233628 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c67e3e9c-b20a-413c-9835-35057b2872eb" containerName="extract-utilities" Oct 08 17:15:00 crc kubenswrapper[4624]: I1008 17:15:00.233986 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="c67e3e9c-b20a-413c-9835-35057b2872eb" containerName="registry-server" Oct 08 17:15:00 crc 
Oct 08 17:15:00 crc kubenswrapper[4624]: I1008 17:15:00.244171 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332395-lkg68"]
Oct 08 17:15:00 crc kubenswrapper[4624]: I1008 17:15:00.255778 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Oct 08 17:15:00 crc kubenswrapper[4624]: I1008 17:15:00.309034 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 08 17:15:00 crc kubenswrapper[4624]: I1008 17:15:00.395194 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c493d601-b23d-4ff3-a171-b68bf2af3014-secret-volume\") pod \"collect-profiles-29332395-lkg68\" (UID: \"c493d601-b23d-4ff3-a171-b68bf2af3014\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332395-lkg68"
Oct 08 17:15:00 crc kubenswrapper[4624]: I1008 17:15:00.395371 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c493d601-b23d-4ff3-a171-b68bf2af3014-config-volume\") pod \"collect-profiles-29332395-lkg68\" (UID: \"c493d601-b23d-4ff3-a171-b68bf2af3014\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332395-lkg68"
Oct 08 17:15:00 crc kubenswrapper[4624]: I1008 17:15:00.395728 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98529\" (UniqueName: \"kubernetes.io/projected/c493d601-b23d-4ff3-a171-b68bf2af3014-kube-api-access-98529\") pod \"collect-profiles-29332395-lkg68\" (UID: \"c493d601-b23d-4ff3-a171-b68bf2af3014\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332395-lkg68"
Oct 08 17:15:00 crc kubenswrapper[4624]: I1008 17:15:00.498168 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c493d601-b23d-4ff3-a171-b68bf2af3014-config-volume\") pod \"collect-profiles-29332395-lkg68\" (UID: \"c493d601-b23d-4ff3-a171-b68bf2af3014\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332395-lkg68"
Oct 08 17:15:00 crc kubenswrapper[4624]: I1008 17:15:00.498520 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98529\" (UniqueName: \"kubernetes.io/projected/c493d601-b23d-4ff3-a171-b68bf2af3014-kube-api-access-98529\") pod \"collect-profiles-29332395-lkg68\" (UID: \"c493d601-b23d-4ff3-a171-b68bf2af3014\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332395-lkg68"
Oct 08 17:15:00 crc kubenswrapper[4624]: I1008 17:15:00.498657 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c493d601-b23d-4ff3-a171-b68bf2af3014-secret-volume\") pod \"collect-profiles-29332395-lkg68\" (UID: \"c493d601-b23d-4ff3-a171-b68bf2af3014\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332395-lkg68"
Oct 08 17:15:00 crc kubenswrapper[4624]: I1008 17:15:00.501137 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c493d601-b23d-4ff3-a171-b68bf2af3014-config-volume\") pod \"collect-profiles-29332395-lkg68\" (UID: \"c493d601-b23d-4ff3-a171-b68bf2af3014\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332395-lkg68"
Oct 08 17:15:00 crc kubenswrapper[4624]: I1008 17:15:00.508429 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c493d601-b23d-4ff3-a171-b68bf2af3014-secret-volume\") pod \"collect-profiles-29332395-lkg68\" (UID: \"c493d601-b23d-4ff3-a171-b68bf2af3014\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332395-lkg68"
Oct 08 17:15:00 crc kubenswrapper[4624]: I1008 17:15:00.519900 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98529\" (UniqueName: \"kubernetes.io/projected/c493d601-b23d-4ff3-a171-b68bf2af3014-kube-api-access-98529\") pod \"collect-profiles-29332395-lkg68\" (UID: \"c493d601-b23d-4ff3-a171-b68bf2af3014\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332395-lkg68"
Oct 08 17:15:00 crc kubenswrapper[4624]: I1008 17:15:00.565754 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332395-lkg68"
Oct 08 17:15:01 crc kubenswrapper[4624]: I1008 17:15:01.183683 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332395-lkg68"]
Oct 08 17:15:01 crc kubenswrapper[4624]: I1008 17:15:01.386443 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332395-lkg68" event={"ID":"c493d601-b23d-4ff3-a171-b68bf2af3014","Type":"ContainerStarted","Data":"93a35085ff3d0c6f083884620a8679a1db3f5377d183d63999291f148e5af7d2"}
Oct 08 17:15:02 crc kubenswrapper[4624]: I1008 17:15:02.399585 4624 generic.go:334] "Generic (PLEG): container finished" podID="c493d601-b23d-4ff3-a171-b68bf2af3014" containerID="645203d7f228b5bdfaaf3c79e0c9ee6f3b9a3adf4727666db8fae552f206a1b8" exitCode=0
Oct 08 17:15:02 crc kubenswrapper[4624]: I1008 17:15:02.399651 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332395-lkg68" event={"ID":"c493d601-b23d-4ff3-a171-b68bf2af3014","Type":"ContainerDied","Data":"645203d7f228b5bdfaaf3c79e0c9ee6f3b9a3adf4727666db8fae552f206a1b8"}
Oct 08 17:15:03 crc kubenswrapper[4624]: I1008 17:15:03.852194 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332395-lkg68"
Oct 08 17:15:03 crc kubenswrapper[4624]: I1008 17:15:03.977593 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98529\" (UniqueName: \"kubernetes.io/projected/c493d601-b23d-4ff3-a171-b68bf2af3014-kube-api-access-98529\") pod \"c493d601-b23d-4ff3-a171-b68bf2af3014\" (UID: \"c493d601-b23d-4ff3-a171-b68bf2af3014\") "
Oct 08 17:15:03 crc kubenswrapper[4624]: I1008 17:15:03.977725 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c493d601-b23d-4ff3-a171-b68bf2af3014-secret-volume\") pod \"c493d601-b23d-4ff3-a171-b68bf2af3014\" (UID: \"c493d601-b23d-4ff3-a171-b68bf2af3014\") "
Oct 08 17:15:03 crc kubenswrapper[4624]: I1008 17:15:03.977756 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c493d601-b23d-4ff3-a171-b68bf2af3014-config-volume\") pod \"c493d601-b23d-4ff3-a171-b68bf2af3014\" (UID: \"c493d601-b23d-4ff3-a171-b68bf2af3014\") "
Oct 08 17:15:03 crc kubenswrapper[4624]: I1008 17:15:03.979331 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c493d601-b23d-4ff3-a171-b68bf2af3014-config-volume" (OuterVolumeSpecName: "config-volume") pod "c493d601-b23d-4ff3-a171-b68bf2af3014" (UID: "c493d601-b23d-4ff3-a171-b68bf2af3014"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 17:15:03 crc kubenswrapper[4624]: I1008 17:15:03.987456 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c493d601-b23d-4ff3-a171-b68bf2af3014-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c493d601-b23d-4ff3-a171-b68bf2af3014" (UID: "c493d601-b23d-4ff3-a171-b68bf2af3014"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 17:15:03 crc kubenswrapper[4624]: I1008 17:15:03.988148 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c493d601-b23d-4ff3-a171-b68bf2af3014-kube-api-access-98529" (OuterVolumeSpecName: "kube-api-access-98529") pod "c493d601-b23d-4ff3-a171-b68bf2af3014" (UID: "c493d601-b23d-4ff3-a171-b68bf2af3014"). InnerVolumeSpecName "kube-api-access-98529". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 17:15:04 crc kubenswrapper[4624]: I1008 17:15:04.081094 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98529\" (UniqueName: \"kubernetes.io/projected/c493d601-b23d-4ff3-a171-b68bf2af3014-kube-api-access-98529\") on node \"crc\" DevicePath \"\""
Oct 08 17:15:04 crc kubenswrapper[4624]: I1008 17:15:04.081134 4624 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c493d601-b23d-4ff3-a171-b68bf2af3014-secret-volume\") on node \"crc\" DevicePath \"\""
Oct 08 17:15:04 crc kubenswrapper[4624]: I1008 17:15:04.081151 4624 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c493d601-b23d-4ff3-a171-b68bf2af3014-config-volume\") on node \"crc\" DevicePath \"\""
Oct 08 17:15:04 crc kubenswrapper[4624]: I1008 17:15:04.439045 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332395-lkg68" event={"ID":"c493d601-b23d-4ff3-a171-b68bf2af3014","Type":"ContainerDied","Data":"93a35085ff3d0c6f083884620a8679a1db3f5377d183d63999291f148e5af7d2"}
Oct 08 17:15:04 crc kubenswrapper[4624]: I1008 17:15:04.439091 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93a35085ff3d0c6f083884620a8679a1db3f5377d183d63999291f148e5af7d2"
Oct 08 17:15:04 crc kubenswrapper[4624]: I1008 17:15:04.439178 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332395-lkg68"
Oct 08 17:15:04 crc kubenswrapper[4624]: I1008 17:15:04.970790 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r"]
Oct 08 17:15:04 crc kubenswrapper[4624]: I1008 17:15:04.987432 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332350-pbt6r"]
Oct 08 17:15:05 crc kubenswrapper[4624]: I1008 17:15:05.486612 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea647041-60ce-41ca-a1b2-872205b6f242" path="/var/lib/kubelet/pods/ea647041-60ce-41ca-a1b2-872205b6f242/volumes"
Oct 08 17:15:06 crc kubenswrapper[4624]: I1008 17:15:06.850981 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pn4qd"]
Oct 08 17:15:06 crc kubenswrapper[4624]: E1008 17:15:06.851818 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c493d601-b23d-4ff3-a171-b68bf2af3014" containerName="collect-profiles"
Oct 08 17:15:06 crc kubenswrapper[4624]: I1008 17:15:06.851834 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c493d601-b23d-4ff3-a171-b68bf2af3014" containerName="collect-profiles"
Oct 08 17:15:06 crc kubenswrapper[4624]: I1008 17:15:06.852135 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="c493d601-b23d-4ff3-a171-b68bf2af3014" containerName="collect-profiles"
Oct 08 17:15:06 crc kubenswrapper[4624]: I1008 17:15:06.855873 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pn4qd"
Oct 08 17:15:06 crc kubenswrapper[4624]: I1008 17:15:06.887611 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pn4qd"]
Oct 08 17:15:06 crc kubenswrapper[4624]: I1008 17:15:06.987792 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0133bb22-c475-45a9-a1dd-4c878ff5d011-utilities\") pod \"community-operators-pn4qd\" (UID: \"0133bb22-c475-45a9-a1dd-4c878ff5d011\") " pod="openshift-marketplace/community-operators-pn4qd"
Oct 08 17:15:06 crc kubenswrapper[4624]: I1008 17:15:06.987971 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0133bb22-c475-45a9-a1dd-4c878ff5d011-catalog-content\") pod \"community-operators-pn4qd\" (UID: \"0133bb22-c475-45a9-a1dd-4c878ff5d011\") " pod="openshift-marketplace/community-operators-pn4qd"
Oct 08 17:15:06 crc kubenswrapper[4624]: I1008 17:15:06.988034 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb2px\" (UniqueName: \"kubernetes.io/projected/0133bb22-c475-45a9-a1dd-4c878ff5d011-kube-api-access-gb2px\") pod \"community-operators-pn4qd\" (UID: \"0133bb22-c475-45a9-a1dd-4c878ff5d011\") " pod="openshift-marketplace/community-operators-pn4qd"
Oct 08 17:15:07 crc kubenswrapper[4624]: I1008 17:15:07.092126 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0133bb22-c475-45a9-a1dd-4c878ff5d011-catalog-content\") pod \"community-operators-pn4qd\" (UID: \"0133bb22-c475-45a9-a1dd-4c878ff5d011\") " pod="openshift-marketplace/community-operators-pn4qd"
Oct 08 17:15:07 crc kubenswrapper[4624]: I1008 17:15:07.092231 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb2px\" (UniqueName: \"kubernetes.io/projected/0133bb22-c475-45a9-a1dd-4c878ff5d011-kube-api-access-gb2px\") pod \"community-operators-pn4qd\" (UID: \"0133bb22-c475-45a9-a1dd-4c878ff5d011\") " pod="openshift-marketplace/community-operators-pn4qd"
Oct 08 17:15:07 crc kubenswrapper[4624]: I1008 17:15:07.092277 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0133bb22-c475-45a9-a1dd-4c878ff5d011-utilities\") pod \"community-operators-pn4qd\" (UID: \"0133bb22-c475-45a9-a1dd-4c878ff5d011\") " pod="openshift-marketplace/community-operators-pn4qd"
Oct 08 17:15:07 crc kubenswrapper[4624]: I1008 17:15:07.092656 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0133bb22-c475-45a9-a1dd-4c878ff5d011-catalog-content\") pod \"community-operators-pn4qd\" (UID: \"0133bb22-c475-45a9-a1dd-4c878ff5d011\") " pod="openshift-marketplace/community-operators-pn4qd"
Oct 08 17:15:07 crc kubenswrapper[4624]: I1008 17:15:07.092909 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0133bb22-c475-45a9-a1dd-4c878ff5d011-utilities\") pod \"community-operators-pn4qd\" (UID: \"0133bb22-c475-45a9-a1dd-4c878ff5d011\") " pod="openshift-marketplace/community-operators-pn4qd"
Oct 08 17:15:07 crc kubenswrapper[4624]: I1008 17:15:07.118838 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb2px\" (UniqueName: \"kubernetes.io/projected/0133bb22-c475-45a9-a1dd-4c878ff5d011-kube-api-access-gb2px\") pod \"community-operators-pn4qd\" (UID: \"0133bb22-c475-45a9-a1dd-4c878ff5d011\") " pod="openshift-marketplace/community-operators-pn4qd"
Oct 08 17:15:07 crc kubenswrapper[4624]: I1008 17:15:07.187333 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pn4qd"
Oct 08 17:15:07 crc kubenswrapper[4624]: I1008 17:15:07.778294 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pn4qd"]
Oct 08 17:15:08 crc kubenswrapper[4624]: I1008 17:15:08.484320 4624 generic.go:334] "Generic (PLEG): container finished" podID="0133bb22-c475-45a9-a1dd-4c878ff5d011" containerID="e279410edafd66d702b811684e3ee892083dc450493fc476df1a8888842261a3" exitCode=0
Oct 08 17:15:08 crc kubenswrapper[4624]: I1008 17:15:08.484424 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn4qd" event={"ID":"0133bb22-c475-45a9-a1dd-4c878ff5d011","Type":"ContainerDied","Data":"e279410edafd66d702b811684e3ee892083dc450493fc476df1a8888842261a3"}
Oct 08 17:15:08 crc kubenswrapper[4624]: I1008 17:15:08.486032 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn4qd" event={"ID":"0133bb22-c475-45a9-a1dd-4c878ff5d011","Type":"ContainerStarted","Data":"ba24ebab2b15ddb6c78a55127f4d11685c8a052e13e3f1e0465c9ed614c0e685"}
Oct 08 17:15:08 crc kubenswrapper[4624]: I1008 17:15:08.489412 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 08 17:15:10 crc kubenswrapper[4624]: I1008 17:15:10.524059 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn4qd" event={"ID":"0133bb22-c475-45a9-a1dd-4c878ff5d011","Type":"ContainerStarted","Data":"8e6149bfd97203b2e871dbb547d6384b0b02d6b2c12c9846b20f945e3b3d11fa"}
Oct 08 17:15:11 crc kubenswrapper[4624]: I1008 17:15:11.534123 4624 generic.go:334] "Generic (PLEG): container finished" podID="0133bb22-c475-45a9-a1dd-4c878ff5d011" containerID="8e6149bfd97203b2e871dbb547d6384b0b02d6b2c12c9846b20f945e3b3d11fa" exitCode=0
Oct 08 17:15:11 crc kubenswrapper[4624]: I1008 17:15:11.534169 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn4qd" event={"ID":"0133bb22-c475-45a9-a1dd-4c878ff5d011","Type":"ContainerDied","Data":"8e6149bfd97203b2e871dbb547d6384b0b02d6b2c12c9846b20f945e3b3d11fa"}
Oct 08 17:15:13 crc kubenswrapper[4624]: I1008 17:15:13.555875 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn4qd" event={"ID":"0133bb22-c475-45a9-a1dd-4c878ff5d011","Type":"ContainerStarted","Data":"28cc0297d7f749fb1a30b8a9d0ae1ce10b655f22c32249dc6828c2433fd1a9d6"}
Oct 08 17:15:13 crc kubenswrapper[4624]: I1008 17:15:13.577271 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pn4qd" podStartSLOduration=3.668508049 podStartE2EDuration="7.577251613s" podCreationTimestamp="2025-10-08 17:15:06 +0000 UTC" firstStartedPulling="2025-10-08 17:15:08.487502092 +0000 UTC m=+10333.638437169" lastFinishedPulling="2025-10-08 17:15:12.396245656 +0000 UTC m=+10337.547180733" observedRunningTime="2025-10-08 17:15:13.573701973 +0000 UTC m=+10338.724637050" watchObservedRunningTime="2025-10-08 17:15:13.577251613 +0000 UTC m=+10338.728186680"
Oct 08 17:15:17 crc kubenswrapper[4624]: I1008 17:15:17.187509 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pn4qd"
Oct 08 17:15:17 crc kubenswrapper[4624]: I1008 17:15:17.188862 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pn4qd"
Oct 08 17:15:18 crc kubenswrapper[4624]: I1008 17:15:18.247292 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pn4qd" podUID="0133bb22-c475-45a9-a1dd-4c878ff5d011" containerName="registry-server" probeResult="failure" output=<
Oct 08 17:15:18 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 17:15:18 crc kubenswrapper[4624]: >
Oct 08 17:15:27 crc kubenswrapper[4624]: I1008 17:15:27.253493 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pn4qd"
Oct 08 17:15:27 crc kubenswrapper[4624]: I1008 17:15:27.307481 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pn4qd"
Oct 08 17:15:27 crc kubenswrapper[4624]: I1008 17:15:27.496083 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pn4qd"]
Oct 08 17:15:28 crc kubenswrapper[4624]: I1008 17:15:28.716559 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pn4qd" podUID="0133bb22-c475-45a9-a1dd-4c878ff5d011" containerName="registry-server" containerID="cri-o://28cc0297d7f749fb1a30b8a9d0ae1ce10b655f22c32249dc6828c2433fd1a9d6" gracePeriod=2
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.387820 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pn4qd"
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.512052 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0133bb22-c475-45a9-a1dd-4c878ff5d011-catalog-content\") pod \"0133bb22-c475-45a9-a1dd-4c878ff5d011\" (UID: \"0133bb22-c475-45a9-a1dd-4c878ff5d011\") "
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.512164 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb2px\" (UniqueName: \"kubernetes.io/projected/0133bb22-c475-45a9-a1dd-4c878ff5d011-kube-api-access-gb2px\") pod \"0133bb22-c475-45a9-a1dd-4c878ff5d011\" (UID: \"0133bb22-c475-45a9-a1dd-4c878ff5d011\") "
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.512235 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0133bb22-c475-45a9-a1dd-4c878ff5d011-utilities\") pod \"0133bb22-c475-45a9-a1dd-4c878ff5d011\" (UID: \"0133bb22-c475-45a9-a1dd-4c878ff5d011\") "
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.513188 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0133bb22-c475-45a9-a1dd-4c878ff5d011-utilities" (OuterVolumeSpecName: "utilities") pod "0133bb22-c475-45a9-a1dd-4c878ff5d011" (UID: "0133bb22-c475-45a9-a1dd-4c878ff5d011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.540365 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0133bb22-c475-45a9-a1dd-4c878ff5d011-kube-api-access-gb2px" (OuterVolumeSpecName: "kube-api-access-gb2px") pod "0133bb22-c475-45a9-a1dd-4c878ff5d011" (UID: "0133bb22-c475-45a9-a1dd-4c878ff5d011"). InnerVolumeSpecName "kube-api-access-gb2px". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.572727 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0133bb22-c475-45a9-a1dd-4c878ff5d011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0133bb22-c475-45a9-a1dd-4c878ff5d011" (UID: "0133bb22-c475-45a9-a1dd-4c878ff5d011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.627121 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb2px\" (UniqueName: \"kubernetes.io/projected/0133bb22-c475-45a9-a1dd-4c878ff5d011-kube-api-access-gb2px\") on node \"crc\" DevicePath \"\""
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.627198 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0133bb22-c475-45a9-a1dd-4c878ff5d011-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.627220 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0133bb22-c475-45a9-a1dd-4c878ff5d011-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.735176 4624 generic.go:334] "Generic (PLEG): container finished" podID="0133bb22-c475-45a9-a1dd-4c878ff5d011" containerID="28cc0297d7f749fb1a30b8a9d0ae1ce10b655f22c32249dc6828c2433fd1a9d6" exitCode=0
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.735365 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pn4qd"
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.735404 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn4qd" event={"ID":"0133bb22-c475-45a9-a1dd-4c878ff5d011","Type":"ContainerDied","Data":"28cc0297d7f749fb1a30b8a9d0ae1ce10b655f22c32249dc6828c2433fd1a9d6"}
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.735788 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn4qd" event={"ID":"0133bb22-c475-45a9-a1dd-4c878ff5d011","Type":"ContainerDied","Data":"ba24ebab2b15ddb6c78a55127f4d11685c8a052e13e3f1e0465c9ed614c0e685"}
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.735822 4624 scope.go:117] "RemoveContainer" containerID="28cc0297d7f749fb1a30b8a9d0ae1ce10b655f22c32249dc6828c2433fd1a9d6"
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.769534 4624 scope.go:117] "RemoveContainer" containerID="8e6149bfd97203b2e871dbb547d6384b0b02d6b2c12c9846b20f945e3b3d11fa"
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.793183 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pn4qd"]
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.797466 4624 scope.go:117] "RemoveContainer" containerID="e279410edafd66d702b811684e3ee892083dc450493fc476df1a8888842261a3"
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.801587 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pn4qd"]
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.847809 4624 scope.go:117] "RemoveContainer" containerID="28cc0297d7f749fb1a30b8a9d0ae1ce10b655f22c32249dc6828c2433fd1a9d6"
Oct 08 17:15:29 crc kubenswrapper[4624]: E1008 17:15:29.850129 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28cc0297d7f749fb1a30b8a9d0ae1ce10b655f22c32249dc6828c2433fd1a9d6\": container with ID starting with 28cc0297d7f749fb1a30b8a9d0ae1ce10b655f22c32249dc6828c2433fd1a9d6 not found: ID does not exist" containerID="28cc0297d7f749fb1a30b8a9d0ae1ce10b655f22c32249dc6828c2433fd1a9d6"
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.850337 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28cc0297d7f749fb1a30b8a9d0ae1ce10b655f22c32249dc6828c2433fd1a9d6"} err="failed to get container status \"28cc0297d7f749fb1a30b8a9d0ae1ce10b655f22c32249dc6828c2433fd1a9d6\": rpc error: code = NotFound desc = could not find container \"28cc0297d7f749fb1a30b8a9d0ae1ce10b655f22c32249dc6828c2433fd1a9d6\": container with ID starting with 28cc0297d7f749fb1a30b8a9d0ae1ce10b655f22c32249dc6828c2433fd1a9d6 not found: ID does not exist"
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.850448 4624 scope.go:117] "RemoveContainer" containerID="8e6149bfd97203b2e871dbb547d6384b0b02d6b2c12c9846b20f945e3b3d11fa"
Oct 08 17:15:29 crc kubenswrapper[4624]: E1008 17:15:29.851245 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e6149bfd97203b2e871dbb547d6384b0b02d6b2c12c9846b20f945e3b3d11fa\": container with ID starting with 8e6149bfd97203b2e871dbb547d6384b0b02d6b2c12c9846b20f945e3b3d11fa not found: ID does not exist" containerID="8e6149bfd97203b2e871dbb547d6384b0b02d6b2c12c9846b20f945e3b3d11fa"
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.851377 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e6149bfd97203b2e871dbb547d6384b0b02d6b2c12c9846b20f945e3b3d11fa"} err="failed to get container status \"8e6149bfd97203b2e871dbb547d6384b0b02d6b2c12c9846b20f945e3b3d11fa\": rpc error: code = NotFound desc = could not find container \"8e6149bfd97203b2e871dbb547d6384b0b02d6b2c12c9846b20f945e3b3d11fa\": container with ID starting with 8e6149bfd97203b2e871dbb547d6384b0b02d6b2c12c9846b20f945e3b3d11fa not found: ID does not exist"
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.851496 4624 scope.go:117] "RemoveContainer" containerID="e279410edafd66d702b811684e3ee892083dc450493fc476df1a8888842261a3"
Oct 08 17:15:29 crc kubenswrapper[4624]: E1008 17:15:29.852149 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e279410edafd66d702b811684e3ee892083dc450493fc476df1a8888842261a3\": container with ID starting with e279410edafd66d702b811684e3ee892083dc450493fc476df1a8888842261a3 not found: ID does not exist" containerID="e279410edafd66d702b811684e3ee892083dc450493fc476df1a8888842261a3"
Oct 08 17:15:29 crc kubenswrapper[4624]: I1008 17:15:29.852177 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e279410edafd66d702b811684e3ee892083dc450493fc476df1a8888842261a3"} err="failed to get container status \"e279410edafd66d702b811684e3ee892083dc450493fc476df1a8888842261a3\": rpc error: code = NotFound desc = could not find container \"e279410edafd66d702b811684e3ee892083dc450493fc476df1a8888842261a3\": container with ID starting with e279410edafd66d702b811684e3ee892083dc450493fc476df1a8888842261a3 not found: ID does not exist"
Oct 08 17:15:30 crc kubenswrapper[4624]: I1008 17:15:30.075947 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 17:15:30 crc kubenswrapper[4624]: I1008 17:15:30.076284 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 17:15:31 crc kubenswrapper[4624]: I1008 17:15:31.488314 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0133bb22-c475-45a9-a1dd-4c878ff5d011" path="/var/lib/kubelet/pods/0133bb22-c475-45a9-a1dd-4c878ff5d011/volumes"
Oct 08 17:15:48 crc kubenswrapper[4624]: I1008 17:15:48.055700 4624 scope.go:117] "RemoveContainer" containerID="2e0413c759fe30d8775017dcd9614bc9d5a7d808435391ae614a3213ecaa1cd2"
Oct 08 17:16:00 crc kubenswrapper[4624]: I1008 17:16:00.075984 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 17:16:00 crc kubenswrapper[4624]: I1008 17:16:00.076557 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
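The wording of the startup-probe failure recorded above for community-operators-pn4qd ("timeout: failed to connect service \":50051\" within 1s") matches a grpc_health_probe-style gRPC health check against the registry-server port, while the machine-config-daemon liveness failures are plain HTTP GETs to 127.0.0.1:8798/health. A minimal sketch of the gRPC variant, assuming the server exposes the standard grpc.health.v1 service on :50051:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

// probe mimics a grpc_health_probe-style check with a fixed budget: dial the
// address and call grpc.health.v1.Health/Check before the deadline expires.
func probe(addr string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	conn, err := grpc.DialContext(ctx, addr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock())
	if err != nil {
		// This is the shape of the failure in the log above.
		return fmt.Errorf("timeout: failed to connect service %q within %s", addr, timeout)
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		return err
	}
	if resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
		return fmt.Errorf("not serving: %s", resp.GetStatus())
	}
	return nil
}

func main() {
	if err := probe("localhost:50051", time.Second); err != nil {
		fmt.Println(err)
	}
}
```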
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 17:16:00 crc kubenswrapper[4624]: I1008 17:16:00.076625 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 17:16:00 crc kubenswrapper[4624]: I1008 17:16:00.077715 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"87349138b6a186a60530d29c54a92ba43c8f24ce21ebf028638b018e714656d7"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 17:16:00 crc kubenswrapper[4624]: I1008 17:16:00.077793 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://87349138b6a186a60530d29c54a92ba43c8f24ce21ebf028638b018e714656d7" gracePeriod=600 Oct 08 17:16:01 crc kubenswrapper[4624]: I1008 17:16:01.039165 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="87349138b6a186a60530d29c54a92ba43c8f24ce21ebf028638b018e714656d7" exitCode=0 Oct 08 17:16:01 crc kubenswrapper[4624]: I1008 17:16:01.039253 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"87349138b6a186a60530d29c54a92ba43c8f24ce21ebf028638b018e714656d7"} Oct 08 17:16:01 crc kubenswrapper[4624]: I1008 17:16:01.039627 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee"} Oct 08 17:16:01 crc kubenswrapper[4624]: I1008 17:16:01.039663 4624 scope.go:117] "RemoveContainer" containerID="dc28f311ae903f7c32d02ef2b4c876f943b1b765366465dcc550990efc90cab2" Oct 08 17:18:00 crc kubenswrapper[4624]: I1008 17:18:00.076372 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 17:18:00 crc kubenswrapper[4624]: I1008 17:18:00.077529 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 17:18:30 crc kubenswrapper[4624]: I1008 17:18:30.077039 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 17:18:30 crc kubenswrapper[4624]: I1008 17:18:30.078669 4624 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 17:19:00 crc kubenswrapper[4624]: I1008 17:19:00.076376 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 17:19:00 crc kubenswrapper[4624]: I1008 17:19:00.077004 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 17:19:00 crc kubenswrapper[4624]: I1008 17:19:00.077080 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 17:19:00 crc kubenswrapper[4624]: I1008 17:19:00.078161 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 17:19:00 crc kubenswrapper[4624]: I1008 17:19:00.078235 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" gracePeriod=600 Oct 08 17:19:00 crc kubenswrapper[4624]: E1008 17:19:00.220939 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:19:00 crc kubenswrapper[4624]: I1008 17:19:00.934535 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" exitCode=0 Oct 08 17:19:00 crc kubenswrapper[4624]: I1008 17:19:00.934582 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee"} Oct 08 17:19:00 crc kubenswrapper[4624]: I1008 17:19:00.934684 4624 scope.go:117] "RemoveContainer" containerID="87349138b6a186a60530d29c54a92ba43c8f24ce21ebf028638b018e714656d7" Oct 08 17:19:00 crc kubenswrapper[4624]: I1008 17:19:00.935480 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:19:00 crc kubenswrapper[4624]: E1008 
17:19:00.935952 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:19:15 crc kubenswrapper[4624]: I1008 17:19:15.480480 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:19:15 crc kubenswrapper[4624]: E1008 17:19:15.481549 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:19:30 crc kubenswrapper[4624]: I1008 17:19:30.466049 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:19:30 crc kubenswrapper[4624]: E1008 17:19:30.467013 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:19:41 crc kubenswrapper[4624]: I1008 17:19:41.466784 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:19:41 crc kubenswrapper[4624]: E1008 17:19:41.467931 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:19:53 crc kubenswrapper[4624]: I1008 17:19:53.465980 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:19:53 crc kubenswrapper[4624]: E1008 17:19:53.466949 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:20:05 crc kubenswrapper[4624]: I1008 17:20:05.476305 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:20:05 crc kubenswrapper[4624]: E1008 17:20:05.477130 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:20:18 crc kubenswrapper[4624]: I1008 17:20:18.466108 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:20:18 crc kubenswrapper[4624]: E1008 17:20:18.466859 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:20:31 crc kubenswrapper[4624]: I1008 17:20:31.466544 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:20:31 crc kubenswrapper[4624]: E1008 17:20:31.467278 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:20:43 crc kubenswrapper[4624]: I1008 17:20:43.095201 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cnmck"] Oct 08 17:20:43 crc kubenswrapper[4624]: E1008 17:20:43.096392 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0133bb22-c475-45a9-a1dd-4c878ff5d011" containerName="extract-utilities" Oct 08 17:20:43 crc kubenswrapper[4624]: I1008 17:20:43.096410 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="0133bb22-c475-45a9-a1dd-4c878ff5d011" containerName="extract-utilities" Oct 08 17:20:43 crc kubenswrapper[4624]: E1008 17:20:43.096438 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0133bb22-c475-45a9-a1dd-4c878ff5d011" containerName="registry-server" Oct 08 17:20:43 crc kubenswrapper[4624]: I1008 17:20:43.096446 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="0133bb22-c475-45a9-a1dd-4c878ff5d011" containerName="registry-server" Oct 08 17:20:43 crc kubenswrapper[4624]: E1008 17:20:43.096461 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0133bb22-c475-45a9-a1dd-4c878ff5d011" containerName="extract-content" Oct 08 17:20:43 crc kubenswrapper[4624]: I1008 17:20:43.096470 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="0133bb22-c475-45a9-a1dd-4c878ff5d011" containerName="extract-content" Oct 08 17:20:43 crc kubenswrapper[4624]: I1008 17:20:43.096763 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="0133bb22-c475-45a9-a1dd-4c878ff5d011" containerName="registry-server" Oct 08 17:20:43 crc kubenswrapper[4624]: I1008 17:20:43.098582 4624 util.go:30] "No sandbox for pod can be found. 
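From 17:19:00 onward every sync attempt for machine-config-daemon-zfrv8 is rejected with "back-off 5m0s restarting failed container": the pod has hit the ceiling of kubelet's crash-loop backoff, which by default starts at 10s and doubles after each failed restart until it is capped at 5m. A sketch of that schedule (illustrative, not kubelet code):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Default kubelet CrashLoopBackOff schedule: 10s initial delay, doubled
	// after each failed restart, capped at 5m -- the "back-off 5m0s" above.
	const initial = 10 * time.Second
	const maxDelay = 5 * time.Minute

	delay := initial
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("restart %d: wait %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// Restarts 6..8 all print 5m0s: the pod stays in CrashLoopBackOff.
}
```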
Need to start a new one" pod="openshift-marketplace/redhat-operators-cnmck" Oct 08 17:20:43 crc kubenswrapper[4624]: I1008 17:20:43.180772 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cnmck"] Oct 08 17:20:43 crc kubenswrapper[4624]: I1008 17:20:43.293370 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73e2f612-5e66-4ff5-952e-f4f5123d02ea-utilities\") pod \"redhat-operators-cnmck\" (UID: \"73e2f612-5e66-4ff5-952e-f4f5123d02ea\") " pod="openshift-marketplace/redhat-operators-cnmck" Oct 08 17:20:43 crc kubenswrapper[4624]: I1008 17:20:43.293493 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z9r7\" (UniqueName: \"kubernetes.io/projected/73e2f612-5e66-4ff5-952e-f4f5123d02ea-kube-api-access-7z9r7\") pod \"redhat-operators-cnmck\" (UID: \"73e2f612-5e66-4ff5-952e-f4f5123d02ea\") " pod="openshift-marketplace/redhat-operators-cnmck" Oct 08 17:20:43 crc kubenswrapper[4624]: I1008 17:20:43.293556 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73e2f612-5e66-4ff5-952e-f4f5123d02ea-catalog-content\") pod \"redhat-operators-cnmck\" (UID: \"73e2f612-5e66-4ff5-952e-f4f5123d02ea\") " pod="openshift-marketplace/redhat-operators-cnmck" Oct 08 17:20:43 crc kubenswrapper[4624]: I1008 17:20:43.395135 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73e2f612-5e66-4ff5-952e-f4f5123d02ea-utilities\") pod \"redhat-operators-cnmck\" (UID: \"73e2f612-5e66-4ff5-952e-f4f5123d02ea\") " pod="openshift-marketplace/redhat-operators-cnmck" Oct 08 17:20:43 crc kubenswrapper[4624]: I1008 17:20:43.395258 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z9r7\" (UniqueName: \"kubernetes.io/projected/73e2f612-5e66-4ff5-952e-f4f5123d02ea-kube-api-access-7z9r7\") pod \"redhat-operators-cnmck\" (UID: \"73e2f612-5e66-4ff5-952e-f4f5123d02ea\") " pod="openshift-marketplace/redhat-operators-cnmck" Oct 08 17:20:43 crc kubenswrapper[4624]: I1008 17:20:43.395341 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73e2f612-5e66-4ff5-952e-f4f5123d02ea-catalog-content\") pod \"redhat-operators-cnmck\" (UID: \"73e2f612-5e66-4ff5-952e-f4f5123d02ea\") " pod="openshift-marketplace/redhat-operators-cnmck" Oct 08 17:20:43 crc kubenswrapper[4624]: I1008 17:20:43.396045 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73e2f612-5e66-4ff5-952e-f4f5123d02ea-utilities\") pod \"redhat-operators-cnmck\" (UID: \"73e2f612-5e66-4ff5-952e-f4f5123d02ea\") " pod="openshift-marketplace/redhat-operators-cnmck" Oct 08 17:20:43 crc kubenswrapper[4624]: I1008 17:20:43.396128 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73e2f612-5e66-4ff5-952e-f4f5123d02ea-catalog-content\") pod \"redhat-operators-cnmck\" (UID: \"73e2f612-5e66-4ff5-952e-f4f5123d02ea\") " pod="openshift-marketplace/redhat-operators-cnmck" Oct 08 17:20:43 crc kubenswrapper[4624]: I1008 17:20:43.425074 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7z9r7\" (UniqueName: \"kubernetes.io/projected/73e2f612-5e66-4ff5-952e-f4f5123d02ea-kube-api-access-7z9r7\") pod \"redhat-operators-cnmck\" (UID: \"73e2f612-5e66-4ff5-952e-f4f5123d02ea\") " pod="openshift-marketplace/redhat-operators-cnmck" Oct 08 17:20:43 crc kubenswrapper[4624]: I1008 17:20:43.719666 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cnmck" Oct 08 17:20:44 crc kubenswrapper[4624]: I1008 17:20:44.602495 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cnmck"] Oct 08 17:20:45 crc kubenswrapper[4624]: I1008 17:20:45.041305 4624 generic.go:334] "Generic (PLEG): container finished" podID="73e2f612-5e66-4ff5-952e-f4f5123d02ea" containerID="9fbb87eb38502a79502e9137b75a787eac82d1381504ee621ad277049efad3c6" exitCode=0 Oct 08 17:20:45 crc kubenswrapper[4624]: I1008 17:20:45.041392 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnmck" event={"ID":"73e2f612-5e66-4ff5-952e-f4f5123d02ea","Type":"ContainerDied","Data":"9fbb87eb38502a79502e9137b75a787eac82d1381504ee621ad277049efad3c6"} Oct 08 17:20:45 crc kubenswrapper[4624]: I1008 17:20:45.041915 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnmck" event={"ID":"73e2f612-5e66-4ff5-952e-f4f5123d02ea","Type":"ContainerStarted","Data":"61265973bd9fdc519c81c02c0b5a1486ddfd535c32270500e668482ba37316bd"} Oct 08 17:20:45 crc kubenswrapper[4624]: I1008 17:20:45.046686 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 17:20:45 crc kubenswrapper[4624]: I1008 17:20:45.473075 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:20:45 crc kubenswrapper[4624]: E1008 17:20:45.473856 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:20:47 crc kubenswrapper[4624]: I1008 17:20:47.066016 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnmck" event={"ID":"73e2f612-5e66-4ff5-952e-f4f5123d02ea","Type":"ContainerStarted","Data":"9c771d7a4565df6cd50fb4bafb813d1dc21d0c9b4e820446f4a88cd202088b85"} Oct 08 17:20:51 crc kubenswrapper[4624]: I1008 17:20:51.110414 4624 generic.go:334] "Generic (PLEG): container finished" podID="73e2f612-5e66-4ff5-952e-f4f5123d02ea" containerID="9c771d7a4565df6cd50fb4bafb813d1dc21d0c9b4e820446f4a88cd202088b85" exitCode=0 Oct 08 17:20:51 crc kubenswrapper[4624]: I1008 17:20:51.110463 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnmck" event={"ID":"73e2f612-5e66-4ff5-952e-f4f5123d02ea","Type":"ContainerDied","Data":"9c771d7a4565df6cd50fb4bafb813d1dc21d0c9b4e820446f4a88cd202088b85"} Oct 08 17:20:52 crc kubenswrapper[4624]: I1008 17:20:52.125958 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnmck" 
event={"ID":"73e2f612-5e66-4ff5-952e-f4f5123d02ea","Type":"ContainerStarted","Data":"49a80975b3b8235dc2669db00ca81e00d061cdd19c7ee1633aafbc265431c494"} Oct 08 17:20:52 crc kubenswrapper[4624]: I1008 17:20:52.155272 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cnmck" podStartSLOduration=2.607655707 podStartE2EDuration="9.155249412s" podCreationTimestamp="2025-10-08 17:20:43 +0000 UTC" firstStartedPulling="2025-10-08 17:20:45.046418846 +0000 UTC m=+10670.197353923" lastFinishedPulling="2025-10-08 17:20:51.594012541 +0000 UTC m=+10676.744947628" observedRunningTime="2025-10-08 17:20:52.144469087 +0000 UTC m=+10677.295404174" watchObservedRunningTime="2025-10-08 17:20:52.155249412 +0000 UTC m=+10677.306184489" Oct 08 17:20:53 crc kubenswrapper[4624]: I1008 17:20:53.720928 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cnmck" Oct 08 17:20:53 crc kubenswrapper[4624]: I1008 17:20:53.721409 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cnmck" Oct 08 17:20:54 crc kubenswrapper[4624]: I1008 17:20:54.802708 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cnmck" podUID="73e2f612-5e66-4ff5-952e-f4f5123d02ea" containerName="registry-server" probeResult="failure" output=< Oct 08 17:20:54 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 17:20:54 crc kubenswrapper[4624]: > Oct 08 17:20:57 crc kubenswrapper[4624]: I1008 17:20:57.466169 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:20:57 crc kubenswrapper[4624]: E1008 17:20:57.466853 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:21:04 crc kubenswrapper[4624]: I1008 17:21:04.772962 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cnmck" podUID="73e2f612-5e66-4ff5-952e-f4f5123d02ea" containerName="registry-server" probeResult="failure" output=< Oct 08 17:21:04 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 17:21:04 crc kubenswrapper[4624]: > Oct 08 17:21:12 crc kubenswrapper[4624]: I1008 17:21:12.466775 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:21:12 crc kubenswrapper[4624]: E1008 17:21:12.467804 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:21:15 crc kubenswrapper[4624]: I1008 17:21:15.272879 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cnmck" podUID="73e2f612-5e66-4ff5-952e-f4f5123d02ea" 
containerName="registry-server" probeResult="failure" output=< Oct 08 17:21:15 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 17:21:15 crc kubenswrapper[4624]: > Oct 08 17:21:17 crc kubenswrapper[4624]: I1008 17:21:17.947411 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5699q"] Oct 08 17:21:17 crc kubenswrapper[4624]: I1008 17:21:17.951366 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5699q" Oct 08 17:21:17 crc kubenswrapper[4624]: I1008 17:21:17.963686 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5699q"] Oct 08 17:21:18 crc kubenswrapper[4624]: I1008 17:21:18.065410 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8xjv\" (UniqueName: \"kubernetes.io/projected/cee06cd4-6d0b-4f4c-8734-b929607ec920-kube-api-access-v8xjv\") pod \"certified-operators-5699q\" (UID: \"cee06cd4-6d0b-4f4c-8734-b929607ec920\") " pod="openshift-marketplace/certified-operators-5699q" Oct 08 17:21:18 crc kubenswrapper[4624]: I1008 17:21:18.065551 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cee06cd4-6d0b-4f4c-8734-b929607ec920-catalog-content\") pod \"certified-operators-5699q\" (UID: \"cee06cd4-6d0b-4f4c-8734-b929607ec920\") " pod="openshift-marketplace/certified-operators-5699q" Oct 08 17:21:18 crc kubenswrapper[4624]: I1008 17:21:18.065584 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cee06cd4-6d0b-4f4c-8734-b929607ec920-utilities\") pod \"certified-operators-5699q\" (UID: \"cee06cd4-6d0b-4f4c-8734-b929607ec920\") " pod="openshift-marketplace/certified-operators-5699q" Oct 08 17:21:18 crc kubenswrapper[4624]: I1008 17:21:18.167751 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8xjv\" (UniqueName: \"kubernetes.io/projected/cee06cd4-6d0b-4f4c-8734-b929607ec920-kube-api-access-v8xjv\") pod \"certified-operators-5699q\" (UID: \"cee06cd4-6d0b-4f4c-8734-b929607ec920\") " pod="openshift-marketplace/certified-operators-5699q" Oct 08 17:21:18 crc kubenswrapper[4624]: I1008 17:21:18.167851 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cee06cd4-6d0b-4f4c-8734-b929607ec920-catalog-content\") pod \"certified-operators-5699q\" (UID: \"cee06cd4-6d0b-4f4c-8734-b929607ec920\") " pod="openshift-marketplace/certified-operators-5699q" Oct 08 17:21:18 crc kubenswrapper[4624]: I1008 17:21:18.167873 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cee06cd4-6d0b-4f4c-8734-b929607ec920-utilities\") pod \"certified-operators-5699q\" (UID: \"cee06cd4-6d0b-4f4c-8734-b929607ec920\") " pod="openshift-marketplace/certified-operators-5699q" Oct 08 17:21:18 crc kubenswrapper[4624]: I1008 17:21:18.170458 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cee06cd4-6d0b-4f4c-8734-b929607ec920-utilities\") pod \"certified-operators-5699q\" (UID: \"cee06cd4-6d0b-4f4c-8734-b929607ec920\") " pod="openshift-marketplace/certified-operators-5699q" 
Oct 08 17:21:18 crc kubenswrapper[4624]: I1008 17:21:18.171962 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cee06cd4-6d0b-4f4c-8734-b929607ec920-catalog-content\") pod \"certified-operators-5699q\" (UID: \"cee06cd4-6d0b-4f4c-8734-b929607ec920\") " pod="openshift-marketplace/certified-operators-5699q" Oct 08 17:21:18 crc kubenswrapper[4624]: I1008 17:21:18.199476 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8xjv\" (UniqueName: \"kubernetes.io/projected/cee06cd4-6d0b-4f4c-8734-b929607ec920-kube-api-access-v8xjv\") pod \"certified-operators-5699q\" (UID: \"cee06cd4-6d0b-4f4c-8734-b929607ec920\") " pod="openshift-marketplace/certified-operators-5699q" Oct 08 17:21:18 crc kubenswrapper[4624]: I1008 17:21:18.280162 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5699q" Oct 08 17:21:19 crc kubenswrapper[4624]: I1008 17:21:19.032957 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5699q"] Oct 08 17:21:19 crc kubenswrapper[4624]: I1008 17:21:19.390501 4624 generic.go:334] "Generic (PLEG): container finished" podID="cee06cd4-6d0b-4f4c-8734-b929607ec920" containerID="c5b940ff3d66c3c993761752a6ae4c0801ff32f5e43f7926a846c73b3070b866" exitCode=0 Oct 08 17:21:19 crc kubenswrapper[4624]: I1008 17:21:19.390548 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5699q" event={"ID":"cee06cd4-6d0b-4f4c-8734-b929607ec920","Type":"ContainerDied","Data":"c5b940ff3d66c3c993761752a6ae4c0801ff32f5e43f7926a846c73b3070b866"} Oct 08 17:21:19 crc kubenswrapper[4624]: I1008 17:21:19.390576 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5699q" event={"ID":"cee06cd4-6d0b-4f4c-8734-b929607ec920","Type":"ContainerStarted","Data":"1f8b527ddd7ef223dc3436618b4d3d76920e566ce4fe613215fc92fcebbd53e7"} Oct 08 17:21:21 crc kubenswrapper[4624]: I1008 17:21:21.456665 4624 generic.go:334] "Generic (PLEG): container finished" podID="fb994804-3cd4-4414-912f-a01613418132" containerID="821d5abcd4d9a184f0e4ff620434f8a4fae4f0dec44f78753f0464c037c60fbd" exitCode=0 Oct 08 17:21:21 crc kubenswrapper[4624]: I1008 17:21:21.456751 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"fb994804-3cd4-4414-912f-a01613418132","Type":"ContainerDied","Data":"821d5abcd4d9a184f0e4ff620434f8a4fae4f0dec44f78753f0464c037c60fbd"} Oct 08 17:21:23 crc kubenswrapper[4624]: I1008 17:21:23.788265 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cnmck" Oct 08 17:21:23 crc kubenswrapper[4624]: I1008 17:21:23.846591 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cnmck" Oct 08 17:21:24 crc kubenswrapper[4624]: I1008 17:21:24.047519 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cnmck"] Oct 08 17:21:25 crc kubenswrapper[4624]: I1008 17:21:25.502983 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:21:25 crc kubenswrapper[4624]: E1008 17:21:25.503691 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:21:25 crc kubenswrapper[4624]: I1008 17:21:25.511041 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cnmck" podUID="73e2f612-5e66-4ff5-952e-f4f5123d02ea" containerName="registry-server" containerID="cri-o://49a80975b3b8235dc2669db00ca81e00d061cdd19c7ee1633aafbc265431c494" gracePeriod=2 Oct 08 17:21:25 crc kubenswrapper[4624]: I1008 17:21:25.822154 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 17:21:25 crc kubenswrapper[4624]: I1008 17:21:25.981894 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fb994804-3cd4-4414-912f-a01613418132-ca-certs\") pod \"fb994804-3cd4-4414-912f-a01613418132\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " Oct 08 17:21:25 crc kubenswrapper[4624]: I1008 17:21:25.982336 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb994804-3cd4-4414-912f-a01613418132-ssh-key\") pod \"fb994804-3cd4-4414-912f-a01613418132\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " Oct 08 17:21:25 crc kubenswrapper[4624]: I1008 17:21:25.982454 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fb994804-3cd4-4414-912f-a01613418132-test-operator-ephemeral-workdir\") pod \"fb994804-3cd4-4414-912f-a01613418132\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " Oct 08 17:21:25 crc kubenswrapper[4624]: I1008 17:21:25.982585 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fb994804-3cd4-4414-912f-a01613418132-openstack-config-secret\") pod \"fb994804-3cd4-4414-912f-a01613418132\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " Oct 08 17:21:25 crc kubenswrapper[4624]: I1008 17:21:25.982613 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb994804-3cd4-4414-912f-a01613418132-config-data\") pod \"fb994804-3cd4-4414-912f-a01613418132\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " Oct 08 17:21:25 crc kubenswrapper[4624]: I1008 17:21:25.982660 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fb994804-3cd4-4414-912f-a01613418132-test-operator-ephemeral-temporary\") pod \"fb994804-3cd4-4414-912f-a01613418132\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " Oct 08 17:21:25 crc kubenswrapper[4624]: I1008 17:21:25.982755 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"fb994804-3cd4-4414-912f-a01613418132\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " Oct 08 17:21:25 crc kubenswrapper[4624]: I1008 17:21:25.982853 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-nddjc\" (UniqueName: \"kubernetes.io/projected/fb994804-3cd4-4414-912f-a01613418132-kube-api-access-nddjc\") pod \"fb994804-3cd4-4414-912f-a01613418132\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " Oct 08 17:21:25 crc kubenswrapper[4624]: I1008 17:21:25.982941 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fb994804-3cd4-4414-912f-a01613418132-openstack-config\") pod \"fb994804-3cd4-4414-912f-a01613418132\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " Oct 08 17:21:25 crc kubenswrapper[4624]: I1008 17:21:25.986126 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb994804-3cd4-4414-912f-a01613418132-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "fb994804-3cd4-4414-912f-a01613418132" (UID: "fb994804-3cd4-4414-912f-a01613418132"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:21:25 crc kubenswrapper[4624]: I1008 17:21:25.989653 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb994804-3cd4-4414-912f-a01613418132-config-data" (OuterVolumeSpecName: "config-data") pod "fb994804-3cd4-4414-912f-a01613418132" (UID: "fb994804-3cd4-4414-912f-a01613418132"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 17:21:25 crc kubenswrapper[4624]: I1008 17:21:25.993869 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb994804-3cd4-4414-912f-a01613418132-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "fb994804-3cd4-4414-912f-a01613418132" (UID: "fb994804-3cd4-4414-912f-a01613418132"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.041988 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "fb994804-3cd4-4414-912f-a01613418132" (UID: "fb994804-3cd4-4414-912f-a01613418132"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.047593 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb994804-3cd4-4414-912f-a01613418132-kube-api-access-nddjc" (OuterVolumeSpecName: "kube-api-access-nddjc") pod "fb994804-3cd4-4414-912f-a01613418132" (UID: "fb994804-3cd4-4414-912f-a01613418132"). InnerVolumeSpecName "kube-api-access-nddjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.086093 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb994804-3cd4-4414-912f-a01613418132-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "fb994804-3cd4-4414-912f-a01613418132" (UID: "fb994804-3cd4-4414-912f-a01613418132"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.086342 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb994804-3cd4-4414-912f-a01613418132-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fb994804-3cd4-4414-912f-a01613418132" (UID: "fb994804-3cd4-4414-912f-a01613418132"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.086435 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb994804-3cd4-4414-912f-a01613418132-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "fb994804-3cd4-4414-912f-a01613418132" (UID: "fb994804-3cd4-4414-912f-a01613418132"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.160405 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb994804-3cd4-4414-912f-a01613418132-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "fb994804-3cd4-4414-912f-a01613418132" (UID: "fb994804-3cd4-4414-912f-a01613418132"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.165553 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fb994804-3cd4-4414-912f-a01613418132-openstack-config\") pod \"fb994804-3cd4-4414-912f-a01613418132\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.165666 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fb994804-3cd4-4414-912f-a01613418132-ca-certs\") pod \"fb994804-3cd4-4414-912f-a01613418132\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.165697 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb994804-3cd4-4414-912f-a01613418132-ssh-key\") pod \"fb994804-3cd4-4414-912f-a01613418132\" (UID: \"fb994804-3cd4-4414-912f-a01613418132\") " Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.166699 4624 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fb994804-3cd4-4414-912f-a01613418132-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.166722 4624 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fb994804-3cd4-4414-912f-a01613418132-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.166735 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb994804-3cd4-4414-912f-a01613418132-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.166746 4624 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fb994804-3cd4-4414-912f-a01613418132-test-operator-ephemeral-temporary\") on node \"crc\" 
DevicePath \"\"" Oct 08 17:21:26 crc kubenswrapper[4624]: W1008 17:21:26.166776 4624 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/fb994804-3cd4-4414-912f-a01613418132/volumes/kubernetes.io~secret/ca-certs Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.166805 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb994804-3cd4-4414-912f-a01613418132-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "fb994804-3cd4-4414-912f-a01613418132" (UID: "fb994804-3cd4-4414-912f-a01613418132"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 17:21:26 crc kubenswrapper[4624]: W1008 17:21:26.166988 4624 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/fb994804-3cd4-4414-912f-a01613418132/volumes/kubernetes.io~secret/ssh-key Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.167000 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb994804-3cd4-4414-912f-a01613418132-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fb994804-3cd4-4414-912f-a01613418132" (UID: "fb994804-3cd4-4414-912f-a01613418132"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 17:21:26 crc kubenswrapper[4624]: W1008 17:21:26.168356 4624 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/fb994804-3cd4-4414-912f-a01613418132/volumes/kubernetes.io~configmap/openstack-config Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.168399 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb994804-3cd4-4414-912f-a01613418132-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "fb994804-3cd4-4414-912f-a01613418132" (UID: "fb994804-3cd4-4414-912f-a01613418132"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.171182 4624 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.171216 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nddjc\" (UniqueName: \"kubernetes.io/projected/fb994804-3cd4-4414-912f-a01613418132-kube-api-access-nddjc\") on node \"crc\" DevicePath \"\"" Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.198887 4624 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.273305 4624 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fb994804-3cd4-4414-912f-a01613418132-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.273339 4624 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fb994804-3cd4-4414-912f-a01613418132-ca-certs\") on node \"crc\" DevicePath \"\"" Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.273348 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb994804-3cd4-4414-912f-a01613418132-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.375695 4624 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.523938 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.523934 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"fb994804-3cd4-4414-912f-a01613418132","Type":"ContainerDied","Data":"c18696c6e69001e8e22282075e5f5251caa264ea17a0b93fe742b339690e5067"} Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.524745 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c18696c6e69001e8e22282075e5f5251caa264ea17a0b93fe742b339690e5067" Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.591975 4624 generic.go:334] "Generic (PLEG): container finished" podID="73e2f612-5e66-4ff5-952e-f4f5123d02ea" containerID="49a80975b3b8235dc2669db00ca81e00d061cdd19c7ee1633aafbc265431c494" exitCode=0 Oct 08 17:21:26 crc kubenswrapper[4624]: I1008 17:21:26.592014 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnmck" event={"ID":"73e2f612-5e66-4ff5-952e-f4f5123d02ea","Type":"ContainerDied","Data":"49a80975b3b8235dc2669db00ca81e00d061cdd19c7ee1633aafbc265431c494"} Oct 08 17:21:27 crc kubenswrapper[4624]: I1008 17:21:27.123408 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cnmck" Oct 08 17:21:27 crc kubenswrapper[4624]: I1008 17:21:27.205769 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z9r7\" (UniqueName: \"kubernetes.io/projected/73e2f612-5e66-4ff5-952e-f4f5123d02ea-kube-api-access-7z9r7\") pod \"73e2f612-5e66-4ff5-952e-f4f5123d02ea\" (UID: \"73e2f612-5e66-4ff5-952e-f4f5123d02ea\") " Oct 08 17:21:27 crc kubenswrapper[4624]: I1008 17:21:27.205910 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73e2f612-5e66-4ff5-952e-f4f5123d02ea-catalog-content\") pod \"73e2f612-5e66-4ff5-952e-f4f5123d02ea\" (UID: \"73e2f612-5e66-4ff5-952e-f4f5123d02ea\") " Oct 08 17:21:27 crc kubenswrapper[4624]: I1008 17:21:27.206017 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73e2f612-5e66-4ff5-952e-f4f5123d02ea-utilities\") pod \"73e2f612-5e66-4ff5-952e-f4f5123d02ea\" (UID: \"73e2f612-5e66-4ff5-952e-f4f5123d02ea\") " Oct 08 17:21:27 crc kubenswrapper[4624]: I1008 17:21:27.207216 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73e2f612-5e66-4ff5-952e-f4f5123d02ea-utilities" (OuterVolumeSpecName: "utilities") pod "73e2f612-5e66-4ff5-952e-f4f5123d02ea" (UID: "73e2f612-5e66-4ff5-952e-f4f5123d02ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:21:27 crc kubenswrapper[4624]: I1008 17:21:27.295058 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73e2f612-5e66-4ff5-952e-f4f5123d02ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73e2f612-5e66-4ff5-952e-f4f5123d02ea" (UID: "73e2f612-5e66-4ff5-952e-f4f5123d02ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:21:27 crc kubenswrapper[4624]: I1008 17:21:27.308632 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73e2f612-5e66-4ff5-952e-f4f5123d02ea-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 17:21:27 crc kubenswrapper[4624]: I1008 17:21:27.308686 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73e2f612-5e66-4ff5-952e-f4f5123d02ea-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 17:21:27 crc kubenswrapper[4624]: I1008 17:21:27.613038 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5699q" event={"ID":"cee06cd4-6d0b-4f4c-8734-b929607ec920","Type":"ContainerStarted","Data":"67bd51984561d223e64a60e1db202751f8d82bb61fee95fe8d09e654e7a1712f"} Oct 08 17:21:27 crc kubenswrapper[4624]: I1008 17:21:27.616215 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cnmck" Oct 08 17:21:27 crc kubenswrapper[4624]: I1008 17:21:27.616267 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnmck" event={"ID":"73e2f612-5e66-4ff5-952e-f4f5123d02ea","Type":"ContainerDied","Data":"61265973bd9fdc519c81c02c0b5a1486ddfd535c32270500e668482ba37316bd"} Oct 08 17:21:27 crc kubenswrapper[4624]: I1008 17:21:27.616319 4624 scope.go:117] "RemoveContainer" containerID="49a80975b3b8235dc2669db00ca81e00d061cdd19c7ee1633aafbc265431c494" Oct 08 17:21:27 crc kubenswrapper[4624]: I1008 17:21:27.635174 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73e2f612-5e66-4ff5-952e-f4f5123d02ea-kube-api-access-7z9r7" (OuterVolumeSpecName: "kube-api-access-7z9r7") pod "73e2f612-5e66-4ff5-952e-f4f5123d02ea" (UID: "73e2f612-5e66-4ff5-952e-f4f5123d02ea"). InnerVolumeSpecName "kube-api-access-7z9r7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 17:21:27 crc kubenswrapper[4624]: I1008 17:21:27.717715 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z9r7\" (UniqueName: \"kubernetes.io/projected/73e2f612-5e66-4ff5-952e-f4f5123d02ea-kube-api-access-7z9r7\") on node \"crc\" DevicePath \"\"" Oct 08 17:21:27 crc kubenswrapper[4624]: I1008 17:21:27.738281 4624 scope.go:117] "RemoveContainer" containerID="9c771d7a4565df6cd50fb4bafb813d1dc21d0c9b4e820446f4a88cd202088b85" Oct 08 17:21:27 crc kubenswrapper[4624]: I1008 17:21:27.891336 4624 scope.go:117] "RemoveContainer" containerID="9fbb87eb38502a79502e9137b75a787eac82d1381504ee621ad277049efad3c6" Oct 08 17:21:27 crc kubenswrapper[4624]: I1008 17:21:27.998200 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cnmck"] Oct 08 17:21:28 crc kubenswrapper[4624]: I1008 17:21:28.009799 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cnmck"] Oct 08 17:21:28 crc kubenswrapper[4624]: I1008 17:21:28.628040 4624 generic.go:334] "Generic (PLEG): container finished" podID="cee06cd4-6d0b-4f4c-8734-b929607ec920" containerID="67bd51984561d223e64a60e1db202751f8d82bb61fee95fe8d09e654e7a1712f" exitCode=0 Oct 08 17:21:28 crc kubenswrapper[4624]: I1008 17:21:28.628147 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5699q" event={"ID":"cee06cd4-6d0b-4f4c-8734-b929607ec920","Type":"ContainerDied","Data":"67bd51984561d223e64a60e1db202751f8d82bb61fee95fe8d09e654e7a1712f"} Oct 08 17:21:29 crc kubenswrapper[4624]: I1008 17:21:29.484164 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73e2f612-5e66-4ff5-952e-f4f5123d02ea" path="/var/lib/kubelet/pods/73e2f612-5e66-4ff5-952e-f4f5123d02ea/volumes" Oct 08 17:21:30 crc kubenswrapper[4624]: I1008 17:21:30.660207 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5699q" event={"ID":"cee06cd4-6d0b-4f4c-8734-b929607ec920","Type":"ContainerStarted","Data":"735fdfaf7543facbb6cbe7b96608d4051fe805b2d45b379ef7f03ec8b8052ae7"} Oct 08 17:21:30 crc kubenswrapper[4624]: I1008 17:21:30.682393 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5699q" podStartSLOduration=3.57353894 podStartE2EDuration="13.682372308s" podCreationTimestamp="2025-10-08 17:21:17 +0000 UTC" firstStartedPulling="2025-10-08 17:21:19.392163856 +0000 UTC 
m=+10704.543098933" lastFinishedPulling="2025-10-08 17:21:29.500997214 +0000 UTC m=+10714.651932301" observedRunningTime="2025-10-08 17:21:30.679383882 +0000 UTC m=+10715.830318979" watchObservedRunningTime="2025-10-08 17:21:30.682372308 +0000 UTC m=+10715.833307385" Oct 08 17:21:37 crc kubenswrapper[4624]: I1008 17:21:37.571767 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 08 17:21:37 crc kubenswrapper[4624]: E1008 17:21:37.575087 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb994804-3cd4-4414-912f-a01613418132" containerName="tempest-tests-tempest-tests-runner" Oct 08 17:21:37 crc kubenswrapper[4624]: I1008 17:21:37.575202 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb994804-3cd4-4414-912f-a01613418132" containerName="tempest-tests-tempest-tests-runner" Oct 08 17:21:37 crc kubenswrapper[4624]: E1008 17:21:37.575292 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73e2f612-5e66-4ff5-952e-f4f5123d02ea" containerName="extract-utilities" Oct 08 17:21:37 crc kubenswrapper[4624]: I1008 17:21:37.575368 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="73e2f612-5e66-4ff5-952e-f4f5123d02ea" containerName="extract-utilities" Oct 08 17:21:37 crc kubenswrapper[4624]: E1008 17:21:37.575488 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73e2f612-5e66-4ff5-952e-f4f5123d02ea" containerName="registry-server" Oct 08 17:21:37 crc kubenswrapper[4624]: I1008 17:21:37.575562 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="73e2f612-5e66-4ff5-952e-f4f5123d02ea" containerName="registry-server" Oct 08 17:21:37 crc kubenswrapper[4624]: E1008 17:21:37.575666 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73e2f612-5e66-4ff5-952e-f4f5123d02ea" containerName="extract-content" Oct 08 17:21:37 crc kubenswrapper[4624]: I1008 17:21:37.575753 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="73e2f612-5e66-4ff5-952e-f4f5123d02ea" containerName="extract-content" Oct 08 17:21:37 crc kubenswrapper[4624]: I1008 17:21:37.576130 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="73e2f612-5e66-4ff5-952e-f4f5123d02ea" containerName="registry-server" Oct 08 17:21:37 crc kubenswrapper[4624]: I1008 17:21:37.576227 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb994804-3cd4-4414-912f-a01613418132" containerName="tempest-tests-tempest-tests-runner" Oct 08 17:21:37 crc kubenswrapper[4624]: I1008 17:21:37.577414 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 08 17:21:37 crc kubenswrapper[4624]: I1008 17:21:37.583670 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-24wbd" Oct 08 17:21:37 crc kubenswrapper[4624]: I1008 17:21:37.586057 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 08 17:21:37 crc kubenswrapper[4624]: I1008 17:21:37.662902 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7f1dbdcf-5dcf-41dd-ae17-1683a095921c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 08 17:21:37 crc kubenswrapper[4624]: I1008 17:21:37.663094 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k4qn\" (UniqueName: \"kubernetes.io/projected/7f1dbdcf-5dcf-41dd-ae17-1683a095921c-kube-api-access-9k4qn\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7f1dbdcf-5dcf-41dd-ae17-1683a095921c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 08 17:21:37 crc kubenswrapper[4624]: I1008 17:21:37.765490 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7f1dbdcf-5dcf-41dd-ae17-1683a095921c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 08 17:21:37 crc kubenswrapper[4624]: I1008 17:21:37.765913 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k4qn\" (UniqueName: \"kubernetes.io/projected/7f1dbdcf-5dcf-41dd-ae17-1683a095921c-kube-api-access-9k4qn\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7f1dbdcf-5dcf-41dd-ae17-1683a095921c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 08 17:21:37 crc kubenswrapper[4624]: I1008 17:21:37.767092 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7f1dbdcf-5dcf-41dd-ae17-1683a095921c\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 08 17:21:37 crc kubenswrapper[4624]: I1008 17:21:37.790851 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k4qn\" (UniqueName: \"kubernetes.io/projected/7f1dbdcf-5dcf-41dd-ae17-1683a095921c-kube-api-access-9k4qn\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7f1dbdcf-5dcf-41dd-ae17-1683a095921c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 08 17:21:37 crc kubenswrapper[4624]: I1008 17:21:37.794281 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7f1dbdcf-5dcf-41dd-ae17-1683a095921c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 08 17:21:37 crc 
kubenswrapper[4624]: I1008 17:21:37.905152 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 08 17:21:38 crc kubenswrapper[4624]: I1008 17:21:38.283837 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5699q" Oct 08 17:21:38 crc kubenswrapper[4624]: I1008 17:21:38.284106 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5699q" Oct 08 17:21:38 crc kubenswrapper[4624]: I1008 17:21:38.382811 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5699q" Oct 08 17:21:38 crc kubenswrapper[4624]: I1008 17:21:38.507959 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 08 17:21:38 crc kubenswrapper[4624]: I1008 17:21:38.747879 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"7f1dbdcf-5dcf-41dd-ae17-1683a095921c","Type":"ContainerStarted","Data":"6b222975c3635d9435e64c930ffdbf92871d49d727806a8a9819b6e17f48ed75"} Oct 08 17:21:38 crc kubenswrapper[4624]: I1008 17:21:38.837861 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5699q" Oct 08 17:21:38 crc kubenswrapper[4624]: I1008 17:21:38.923546 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5699q"] Oct 08 17:21:38 crc kubenswrapper[4624]: I1008 17:21:38.986363 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-blsq6"] Oct 08 17:21:38 crc kubenswrapper[4624]: I1008 17:21:38.988041 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-blsq6" podUID="bbbff273-31b6-4f25-bf0e-e90773982c9b" containerName="registry-server" containerID="cri-o://7455260eece3325676a2850e1e2bded4353dbc45efb74b05658adcf7deb7290a" gracePeriod=2 Oct 08 17:21:39 crc kubenswrapper[4624]: I1008 17:21:39.779251 4624 generic.go:334] "Generic (PLEG): container finished" podID="bbbff273-31b6-4f25-bf0e-e90773982c9b" containerID="7455260eece3325676a2850e1e2bded4353dbc45efb74b05658adcf7deb7290a" exitCode=0 Oct 08 17:21:39 crc kubenswrapper[4624]: I1008 17:21:39.779806 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blsq6" event={"ID":"bbbff273-31b6-4f25-bf0e-e90773982c9b","Type":"ContainerDied","Data":"7455260eece3325676a2850e1e2bded4353dbc45efb74b05658adcf7deb7290a"} Oct 08 17:21:39 crc kubenswrapper[4624]: I1008 17:21:39.779883 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blsq6" event={"ID":"bbbff273-31b6-4f25-bf0e-e90773982c9b","Type":"ContainerDied","Data":"bf6889a6a735ddac761acee298c935d3d0fb97fae6be488fc814cb0c19a2a0d2"} Oct 08 17:21:39 crc kubenswrapper[4624]: I1008 17:21:39.779926 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf6889a6a735ddac761acee298c935d3d0fb97fae6be488fc814cb0c19a2a0d2" Oct 08 17:21:39 crc kubenswrapper[4624]: I1008 17:21:39.861161 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-blsq6" Oct 08 17:21:40 crc kubenswrapper[4624]: I1008 17:21:40.013086 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbbff273-31b6-4f25-bf0e-e90773982c9b-utilities\") pod \"bbbff273-31b6-4f25-bf0e-e90773982c9b\" (UID: \"bbbff273-31b6-4f25-bf0e-e90773982c9b\") " Oct 08 17:21:40 crc kubenswrapper[4624]: I1008 17:21:40.013253 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svchx\" (UniqueName: \"kubernetes.io/projected/bbbff273-31b6-4f25-bf0e-e90773982c9b-kube-api-access-svchx\") pod \"bbbff273-31b6-4f25-bf0e-e90773982c9b\" (UID: \"bbbff273-31b6-4f25-bf0e-e90773982c9b\") " Oct 08 17:21:40 crc kubenswrapper[4624]: I1008 17:21:40.013579 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbbff273-31b6-4f25-bf0e-e90773982c9b-utilities" (OuterVolumeSpecName: "utilities") pod "bbbff273-31b6-4f25-bf0e-e90773982c9b" (UID: "bbbff273-31b6-4f25-bf0e-e90773982c9b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:21:40 crc kubenswrapper[4624]: I1008 17:21:40.014396 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbbff273-31b6-4f25-bf0e-e90773982c9b-catalog-content\") pod \"bbbff273-31b6-4f25-bf0e-e90773982c9b\" (UID: \"bbbff273-31b6-4f25-bf0e-e90773982c9b\") " Oct 08 17:21:40 crc kubenswrapper[4624]: I1008 17:21:40.015017 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbbff273-31b6-4f25-bf0e-e90773982c9b-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 17:21:40 crc kubenswrapper[4624]: I1008 17:21:40.051860 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbbff273-31b6-4f25-bf0e-e90773982c9b-kube-api-access-svchx" (OuterVolumeSpecName: "kube-api-access-svchx") pod "bbbff273-31b6-4f25-bf0e-e90773982c9b" (UID: "bbbff273-31b6-4f25-bf0e-e90773982c9b"). InnerVolumeSpecName "kube-api-access-svchx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 17:21:40 crc kubenswrapper[4624]: I1008 17:21:40.069935 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbbff273-31b6-4f25-bf0e-e90773982c9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbbff273-31b6-4f25-bf0e-e90773982c9b" (UID: "bbbff273-31b6-4f25-bf0e-e90773982c9b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:21:40 crc kubenswrapper[4624]: I1008 17:21:40.118049 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svchx\" (UniqueName: \"kubernetes.io/projected/bbbff273-31b6-4f25-bf0e-e90773982c9b-kube-api-access-svchx\") on node \"crc\" DevicePath \"\"" Oct 08 17:21:40 crc kubenswrapper[4624]: I1008 17:21:40.118108 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbbff273-31b6-4f25-bf0e-e90773982c9b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 17:21:40 crc kubenswrapper[4624]: I1008 17:21:40.466749 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:21:40 crc kubenswrapper[4624]: E1008 17:21:40.467816 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:21:40 crc kubenswrapper[4624]: I1008 17:21:40.788976 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-blsq6" Oct 08 17:21:40 crc kubenswrapper[4624]: I1008 17:21:40.804118 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"7f1dbdcf-5dcf-41dd-ae17-1683a095921c","Type":"ContainerStarted","Data":"e782c38a9321c3060169a18928a3fec871192ae9b13477a00e0b1b4d34922c4b"} Oct 08 17:21:40 crc kubenswrapper[4624]: I1008 17:21:40.831277 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.042906523 podStartE2EDuration="3.831244116s" podCreationTimestamp="2025-10-08 17:21:37 +0000 UTC" firstStartedPulling="2025-10-08 17:21:38.534615254 +0000 UTC m=+10723.685550331" lastFinishedPulling="2025-10-08 17:21:40.322952847 +0000 UTC m=+10725.473887924" observedRunningTime="2025-10-08 17:21:40.819894937 +0000 UTC m=+10725.970830014" watchObservedRunningTime="2025-10-08 17:21:40.831244116 +0000 UTC m=+10725.982179213" Oct 08 17:21:40 crc kubenswrapper[4624]: I1008 17:21:40.843850 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-blsq6"] Oct 08 17:21:40 crc kubenswrapper[4624]: I1008 17:21:40.851221 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-blsq6"] Oct 08 17:21:41 crc kubenswrapper[4624]: I1008 17:21:41.479836 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbbff273-31b6-4f25-bf0e-e90773982c9b" path="/var/lib/kubelet/pods/bbbff273-31b6-4f25-bf0e-e90773982c9b/volumes" Oct 08 17:21:48 crc kubenswrapper[4624]: I1008 17:21:48.263197 4624 scope.go:117] "RemoveContainer" containerID="57c91cf055eb993e2d8ebeec76b45e5ee21c90bc20422b44fe8cebb2d18bbc9c" Oct 08 17:21:48 crc kubenswrapper[4624]: I1008 17:21:48.325805 4624 scope.go:117] "RemoveContainer" containerID="55ebc53d4afbc7beed39d68a1943304d4db53be63563f3e0d9162e732f3ff3ed" Oct 08 17:21:48 crc kubenswrapper[4624]: I1008 17:21:48.380389 4624 scope.go:117] "RemoveContainer" 
containerID="7455260eece3325676a2850e1e2bded4353dbc45efb74b05658adcf7deb7290a" Oct 08 17:21:52 crc kubenswrapper[4624]: I1008 17:21:52.465553 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:21:52 crc kubenswrapper[4624]: E1008 17:21:52.466463 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:22:01 crc kubenswrapper[4624]: I1008 17:22:01.592340 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-28zx9/must-gather-xcbd8"] Oct 08 17:22:01 crc kubenswrapper[4624]: E1008 17:22:01.593417 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbbff273-31b6-4f25-bf0e-e90773982c9b" containerName="registry-server" Oct 08 17:22:01 crc kubenswrapper[4624]: I1008 17:22:01.593435 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbbff273-31b6-4f25-bf0e-e90773982c9b" containerName="registry-server" Oct 08 17:22:01 crc kubenswrapper[4624]: E1008 17:22:01.593473 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbbff273-31b6-4f25-bf0e-e90773982c9b" containerName="extract-content" Oct 08 17:22:01 crc kubenswrapper[4624]: I1008 17:22:01.593482 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbbff273-31b6-4f25-bf0e-e90773982c9b" containerName="extract-content" Oct 08 17:22:01 crc kubenswrapper[4624]: E1008 17:22:01.593515 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbbff273-31b6-4f25-bf0e-e90773982c9b" containerName="extract-utilities" Oct 08 17:22:01 crc kubenswrapper[4624]: I1008 17:22:01.593522 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbbff273-31b6-4f25-bf0e-e90773982c9b" containerName="extract-utilities" Oct 08 17:22:01 crc kubenswrapper[4624]: I1008 17:22:01.593801 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbbff273-31b6-4f25-bf0e-e90773982c9b" containerName="registry-server" Oct 08 17:22:01 crc kubenswrapper[4624]: I1008 17:22:01.601440 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-28zx9/must-gather-xcbd8" Oct 08 17:22:01 crc kubenswrapper[4624]: I1008 17:22:01.606094 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-28zx9"/"default-dockercfg-tjktf" Oct 08 17:22:01 crc kubenswrapper[4624]: I1008 17:22:01.606347 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-28zx9"/"kube-root-ca.crt" Oct 08 17:22:01 crc kubenswrapper[4624]: I1008 17:22:01.610323 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-28zx9"/"openshift-service-ca.crt" Oct 08 17:22:01 crc kubenswrapper[4624]: I1008 17:22:01.628670 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-28zx9/must-gather-xcbd8"] Oct 08 17:22:01 crc kubenswrapper[4624]: I1008 17:22:01.633190 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a-must-gather-output\") pod \"must-gather-xcbd8\" (UID: \"e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a\") " pod="openshift-must-gather-28zx9/must-gather-xcbd8" Oct 08 17:22:01 crc kubenswrapper[4624]: I1008 17:22:01.633263 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbp99\" (UniqueName: \"kubernetes.io/projected/e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a-kube-api-access-tbp99\") pod \"must-gather-xcbd8\" (UID: \"e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a\") " pod="openshift-must-gather-28zx9/must-gather-xcbd8" Oct 08 17:22:01 crc kubenswrapper[4624]: I1008 17:22:01.735727 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a-must-gather-output\") pod \"must-gather-xcbd8\" (UID: \"e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a\") " pod="openshift-must-gather-28zx9/must-gather-xcbd8" Oct 08 17:22:01 crc kubenswrapper[4624]: I1008 17:22:01.735784 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbp99\" (UniqueName: \"kubernetes.io/projected/e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a-kube-api-access-tbp99\") pod \"must-gather-xcbd8\" (UID: \"e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a\") " pod="openshift-must-gather-28zx9/must-gather-xcbd8" Oct 08 17:22:01 crc kubenswrapper[4624]: I1008 17:22:01.748114 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a-must-gather-output\") pod \"must-gather-xcbd8\" (UID: \"e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a\") " pod="openshift-must-gather-28zx9/must-gather-xcbd8" Oct 08 17:22:01 crc kubenswrapper[4624]: I1008 17:22:01.805092 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbp99\" (UniqueName: \"kubernetes.io/projected/e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a-kube-api-access-tbp99\") pod \"must-gather-xcbd8\" (UID: \"e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a\") " pod="openshift-must-gather-28zx9/must-gather-xcbd8" Oct 08 17:22:01 crc kubenswrapper[4624]: I1008 17:22:01.929989 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-28zx9/must-gather-xcbd8" Oct 08 17:22:02 crc kubenswrapper[4624]: I1008 17:22:02.961225 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-28zx9/must-gather-xcbd8"] Oct 08 17:22:03 crc kubenswrapper[4624]: I1008 17:22:03.025110 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-28zx9/must-gather-xcbd8" event={"ID":"e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a","Type":"ContainerStarted","Data":"7975ec00df6de689fed536e74e0ed01e920222b66497bffd192edd8af7d5ed84"} Oct 08 17:22:05 crc kubenswrapper[4624]: I1008 17:22:05.477347 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:22:05 crc kubenswrapper[4624]: E1008 17:22:05.480218 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:22:09 crc kubenswrapper[4624]: I1008 17:22:09.101950 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-28zx9/must-gather-xcbd8" event={"ID":"e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a","Type":"ContainerStarted","Data":"400e3a028906a63e65a9968fcc26a571c6955c89305f223c2085f0049f6af394"} Oct 08 17:22:09 crc kubenswrapper[4624]: I1008 17:22:09.102419 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-28zx9/must-gather-xcbd8" event={"ID":"e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a","Type":"ContainerStarted","Data":"98aec9a3dd07ec358a5d19d3b3de1fc690fb791a5c98493de175fcfdd26fdf37"} Oct 08 17:22:09 crc kubenswrapper[4624]: I1008 17:22:09.123386 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-28zx9/must-gather-xcbd8" podStartSLOduration=3.072546104 podStartE2EDuration="8.123357206s" podCreationTimestamp="2025-10-08 17:22:01 +0000 UTC" firstStartedPulling="2025-10-08 17:22:02.982369316 +0000 UTC m=+10748.133304403" lastFinishedPulling="2025-10-08 17:22:08.033180418 +0000 UTC m=+10753.184115505" observedRunningTime="2025-10-08 17:22:09.120127384 +0000 UTC m=+10754.271062461" watchObservedRunningTime="2025-10-08 17:22:09.123357206 +0000 UTC m=+10754.274292283" Oct 08 17:22:15 crc kubenswrapper[4624]: I1008 17:22:15.510016 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-28zx9/crc-debug-hsq28"] Oct 08 17:22:15 crc kubenswrapper[4624]: I1008 17:22:15.513685 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-28zx9/crc-debug-hsq28" Oct 08 17:22:15 crc kubenswrapper[4624]: I1008 17:22:15.580321 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b-host\") pod \"crc-debug-hsq28\" (UID: \"2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b\") " pod="openshift-must-gather-28zx9/crc-debug-hsq28" Oct 08 17:22:15 crc kubenswrapper[4624]: I1008 17:22:15.580374 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhgh2\" (UniqueName: \"kubernetes.io/projected/2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b-kube-api-access-fhgh2\") pod \"crc-debug-hsq28\" (UID: \"2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b\") " pod="openshift-must-gather-28zx9/crc-debug-hsq28" Oct 08 17:22:15 crc kubenswrapper[4624]: I1008 17:22:15.683250 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b-host\") pod \"crc-debug-hsq28\" (UID: \"2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b\") " pod="openshift-must-gather-28zx9/crc-debug-hsq28" Oct 08 17:22:15 crc kubenswrapper[4624]: I1008 17:22:15.683620 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhgh2\" (UniqueName: \"kubernetes.io/projected/2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b-kube-api-access-fhgh2\") pod \"crc-debug-hsq28\" (UID: \"2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b\") " pod="openshift-must-gather-28zx9/crc-debug-hsq28" Oct 08 17:22:15 crc kubenswrapper[4624]: I1008 17:22:15.684445 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b-host\") pod \"crc-debug-hsq28\" (UID: \"2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b\") " pod="openshift-must-gather-28zx9/crc-debug-hsq28" Oct 08 17:22:15 crc kubenswrapper[4624]: I1008 17:22:15.726546 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhgh2\" (UniqueName: \"kubernetes.io/projected/2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b-kube-api-access-fhgh2\") pod \"crc-debug-hsq28\" (UID: \"2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b\") " pod="openshift-must-gather-28zx9/crc-debug-hsq28" Oct 08 17:22:15 crc kubenswrapper[4624]: I1008 17:22:15.843026 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-28zx9/crc-debug-hsq28" Oct 08 17:22:16 crc kubenswrapper[4624]: I1008 17:22:16.167374 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-28zx9/crc-debug-hsq28" event={"ID":"2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b","Type":"ContainerStarted","Data":"c6e23cc6132f574ed12b8779327020dbadc7298581080434c8e52f39b9eeed4a"} Oct 08 17:22:18 crc kubenswrapper[4624]: I1008 17:22:18.466475 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:22:18 crc kubenswrapper[4624]: E1008 17:22:18.466996 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:22:29 crc kubenswrapper[4624]: I1008 17:22:29.315892 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-28zx9/crc-debug-hsq28" event={"ID":"2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b","Type":"ContainerStarted","Data":"d3831331bdaf5b20e0d9e7c67a6459ddde5feb4930861907341b55b2da33d66e"} Oct 08 17:22:31 crc kubenswrapper[4624]: I1008 17:22:31.466344 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:22:31 crc kubenswrapper[4624]: E1008 17:22:31.467096 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:22:44 crc kubenswrapper[4624]: I1008 17:22:44.466114 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:22:44 crc kubenswrapper[4624]: E1008 17:22:44.467079 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:22:55 crc kubenswrapper[4624]: I1008 17:22:55.531100 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:22:55 crc kubenswrapper[4624]: E1008 17:22:55.532398 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:23:07 crc kubenswrapper[4624]: I1008 17:23:07.466039 4624 scope.go:117] "RemoveContainer" 
containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:23:07 crc kubenswrapper[4624]: E1008 17:23:07.466890 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:23:20 crc kubenswrapper[4624]: I1008 17:23:20.465776 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:23:20 crc kubenswrapper[4624]: E1008 17:23:20.466608 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:23:32 crc kubenswrapper[4624]: I1008 17:23:32.467395 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:23:32 crc kubenswrapper[4624]: E1008 17:23:32.468538 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:23:44 crc kubenswrapper[4624]: I1008 17:23:44.468075 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:23:44 crc kubenswrapper[4624]: E1008 17:23:44.468684 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:23:44 crc kubenswrapper[4624]: I1008 17:23:44.674554 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-66c67d8-rjnw7_12adc423-1b55-4b56-85a3-32e2aabbc82d/barbican-api/0.log" Oct 08 17:23:44 crc kubenswrapper[4624]: I1008 17:23:44.689982 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-66c67d8-rjnw7_12adc423-1b55-4b56-85a3-32e2aabbc82d/barbican-api-log/0.log" Oct 08 17:23:44 crc kubenswrapper[4624]: I1008 17:23:44.933309 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-db88bb74-6vl6r_255b203d-1921-40ab-8c4f-7f582a647651/barbican-keystone-listener/0.log" Oct 08 17:23:45 crc kubenswrapper[4624]: I1008 17:23:45.223116 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-db88bb74-6vl6r_255b203d-1921-40ab-8c4f-7f582a647651/barbican-keystone-listener-log/0.log" Oct 08 17:23:45 
crc kubenswrapper[4624]: I1008 17:23:45.382368 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-769b9cd88f-r425v_2051dd96-3f4a-42b1-9802-602bd9693aec/barbican-worker/0.log" Oct 08 17:23:45 crc kubenswrapper[4624]: I1008 17:23:45.537711 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-769b9cd88f-r425v_2051dd96-3f4a-42b1-9802-602bd9693aec/barbican-worker-log/0.log" Oct 08 17:23:45 crc kubenswrapper[4624]: I1008 17:23:45.678702 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf_f0b620ad-209a-49a8-90cd-f4780a2565a3/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:23:45 crc kubenswrapper[4624]: I1008 17:23:45.975442 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5bfc80be-dbc0-42c4-b493-3b5747d4ccb8/ceilometer-central-agent/0.log" Oct 08 17:23:46 crc kubenswrapper[4624]: I1008 17:23:46.037870 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5bfc80be-dbc0-42c4-b493-3b5747d4ccb8/ceilometer-notification-agent/0.log" Oct 08 17:23:46 crc kubenswrapper[4624]: I1008 17:23:46.166963 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5bfc80be-dbc0-42c4-b493-3b5747d4ccb8/proxy-httpd/0.log" Oct 08 17:23:46 crc kubenswrapper[4624]: I1008 17:23:46.275864 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5bfc80be-dbc0-42c4-b493-3b5747d4ccb8/sg-core/0.log" Oct 08 17:23:46 crc kubenswrapper[4624]: I1008 17:23:46.599816 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5d6f1635-4c52-4761-a2c7-38951659c26e/cinder-api/0.log" Oct 08 17:23:46 crc kubenswrapper[4624]: I1008 17:23:46.612699 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5d6f1635-4c52-4761-a2c7-38951659c26e/cinder-api-log/0.log" Oct 08 17:23:46 crc kubenswrapper[4624]: I1008 17:23:46.953680 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5a500ae8-578e-4045-8bfb-0a658340dc09/cinder-scheduler/0.log" Oct 08 17:23:47 crc kubenswrapper[4624]: I1008 17:23:47.029610 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5a500ae8-578e-4045-8bfb-0a658340dc09/probe/0.log" Oct 08 17:23:47 crc kubenswrapper[4624]: I1008 17:23:47.353467 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl_968537d8-1190-479e-a4cc-92054923d08a/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:23:47 crc kubenswrapper[4624]: I1008 17:23:47.428818 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp_b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:23:47 crc kubenswrapper[4624]: I1008 17:23:47.665814 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-f47x7_5f5a0d83-3c47-439d-9d82-12e0c8afdf45/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:23:47 crc kubenswrapper[4624]: I1008 17:23:47.885751 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-54695ff68c-h4xsn_ca4fad93-41d0-4e9a-8033-fa3ff1a14769/init/0.log" Oct 08 17:23:48 crc kubenswrapper[4624]: I1008 
17:23:48.414949 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-54695ff68c-h4xsn_ca4fad93-41d0-4e9a-8033-fa3ff1a14769/init/0.log" Oct 08 17:23:48 crc kubenswrapper[4624]: I1008 17:23:48.516004 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-54695ff68c-h4xsn_ca4fad93-41d0-4e9a-8033-fa3ff1a14769/dnsmasq-dns/0.log" Oct 08 17:23:48 crc kubenswrapper[4624]: I1008 17:23:48.586733 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-z8msh_b35b91af-b986-47f5-a444-bc20763e34ed/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:23:48 crc kubenswrapper[4624]: I1008 17:23:48.704794 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5719b9ea-1496-4097-b86f-39e516f37a0d/glance-httpd/0.log" Oct 08 17:23:48 crc kubenswrapper[4624]: I1008 17:23:48.868939 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5719b9ea-1496-4097-b86f-39e516f37a0d/glance-log/0.log" Oct 08 17:23:49 crc kubenswrapper[4624]: I1008 17:23:49.029008 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1/glance-httpd/0.log" Oct 08 17:23:49 crc kubenswrapper[4624]: I1008 17:23:49.048100 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1/glance-log/0.log" Oct 08 17:23:50 crc kubenswrapper[4624]: I1008 17:23:50.220706 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7674ff4df6-6crwz_93cce62f-6d52-4afd-aa59-e2adac63a30f/heat-engine/0.log" Oct 08 17:23:50 crc kubenswrapper[4624]: I1008 17:23:50.788266 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-67f45f8444-g8bbs_378be2ad-3335-409f-b2eb-60b3997ed4f8/horizon/2.log" Oct 08 17:23:50 crc kubenswrapper[4624]: I1008 17:23:50.873171 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-67f45f8444-g8bbs_378be2ad-3335-409f-b2eb-60b3997ed4f8/horizon/1.log" Oct 08 17:23:50 crc kubenswrapper[4624]: I1008 17:23:50.874450 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-79db6c47d5-q6dxb_a8494534-9935-4f78-9571-b03ff870b8ac/heat-api/0.log" Oct 08 17:23:51 crc kubenswrapper[4624]: I1008 17:23:51.187618 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-c8c76b4d4-k9vfh_bcf01908-e783-4491-8047-ef1053a2b87b/heat-cfnapi/0.log" Oct 08 17:23:51 crc kubenswrapper[4624]: I1008 17:23:51.234000 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p_35e4f3c9-b784-4780-bf62-c44be287ffef/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:23:51 crc kubenswrapper[4624]: I1008 17:23:51.366974 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-28zx9/crc-debug-hsq28" podStartSLOduration=83.566998716 podStartE2EDuration="1m36.362847717s" podCreationTimestamp="2025-10-08 17:22:15 +0000 UTC" firstStartedPulling="2025-10-08 17:22:15.90998921 +0000 UTC m=+10761.060924287" lastFinishedPulling="2025-10-08 17:22:28.705838211 +0000 UTC m=+10773.856773288" observedRunningTime="2025-10-08 17:22:29.330038209 +0000 UTC m=+10774.480973286" watchObservedRunningTime="2025-10-08 17:23:51.362847717 
+0000 UTC m=+10856.513782794" Oct 08 17:23:51 crc kubenswrapper[4624]: I1008 17:23:51.408146 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q9dtc"] Oct 08 17:23:51 crc kubenswrapper[4624]: I1008 17:23:51.434069 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q9dtc" Oct 08 17:23:51 crc kubenswrapper[4624]: I1008 17:23:51.583784 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q9dtc"] Oct 08 17:23:51 crc kubenswrapper[4624]: I1008 17:23:51.622692 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-262dn\" (UniqueName: \"kubernetes.io/projected/eb8ba03a-f381-40cb-ab81-2787284c2a56-kube-api-access-262dn\") pod \"redhat-marketplace-q9dtc\" (UID: \"eb8ba03a-f381-40cb-ab81-2787284c2a56\") " pod="openshift-marketplace/redhat-marketplace-q9dtc" Oct 08 17:23:51 crc kubenswrapper[4624]: I1008 17:23:51.623335 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8ba03a-f381-40cb-ab81-2787284c2a56-catalog-content\") pod \"redhat-marketplace-q9dtc\" (UID: \"eb8ba03a-f381-40cb-ab81-2787284c2a56\") " pod="openshift-marketplace/redhat-marketplace-q9dtc" Oct 08 17:23:51 crc kubenswrapper[4624]: I1008 17:23:51.623412 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8ba03a-f381-40cb-ab81-2787284c2a56-utilities\") pod \"redhat-marketplace-q9dtc\" (UID: \"eb8ba03a-f381-40cb-ab81-2787284c2a56\") " pod="openshift-marketplace/redhat-marketplace-q9dtc" Oct 08 17:23:51 crc kubenswrapper[4624]: I1008 17:23:51.673379 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-fmk57_43155da2-e389-481e-8e9d-8c219482ba50/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:23:51 crc kubenswrapper[4624]: I1008 17:23:51.725858 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-262dn\" (UniqueName: \"kubernetes.io/projected/eb8ba03a-f381-40cb-ab81-2787284c2a56-kube-api-access-262dn\") pod \"redhat-marketplace-q9dtc\" (UID: \"eb8ba03a-f381-40cb-ab81-2787284c2a56\") " pod="openshift-marketplace/redhat-marketplace-q9dtc" Oct 08 17:23:51 crc kubenswrapper[4624]: I1008 17:23:51.726154 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8ba03a-f381-40cb-ab81-2787284c2a56-catalog-content\") pod \"redhat-marketplace-q9dtc\" (UID: \"eb8ba03a-f381-40cb-ab81-2787284c2a56\") " pod="openshift-marketplace/redhat-marketplace-q9dtc" Oct 08 17:23:51 crc kubenswrapper[4624]: I1008 17:23:51.726883 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8ba03a-f381-40cb-ab81-2787284c2a56-utilities\") pod \"redhat-marketplace-q9dtc\" (UID: \"eb8ba03a-f381-40cb-ab81-2787284c2a56\") " pod="openshift-marketplace/redhat-marketplace-q9dtc" Oct 08 17:23:51 crc kubenswrapper[4624]: I1008 17:23:51.728339 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8ba03a-f381-40cb-ab81-2787284c2a56-utilities\") pod \"redhat-marketplace-q9dtc\" (UID: 
\"eb8ba03a-f381-40cb-ab81-2787284c2a56\") " pod="openshift-marketplace/redhat-marketplace-q9dtc" Oct 08 17:23:51 crc kubenswrapper[4624]: I1008 17:23:51.733559 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8ba03a-f381-40cb-ab81-2787284c2a56-catalog-content\") pod \"redhat-marketplace-q9dtc\" (UID: \"eb8ba03a-f381-40cb-ab81-2787284c2a56\") " pod="openshift-marketplace/redhat-marketplace-q9dtc" Oct 08 17:23:51 crc kubenswrapper[4624]: I1008 17:23:51.813323 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-262dn\" (UniqueName: \"kubernetes.io/projected/eb8ba03a-f381-40cb-ab81-2787284c2a56-kube-api-access-262dn\") pod \"redhat-marketplace-q9dtc\" (UID: \"eb8ba03a-f381-40cb-ab81-2787284c2a56\") " pod="openshift-marketplace/redhat-marketplace-q9dtc" Oct 08 17:23:51 crc kubenswrapper[4624]: I1008 17:23:51.828363 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q9dtc" Oct 08 17:23:52 crc kubenswrapper[4624]: I1008 17:23:52.172504 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29332261-glg7h_f247107e-e524-46ed-891c-4ceae5377acd/keystone-cron/0.log" Oct 08 17:23:52 crc kubenswrapper[4624]: I1008 17:23:52.470798 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29332321-7k55z_826b0aa9-5c21-4e34-ac07-66eb07b77464/keystone-cron/0.log" Oct 08 17:23:52 crc kubenswrapper[4624]: I1008 17:23:52.754392 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29332381-bs9mx_b977380c-013b-4784-81b2-1387f688506c/keystone-cron/0.log" Oct 08 17:23:52 crc kubenswrapper[4624]: I1008 17:23:52.853302 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-67f45f8444-g8bbs_378be2ad-3335-409f-b2eb-60b3997ed4f8/horizon-log/0.log" Oct 08 17:23:53 crc kubenswrapper[4624]: I1008 17:23:53.101222 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_ee24950a-af9c-4e5f-ab36-66c3c5a9cf66/kube-state-metrics/0.log" Oct 08 17:23:53 crc kubenswrapper[4624]: I1008 17:23:53.420854 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v_43842ce6-3b52-41bc-ab12-56e722de00d1/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:23:53 crc kubenswrapper[4624]: I1008 17:23:53.579803 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-65f89d8d74-ng4cv_8c703134-b38a-414e-8c09-5702aa32a638/keystone-api/0.log" Oct 08 17:23:54 crc kubenswrapper[4624]: I1008 17:23:54.265360 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-857d49cc6c-6fh82_ba09f7ef-9520-4917-9fb7-642e8fb51be1/neutron-httpd/0.log" Oct 08 17:23:54 crc kubenswrapper[4624]: I1008 17:23:54.423131 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q9dtc"] Oct 08 17:23:54 crc kubenswrapper[4624]: I1008 17:23:54.598228 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x_95840382-8211-4637-92f6-8316e3e751c6/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:23:55 crc kubenswrapper[4624]: I1008 17:23:55.224416 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9dtc" 
event={"ID":"eb8ba03a-f381-40cb-ab81-2787284c2a56","Type":"ContainerDied","Data":"fd9784db7ad8b934aaf0a0eaffd13af80d4d996b74a06b109e3408380a6496ea"} Oct 08 17:23:55 crc kubenswrapper[4624]: I1008 17:23:55.225519 4624 generic.go:334] "Generic (PLEG): container finished" podID="eb8ba03a-f381-40cb-ab81-2787284c2a56" containerID="fd9784db7ad8b934aaf0a0eaffd13af80d4d996b74a06b109e3408380a6496ea" exitCode=0 Oct 08 17:23:55 crc kubenswrapper[4624]: I1008 17:23:55.225986 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9dtc" event={"ID":"eb8ba03a-f381-40cb-ab81-2787284c2a56","Type":"ContainerStarted","Data":"92168323cff4c485e5cd6c89ba03ee9ebc4ecc2029c799606f25624173abd329"} Oct 08 17:23:55 crc kubenswrapper[4624]: I1008 17:23:55.567742 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:23:55 crc kubenswrapper[4624]: E1008 17:23:55.568060 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:23:55 crc kubenswrapper[4624]: I1008 17:23:55.760831 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-857d49cc6c-6fh82_ba09f7ef-9520-4917-9fb7-642e8fb51be1/neutron-api/0.log" Oct 08 17:23:56 crc kubenswrapper[4624]: I1008 17:23:56.947686 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_cd75368d-b8a0-41e9-8e92-f5bcc7e91fb6/nova-cell0-conductor-conductor/0.log" Oct 08 17:23:57 crc kubenswrapper[4624]: I1008 17:23:57.737974 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_5b3bb7e0-38ab-4767-98ed-c1a79a46851f/nova-cell1-conductor-conductor/0.log" Oct 08 17:23:58 crc kubenswrapper[4624]: I1008 17:23:58.269306 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9dtc" event={"ID":"eb8ba03a-f381-40cb-ab81-2787284c2a56","Type":"ContainerStarted","Data":"ede7d7a59555a4d9c3045ec3d2f566cdcc238cc85ef2575356e4bae167cb0c84"} Oct 08 17:23:58 crc kubenswrapper[4624]: I1008 17:23:58.412937 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7cee7d40-61a2-4b4b-87d9-531196e95a8d/nova-api-log/0.log" Oct 08 17:23:58 crc kubenswrapper[4624]: I1008 17:23:58.674664 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_166fd0ae-7c08-4abf-aad9-ec8c11629078/nova-cell1-novncproxy-novncproxy/0.log" Oct 08 17:23:58 crc kubenswrapper[4624]: E1008 17:23:58.763228 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb8ba03a_f381_40cb_ab81_2787284c2a56.slice/crio-conmon-ede7d7a59555a4d9c3045ec3d2f566cdcc238cc85ef2575356e4bae167cb0c84.scope\": RecentStats: unable to find data in memory cache]" Oct 08 17:23:59 crc kubenswrapper[4624]: I1008 17:23:59.024981 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-58j7v_ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b/nova-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 
17:23:59 crc kubenswrapper[4624]: I1008 17:23:59.326114 4624 generic.go:334] "Generic (PLEG): container finished" podID="eb8ba03a-f381-40cb-ab81-2787284c2a56" containerID="ede7d7a59555a4d9c3045ec3d2f566cdcc238cc85ef2575356e4bae167cb0c84" exitCode=0 Oct 08 17:23:59 crc kubenswrapper[4624]: I1008 17:23:59.326172 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9dtc" event={"ID":"eb8ba03a-f381-40cb-ab81-2787284c2a56","Type":"ContainerDied","Data":"ede7d7a59555a4d9c3045ec3d2f566cdcc238cc85ef2575356e4bae167cb0c84"} Oct 08 17:23:59 crc kubenswrapper[4624]: I1008 17:23:59.486823 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cf362794-4a7e-483b-814b-d73b53e9f28f/nova-metadata-log/0.log" Oct 08 17:24:00 crc kubenswrapper[4624]: I1008 17:24:00.018991 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7cee7d40-61a2-4b4b-87d9-531196e95a8d/nova-api-api/0.log" Oct 08 17:24:00 crc kubenswrapper[4624]: I1008 17:24:00.618347 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_91c69013-9ea8-41d8-a439-c85e7ab45e06/mysql-bootstrap/0.log" Oct 08 17:24:00 crc kubenswrapper[4624]: I1008 17:24:00.989769 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_91c69013-9ea8-41d8-a439-c85e7ab45e06/mysql-bootstrap/0.log" Oct 08 17:24:01 crc kubenswrapper[4624]: I1008 17:24:01.230813 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_91c69013-9ea8-41d8-a439-c85e7ab45e06/galera/0.log" Oct 08 17:24:01 crc kubenswrapper[4624]: I1008 17:24:01.641559 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e7163bb9-301b-4539-ae0d-099caa9bd36b/nova-scheduler-scheduler/0.log" Oct 08 17:24:01 crc kubenswrapper[4624]: I1008 17:24:01.828788 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_aeedef5f-f3c5-41a3-9a36-bc3830eb12c7/mysql-bootstrap/0.log" Oct 08 17:24:02 crc kubenswrapper[4624]: I1008 17:24:02.111990 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_aeedef5f-f3c5-41a3-9a36-bc3830eb12c7/mysql-bootstrap/0.log" Oct 08 17:24:02 crc kubenswrapper[4624]: I1008 17:24:02.216111 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_aeedef5f-f3c5-41a3-9a36-bc3830eb12c7/galera/0.log" Oct 08 17:24:02 crc kubenswrapper[4624]: I1008 17:24:02.366003 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9dtc" event={"ID":"eb8ba03a-f381-40cb-ab81-2787284c2a56","Type":"ContainerStarted","Data":"ffd4aba33b26b449b02649240a1893d7b1c5b94eb6be0c8afdbde604a4db4e8e"} Oct 08 17:24:02 crc kubenswrapper[4624]: I1008 17:24:02.393071 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q9dtc" podStartSLOduration=5.956941491 podStartE2EDuration="11.393049614s" podCreationTimestamp="2025-10-08 17:23:51 +0000 UTC" firstStartedPulling="2025-10-08 17:23:56.236618381 +0000 UTC m=+10861.387553458" lastFinishedPulling="2025-10-08 17:24:01.672726494 +0000 UTC m=+10866.823661581" observedRunningTime="2025-10-08 17:24:02.390417497 +0000 UTC m=+10867.541352584" watchObservedRunningTime="2025-10-08 17:24:02.393049614 +0000 UTC m=+10867.543984691" Oct 08 17:24:02 crc kubenswrapper[4624]: I1008 17:24:02.540655 4624 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c515985c-9b57-4136-bf01-b872e9caaec9/openstackclient/0.log" Oct 08 17:24:02 crc kubenswrapper[4624]: I1008 17:24:02.889731 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-c4zfm_c5312bac-042b-48c5-bf82-1f565e25f11e/ovn-controller/0.log" Oct 08 17:24:03 crc kubenswrapper[4624]: I1008 17:24:03.239573 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-bm82v_0eec343f-e477-47d8-b651-2f5a2a944895/openstack-network-exporter/0.log" Oct 08 17:24:03 crc kubenswrapper[4624]: I1008 17:24:03.593859 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jhpjx_72a8ef8e-f40d-4206-af95-2f636093ed51/ovsdb-server-init/0.log" Oct 08 17:24:03 crc kubenswrapper[4624]: I1008 17:24:03.934458 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jhpjx_72a8ef8e-f40d-4206-af95-2f636093ed51/ovs-vswitchd/0.log" Oct 08 17:24:03 crc kubenswrapper[4624]: I1008 17:24:03.949760 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jhpjx_72a8ef8e-f40d-4206-af95-2f636093ed51/ovsdb-server-init/0.log" Oct 08 17:24:04 crc kubenswrapper[4624]: I1008 17:24:04.299045 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jhpjx_72a8ef8e-f40d-4206-af95-2f636093ed51/ovsdb-server/0.log" Oct 08 17:24:04 crc kubenswrapper[4624]: I1008 17:24:04.589229 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-t84hz_d6b31442-c459-4d7e-b828-90ffe6a2eda5/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:24:04 crc kubenswrapper[4624]: I1008 17:24:04.910068 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_226458e6-33a0-4123-8aaf-b3950a30d1c9/openstack-network-exporter/0.log" Oct 08 17:24:04 crc kubenswrapper[4624]: I1008 17:24:04.923054 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_226458e6-33a0-4123-8aaf-b3950a30d1c9/ovn-northd/0.log" Oct 08 17:24:05 crc kubenswrapper[4624]: I1008 17:24:05.069033 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_5c8ff67a-4da2-47d4-9f73-d7842cdf2712/memcached/0.log" Oct 08 17:24:05 crc kubenswrapper[4624]: I1008 17:24:05.264734 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_85e0be3f-32a4-42c9-9fe5-f3bfca740477/openstack-network-exporter/0.log" Oct 08 17:24:05 crc kubenswrapper[4624]: I1008 17:24:05.375879 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_85e0be3f-32a4-42c9-9fe5-f3bfca740477/ovsdbserver-nb/0.log" Oct 08 17:24:05 crc kubenswrapper[4624]: I1008 17:24:05.429889 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cf362794-4a7e-483b-814b-d73b53e9f28f/nova-metadata-metadata/0.log" Oct 08 17:24:05 crc kubenswrapper[4624]: I1008 17:24:05.541971 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2799503e-5a0b-4631-9946-f335b8446b53/openstack-network-exporter/0.log" Oct 08 17:24:05 crc kubenswrapper[4624]: I1008 17:24:05.640322 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2799503e-5a0b-4631-9946-f335b8446b53/ovsdbserver-sb/0.log" Oct 08 17:24:06 crc kubenswrapper[4624]: I1008 17:24:06.231880 4624 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0795aa07-68f2-4a23-b388-1237f212f537/setup-container/0.log" Oct 08 17:24:06 crc kubenswrapper[4624]: I1008 17:24:06.268733 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7f4595c6f6-t4nns_cfd45b9a-e3a8-4f53-92bb-4e4dc0580365/placement-api/0.log" Oct 08 17:24:06 crc kubenswrapper[4624]: I1008 17:24:06.561347 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7f4595c6f6-t4nns_cfd45b9a-e3a8-4f53-92bb-4e4dc0580365/placement-log/0.log" Oct 08 17:24:06 crc kubenswrapper[4624]: I1008 17:24:06.606669 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0795aa07-68f2-4a23-b388-1237f212f537/setup-container/0.log" Oct 08 17:24:06 crc kubenswrapper[4624]: I1008 17:24:06.614377 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0795aa07-68f2-4a23-b388-1237f212f537/rabbitmq/0.log" Oct 08 17:24:06 crc kubenswrapper[4624]: I1008 17:24:06.787010 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9da75813-8748-41a3-8bea-bc7987ccc7a5/setup-container/0.log" Oct 08 17:24:07 crc kubenswrapper[4624]: I1008 17:24:07.002387 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9da75813-8748-41a3-8bea-bc7987ccc7a5/setup-container/0.log" Oct 08 17:24:07 crc kubenswrapper[4624]: I1008 17:24:07.006495 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9da75813-8748-41a3-8bea-bc7987ccc7a5/rabbitmq/0.log" Oct 08 17:24:07 crc kubenswrapper[4624]: I1008 17:24:07.079838 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz_9a215cb1-d735-42d4-9cc9-698fa1a61508/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:24:07 crc kubenswrapper[4624]: I1008 17:24:07.235058 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-9rfls_eda4ead7-208d-4aed-9f74-ef58b401d591/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:24:07 crc kubenswrapper[4624]: I1008 17:24:07.346727 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8_89f05ed9-3980-46e8-96b7-ef08d01f09ee/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:24:07 crc kubenswrapper[4624]: I1008 17:24:07.618018 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-n95kc_054c88fe-e5ae-4274-b497-a3e583b40594/ssh-known-hosts-edpm-deployment/0.log" Oct 08 17:24:07 crc kubenswrapper[4624]: I1008 17:24:07.624820 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-hcl7k_e856333d-f205-4d9a-881c-5b3364b5ddb5/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:24:07 crc kubenswrapper[4624]: I1008 17:24:07.912416 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-59d86bf959-vq2ld_d7c08e42-5aca-4394-952c-5649ba096a8f/proxy-server/0.log" Oct 08 17:24:08 crc kubenswrapper[4624]: I1008 17:24:08.132713 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-v4bjq_d112c8ce-f2c4-43a1-9ae8-e155473d5831/swift-ring-rebalance/0.log" Oct 08 17:24:08 crc kubenswrapper[4624]: I1008 17:24:08.212364 4624 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_swift-proxy-59d86bf959-vq2ld_d7c08e42-5aca-4394-952c-5649ba096a8f/proxy-httpd/0.log" Oct 08 17:24:08 crc kubenswrapper[4624]: I1008 17:24:08.465705 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:24:08 crc kubenswrapper[4624]: I1008 17:24:08.605144 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/account-auditor/0.log" Oct 08 17:24:08 crc kubenswrapper[4624]: I1008 17:24:08.715997 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/account-reaper/0.log" Oct 08 17:24:08 crc kubenswrapper[4624]: I1008 17:24:08.879627 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/account-server/0.log" Oct 08 17:24:08 crc kubenswrapper[4624]: I1008 17:24:08.905147 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/account-replicator/0.log" Oct 08 17:24:08 crc kubenswrapper[4624]: I1008 17:24:08.922715 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/container-auditor/0.log" Oct 08 17:24:09 crc kubenswrapper[4624]: I1008 17:24:09.031980 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/container-replicator/0.log" Oct 08 17:24:09 crc kubenswrapper[4624]: I1008 17:24:09.182761 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/container-server/0.log" Oct 08 17:24:09 crc kubenswrapper[4624]: I1008 17:24:09.267622 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/container-updater/0.log" Oct 08 17:24:09 crc kubenswrapper[4624]: I1008 17:24:09.347951 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/object-auditor/0.log" Oct 08 17:24:09 crc kubenswrapper[4624]: I1008 17:24:09.436550 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"cdc8e4c2153c0bad978a82ddbd7fd3a6946447cc4f4f7ca7442f9de17e890a9c"} Oct 08 17:24:09 crc kubenswrapper[4624]: I1008 17:24:09.586261 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/object-expirer/0.log" Oct 08 17:24:09 crc kubenswrapper[4624]: I1008 17:24:09.630590 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/object-replicator/0.log" Oct 08 17:24:09 crc kubenswrapper[4624]: I1008 17:24:09.847604 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/object-updater/0.log" Oct 08 17:24:09 crc kubenswrapper[4624]: I1008 17:24:09.876477 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/object-server/0.log" Oct 08 17:24:09 crc kubenswrapper[4624]: I1008 17:24:09.998218 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/rsync/0.log" Oct 08 17:24:10 crc kubenswrapper[4624]: I1008 17:24:10.008742 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/swift-recon-cron/0.log" Oct 08 17:24:10 crc kubenswrapper[4624]: I1008 17:24:10.141105 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-lccvq_9974fb02-7840-402c-af16-db4392849c73/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:24:10 crc kubenswrapper[4624]: I1008 17:24:10.443841 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s00-multi-thread-testing_391ff9a0-631c-4520-a9f9-80fda37e32a1/tempest-tests-tempest-tests-runner/0.log" Oct 08 17:24:10 crc kubenswrapper[4624]: I1008 17:24:10.453046 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s01-single-thread-testing_fb994804-3cd4-4414-912f-a01613418132/tempest-tests-tempest-tests-runner/0.log" Oct 08 17:24:10 crc kubenswrapper[4624]: I1008 17:24:10.787915 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-cbght_ceac8d98-9a63-4c7d-876b-8d7e4acf59c4/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:24:10 crc kubenswrapper[4624]: I1008 17:24:10.804589 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_7f1dbdcf-5dcf-41dd-ae17-1683a095921c/test-operator-logs-container/0.log" Oct 08 17:24:11 crc kubenswrapper[4624]: I1008 17:24:11.828772 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q9dtc" Oct 08 17:24:11 crc kubenswrapper[4624]: I1008 17:24:11.830394 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q9dtc" Oct 08 17:24:11 crc kubenswrapper[4624]: I1008 17:24:11.881858 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q9dtc" Oct 08 17:24:12 crc kubenswrapper[4624]: I1008 17:24:12.548923 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q9dtc" Oct 08 17:24:12 crc kubenswrapper[4624]: I1008 17:24:12.618674 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q9dtc"] Oct 08 17:24:14 crc kubenswrapper[4624]: I1008 17:24:14.485300 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q9dtc" podUID="eb8ba03a-f381-40cb-ab81-2787284c2a56" containerName="registry-server" containerID="cri-o://ffd4aba33b26b449b02649240a1893d7b1c5b94eb6be0c8afdbde604a4db4e8e" gracePeriod=2 Oct 08 17:24:14 crc kubenswrapper[4624]: I1008 17:24:14.987024 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q9dtc" Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.094818 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-262dn\" (UniqueName: \"kubernetes.io/projected/eb8ba03a-f381-40cb-ab81-2787284c2a56-kube-api-access-262dn\") pod \"eb8ba03a-f381-40cb-ab81-2787284c2a56\" (UID: \"eb8ba03a-f381-40cb-ab81-2787284c2a56\") " Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.095333 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8ba03a-f381-40cb-ab81-2787284c2a56-catalog-content\") pod \"eb8ba03a-f381-40cb-ab81-2787284c2a56\" (UID: \"eb8ba03a-f381-40cb-ab81-2787284c2a56\") " Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.095716 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8ba03a-f381-40cb-ab81-2787284c2a56-utilities\") pod \"eb8ba03a-f381-40cb-ab81-2787284c2a56\" (UID: \"eb8ba03a-f381-40cb-ab81-2787284c2a56\") " Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.099423 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb8ba03a-f381-40cb-ab81-2787284c2a56-utilities" (OuterVolumeSpecName: "utilities") pod "eb8ba03a-f381-40cb-ab81-2787284c2a56" (UID: "eb8ba03a-f381-40cb-ab81-2787284c2a56"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.108761 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb8ba03a-f381-40cb-ab81-2787284c2a56-kube-api-access-262dn" (OuterVolumeSpecName: "kube-api-access-262dn") pod "eb8ba03a-f381-40cb-ab81-2787284c2a56" (UID: "eb8ba03a-f381-40cb-ab81-2787284c2a56"). InnerVolumeSpecName "kube-api-access-262dn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.136514 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb8ba03a-f381-40cb-ab81-2787284c2a56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb8ba03a-f381-40cb-ab81-2787284c2a56" (UID: "eb8ba03a-f381-40cb-ab81-2787284c2a56"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.198350 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8ba03a-f381-40cb-ab81-2787284c2a56-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.198397 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8ba03a-f381-40cb-ab81-2787284c2a56-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.198413 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-262dn\" (UniqueName: \"kubernetes.io/projected/eb8ba03a-f381-40cb-ab81-2787284c2a56-kube-api-access-262dn\") on node \"crc\" DevicePath \"\"" Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.529876 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q9dtc" Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.530031 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9dtc" event={"ID":"eb8ba03a-f381-40cb-ab81-2787284c2a56","Type":"ContainerDied","Data":"ffd4aba33b26b449b02649240a1893d7b1c5b94eb6be0c8afdbde604a4db4e8e"} Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.530120 4624 scope.go:117] "RemoveContainer" containerID="ffd4aba33b26b449b02649240a1893d7b1c5b94eb6be0c8afdbde604a4db4e8e" Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.532738 4624 generic.go:334] "Generic (PLEG): container finished" podID="eb8ba03a-f381-40cb-ab81-2787284c2a56" containerID="ffd4aba33b26b449b02649240a1893d7b1c5b94eb6be0c8afdbde604a4db4e8e" exitCode=0 Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.532804 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9dtc" event={"ID":"eb8ba03a-f381-40cb-ab81-2787284c2a56","Type":"ContainerDied","Data":"92168323cff4c485e5cd6c89ba03ee9ebc4ecc2029c799606f25624173abd329"} Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.567215 4624 scope.go:117] "RemoveContainer" containerID="ede7d7a59555a4d9c3045ec3d2f566cdcc238cc85ef2575356e4bae167cb0c84" Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.607738 4624 scope.go:117] "RemoveContainer" containerID="fd9784db7ad8b934aaf0a0eaffd13af80d4d996b74a06b109e3408380a6496ea" Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.630238 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q9dtc"] Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.682152 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q9dtc"] Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.725851 4624 scope.go:117] "RemoveContainer" containerID="ffd4aba33b26b449b02649240a1893d7b1c5b94eb6be0c8afdbde604a4db4e8e" Oct 08 17:24:15 crc kubenswrapper[4624]: E1008 17:24:15.729353 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffd4aba33b26b449b02649240a1893d7b1c5b94eb6be0c8afdbde604a4db4e8e\": container with ID starting with ffd4aba33b26b449b02649240a1893d7b1c5b94eb6be0c8afdbde604a4db4e8e not found: ID does not exist" containerID="ffd4aba33b26b449b02649240a1893d7b1c5b94eb6be0c8afdbde604a4db4e8e" Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.731153 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffd4aba33b26b449b02649240a1893d7b1c5b94eb6be0c8afdbde604a4db4e8e"} err="failed to get container status \"ffd4aba33b26b449b02649240a1893d7b1c5b94eb6be0c8afdbde604a4db4e8e\": rpc error: code = NotFound desc = could not find container \"ffd4aba33b26b449b02649240a1893d7b1c5b94eb6be0c8afdbde604a4db4e8e\": container with ID starting with ffd4aba33b26b449b02649240a1893d7b1c5b94eb6be0c8afdbde604a4db4e8e not found: ID does not exist" Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.731197 4624 scope.go:117] "RemoveContainer" containerID="ede7d7a59555a4d9c3045ec3d2f566cdcc238cc85ef2575356e4bae167cb0c84" Oct 08 17:24:15 crc kubenswrapper[4624]: E1008 17:24:15.732333 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ede7d7a59555a4d9c3045ec3d2f566cdcc238cc85ef2575356e4bae167cb0c84\": container with ID 
starting with ede7d7a59555a4d9c3045ec3d2f566cdcc238cc85ef2575356e4bae167cb0c84 not found: ID does not exist" containerID="ede7d7a59555a4d9c3045ec3d2f566cdcc238cc85ef2575356e4bae167cb0c84" Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.732407 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ede7d7a59555a4d9c3045ec3d2f566cdcc238cc85ef2575356e4bae167cb0c84"} err="failed to get container status \"ede7d7a59555a4d9c3045ec3d2f566cdcc238cc85ef2575356e4bae167cb0c84\": rpc error: code = NotFound desc = could not find container \"ede7d7a59555a4d9c3045ec3d2f566cdcc238cc85ef2575356e4bae167cb0c84\": container with ID starting with ede7d7a59555a4d9c3045ec3d2f566cdcc238cc85ef2575356e4bae167cb0c84 not found: ID does not exist" Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.732423 4624 scope.go:117] "RemoveContainer" containerID="fd9784db7ad8b934aaf0a0eaffd13af80d4d996b74a06b109e3408380a6496ea" Oct 08 17:24:15 crc kubenswrapper[4624]: E1008 17:24:15.739385 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd9784db7ad8b934aaf0a0eaffd13af80d4d996b74a06b109e3408380a6496ea\": container with ID starting with fd9784db7ad8b934aaf0a0eaffd13af80d4d996b74a06b109e3408380a6496ea not found: ID does not exist" containerID="fd9784db7ad8b934aaf0a0eaffd13af80d4d996b74a06b109e3408380a6496ea" Oct 08 17:24:15 crc kubenswrapper[4624]: I1008 17:24:15.739458 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd9784db7ad8b934aaf0a0eaffd13af80d4d996b74a06b109e3408380a6496ea"} err="failed to get container status \"fd9784db7ad8b934aaf0a0eaffd13af80d4d996b74a06b109e3408380a6496ea\": rpc error: code = NotFound desc = could not find container \"fd9784db7ad8b934aaf0a0eaffd13af80d4d996b74a06b109e3408380a6496ea\": container with ID starting with fd9784db7ad8b934aaf0a0eaffd13af80d4d996b74a06b109e3408380a6496ea not found: ID does not exist" Oct 08 17:24:17 crc kubenswrapper[4624]: I1008 17:24:17.479165 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb8ba03a-f381-40cb-ab81-2787284c2a56" path="/var/lib/kubelet/pods/eb8ba03a-f381-40cb-ab81-2787284c2a56/volumes" Oct 08 17:25:23 crc kubenswrapper[4624]: I1008 17:25:23.990917 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xsnbj"] Oct 08 17:25:23 crc kubenswrapper[4624]: E1008 17:25:23.994986 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8ba03a-f381-40cb-ab81-2787284c2a56" containerName="extract-utilities" Oct 08 17:25:23 crc kubenswrapper[4624]: I1008 17:25:23.995022 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8ba03a-f381-40cb-ab81-2787284c2a56" containerName="extract-utilities" Oct 08 17:25:23 crc kubenswrapper[4624]: E1008 17:25:23.995070 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8ba03a-f381-40cb-ab81-2787284c2a56" containerName="registry-server" Oct 08 17:25:23 crc kubenswrapper[4624]: I1008 17:25:23.995081 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8ba03a-f381-40cb-ab81-2787284c2a56" containerName="registry-server" Oct 08 17:25:23 crc kubenswrapper[4624]: E1008 17:25:23.995104 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8ba03a-f381-40cb-ab81-2787284c2a56" containerName="extract-content" Oct 08 17:25:23 crc kubenswrapper[4624]: I1008 17:25:23.995112 4624 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="eb8ba03a-f381-40cb-ab81-2787284c2a56" containerName="extract-content" Oct 08 17:25:23 crc kubenswrapper[4624]: I1008 17:25:23.995488 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb8ba03a-f381-40cb-ab81-2787284c2a56" containerName="registry-server" Oct 08 17:25:23 crc kubenswrapper[4624]: I1008 17:25:23.998148 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xsnbj" Oct 08 17:25:24 crc kubenswrapper[4624]: I1008 17:25:24.013360 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xsnbj"] Oct 08 17:25:24 crc kubenswrapper[4624]: I1008 17:25:24.072167 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp544\" (UniqueName: \"kubernetes.io/projected/f5977875-3b1a-404c-944d-0e228843c730-kube-api-access-cp544\") pod \"community-operators-xsnbj\" (UID: \"f5977875-3b1a-404c-944d-0e228843c730\") " pod="openshift-marketplace/community-operators-xsnbj" Oct 08 17:25:24 crc kubenswrapper[4624]: I1008 17:25:24.072213 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5977875-3b1a-404c-944d-0e228843c730-utilities\") pod \"community-operators-xsnbj\" (UID: \"f5977875-3b1a-404c-944d-0e228843c730\") " pod="openshift-marketplace/community-operators-xsnbj" Oct 08 17:25:24 crc kubenswrapper[4624]: I1008 17:25:24.072237 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5977875-3b1a-404c-944d-0e228843c730-catalog-content\") pod \"community-operators-xsnbj\" (UID: \"f5977875-3b1a-404c-944d-0e228843c730\") " pod="openshift-marketplace/community-operators-xsnbj" Oct 08 17:25:24 crc kubenswrapper[4624]: I1008 17:25:24.174152 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp544\" (UniqueName: \"kubernetes.io/projected/f5977875-3b1a-404c-944d-0e228843c730-kube-api-access-cp544\") pod \"community-operators-xsnbj\" (UID: \"f5977875-3b1a-404c-944d-0e228843c730\") " pod="openshift-marketplace/community-operators-xsnbj" Oct 08 17:25:24 crc kubenswrapper[4624]: I1008 17:25:24.174207 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5977875-3b1a-404c-944d-0e228843c730-utilities\") pod \"community-operators-xsnbj\" (UID: \"f5977875-3b1a-404c-944d-0e228843c730\") " pod="openshift-marketplace/community-operators-xsnbj" Oct 08 17:25:24 crc kubenswrapper[4624]: I1008 17:25:24.174227 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5977875-3b1a-404c-944d-0e228843c730-catalog-content\") pod \"community-operators-xsnbj\" (UID: \"f5977875-3b1a-404c-944d-0e228843c730\") " pod="openshift-marketplace/community-operators-xsnbj" Oct 08 17:25:24 crc kubenswrapper[4624]: I1008 17:25:24.174851 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5977875-3b1a-404c-944d-0e228843c730-catalog-content\") pod \"community-operators-xsnbj\" (UID: \"f5977875-3b1a-404c-944d-0e228843c730\") " pod="openshift-marketplace/community-operators-xsnbj" Oct 08 17:25:24 crc kubenswrapper[4624]: I1008 17:25:24.176496 4624 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5977875-3b1a-404c-944d-0e228843c730-utilities\") pod \"community-operators-xsnbj\" (UID: \"f5977875-3b1a-404c-944d-0e228843c730\") " pod="openshift-marketplace/community-operators-xsnbj" Oct 08 17:25:24 crc kubenswrapper[4624]: I1008 17:25:24.203680 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp544\" (UniqueName: \"kubernetes.io/projected/f5977875-3b1a-404c-944d-0e228843c730-kube-api-access-cp544\") pod \"community-operators-xsnbj\" (UID: \"f5977875-3b1a-404c-944d-0e228843c730\") " pod="openshift-marketplace/community-operators-xsnbj" Oct 08 17:25:24 crc kubenswrapper[4624]: I1008 17:25:24.320502 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xsnbj" Oct 08 17:25:26 crc kubenswrapper[4624]: I1008 17:25:26.039201 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xsnbj"] Oct 08 17:25:26 crc kubenswrapper[4624]: I1008 17:25:26.335757 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsnbj" event={"ID":"f5977875-3b1a-404c-944d-0e228843c730","Type":"ContainerStarted","Data":"8f067b642bfe69cc8b7d9a93e408dd5acf28ad8cc090322ceb49af85f4c5c77b"} Oct 08 17:25:26 crc kubenswrapper[4624]: I1008 17:25:26.336290 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsnbj" event={"ID":"f5977875-3b1a-404c-944d-0e228843c730","Type":"ContainerStarted","Data":"ec9887e65e94608f19035bdfa161c6a8e9c50e77a1905b38c8dbbf36e9209459"} Oct 08 17:25:27 crc kubenswrapper[4624]: I1008 17:25:27.347446 4624 generic.go:334] "Generic (PLEG): container finished" podID="f5977875-3b1a-404c-944d-0e228843c730" containerID="8f067b642bfe69cc8b7d9a93e408dd5acf28ad8cc090322ceb49af85f4c5c77b" exitCode=0 Oct 08 17:25:27 crc kubenswrapper[4624]: I1008 17:25:27.347540 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsnbj" event={"ID":"f5977875-3b1a-404c-944d-0e228843c730","Type":"ContainerDied","Data":"8f067b642bfe69cc8b7d9a93e408dd5acf28ad8cc090322ceb49af85f4c5c77b"} Oct 08 17:25:27 crc kubenswrapper[4624]: I1008 17:25:27.351492 4624 generic.go:334] "Generic (PLEG): container finished" podID="2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b" containerID="d3831331bdaf5b20e0d9e7c67a6459ddde5feb4930861907341b55b2da33d66e" exitCode=0 Oct 08 17:25:27 crc kubenswrapper[4624]: I1008 17:25:27.351539 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-28zx9/crc-debug-hsq28" event={"ID":"2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b","Type":"ContainerDied","Data":"d3831331bdaf5b20e0d9e7c67a6459ddde5feb4930861907341b55b2da33d66e"} Oct 08 17:25:28 crc kubenswrapper[4624]: I1008 17:25:28.505462 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-28zx9/crc-debug-hsq28" Oct 08 17:25:28 crc kubenswrapper[4624]: I1008 17:25:28.546111 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-28zx9/crc-debug-hsq28"] Oct 08 17:25:28 crc kubenswrapper[4624]: I1008 17:25:28.562403 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-28zx9/crc-debug-hsq28"] Oct 08 17:25:28 crc kubenswrapper[4624]: I1008 17:25:28.590942 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhgh2\" (UniqueName: \"kubernetes.io/projected/2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b-kube-api-access-fhgh2\") pod \"2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b\" (UID: \"2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b\") " Oct 08 17:25:28 crc kubenswrapper[4624]: I1008 17:25:28.591035 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b-host\") pod \"2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b\" (UID: \"2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b\") " Oct 08 17:25:28 crc kubenswrapper[4624]: I1008 17:25:28.602269 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b-host" (OuterVolumeSpecName: "host") pod "2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b" (UID: "2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 17:25:28 crc kubenswrapper[4624]: I1008 17:25:28.631905 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b-kube-api-access-fhgh2" (OuterVolumeSpecName: "kube-api-access-fhgh2") pod "2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b" (UID: "2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b"). InnerVolumeSpecName "kube-api-access-fhgh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 17:25:28 crc kubenswrapper[4624]: I1008 17:25:28.695934 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhgh2\" (UniqueName: \"kubernetes.io/projected/2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b-kube-api-access-fhgh2\") on node \"crc\" DevicePath \"\"" Oct 08 17:25:28 crc kubenswrapper[4624]: I1008 17:25:28.695973 4624 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b-host\") on node \"crc\" DevicePath \"\"" Oct 08 17:25:29 crc kubenswrapper[4624]: I1008 17:25:29.374389 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsnbj" event={"ID":"f5977875-3b1a-404c-944d-0e228843c730","Type":"ContainerStarted","Data":"0968f9de3bb00ffaab04a2ce7a34693503e35f14c736290cb4a4ed1bd4681c4e"} Oct 08 17:25:29 crc kubenswrapper[4624]: I1008 17:25:29.376140 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6e23cc6132f574ed12b8779327020dbadc7298581080434c8e52f39b9eeed4a" Oct 08 17:25:29 crc kubenswrapper[4624]: I1008 17:25:29.376208 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-28zx9/crc-debug-hsq28" Oct 08 17:25:29 crc kubenswrapper[4624]: I1008 17:25:29.477588 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b" path="/var/lib/kubelet/pods/2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b/volumes" Oct 08 17:25:30 crc kubenswrapper[4624]: I1008 17:25:30.919213 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-28zx9/crc-debug-g9tj4"] Oct 08 17:25:30 crc kubenswrapper[4624]: E1008 17:25:30.921460 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b" containerName="container-00" Oct 08 17:25:30 crc kubenswrapper[4624]: I1008 17:25:30.921581 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b" containerName="container-00" Oct 08 17:25:30 crc kubenswrapper[4624]: I1008 17:25:30.922003 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ce582c8-0b36-4bcb-b523-c2d01c5a4b2b" containerName="container-00" Oct 08 17:25:30 crc kubenswrapper[4624]: I1008 17:25:30.923039 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-28zx9/crc-debug-g9tj4" Oct 08 17:25:30 crc kubenswrapper[4624]: I1008 17:25:30.939777 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxzdz\" (UniqueName: \"kubernetes.io/projected/96bd4138-5c50-43fa-9edb-beaa65b72620-kube-api-access-qxzdz\") pod \"crc-debug-g9tj4\" (UID: \"96bd4138-5c50-43fa-9edb-beaa65b72620\") " pod="openshift-must-gather-28zx9/crc-debug-g9tj4" Oct 08 17:25:30 crc kubenswrapper[4624]: I1008 17:25:30.940178 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96bd4138-5c50-43fa-9edb-beaa65b72620-host\") pod \"crc-debug-g9tj4\" (UID: \"96bd4138-5c50-43fa-9edb-beaa65b72620\") " pod="openshift-must-gather-28zx9/crc-debug-g9tj4" Oct 08 17:25:31 crc kubenswrapper[4624]: I1008 17:25:31.042936 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96bd4138-5c50-43fa-9edb-beaa65b72620-host\") pod \"crc-debug-g9tj4\" (UID: \"96bd4138-5c50-43fa-9edb-beaa65b72620\") " pod="openshift-must-gather-28zx9/crc-debug-g9tj4" Oct 08 17:25:31 crc kubenswrapper[4624]: I1008 17:25:31.043230 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxzdz\" (UniqueName: \"kubernetes.io/projected/96bd4138-5c50-43fa-9edb-beaa65b72620-kube-api-access-qxzdz\") pod \"crc-debug-g9tj4\" (UID: \"96bd4138-5c50-43fa-9edb-beaa65b72620\") " pod="openshift-must-gather-28zx9/crc-debug-g9tj4" Oct 08 17:25:31 crc kubenswrapper[4624]: I1008 17:25:31.067957 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96bd4138-5c50-43fa-9edb-beaa65b72620-host\") pod \"crc-debug-g9tj4\" (UID: \"96bd4138-5c50-43fa-9edb-beaa65b72620\") " pod="openshift-must-gather-28zx9/crc-debug-g9tj4" Oct 08 17:25:31 crc kubenswrapper[4624]: I1008 17:25:31.093410 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxzdz\" (UniqueName: \"kubernetes.io/projected/96bd4138-5c50-43fa-9edb-beaa65b72620-kube-api-access-qxzdz\") pod \"crc-debug-g9tj4\" (UID: \"96bd4138-5c50-43fa-9edb-beaa65b72620\") " 
pod="openshift-must-gather-28zx9/crc-debug-g9tj4" Oct 08 17:25:31 crc kubenswrapper[4624]: I1008 17:25:31.252083 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-28zx9/crc-debug-g9tj4" Oct 08 17:25:31 crc kubenswrapper[4624]: I1008 17:25:31.428023 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-28zx9/crc-debug-g9tj4" event={"ID":"96bd4138-5c50-43fa-9edb-beaa65b72620","Type":"ContainerStarted","Data":"e0075b8b2635184766981d150ea43803d6e1d1b08691d6c4abb43565cb7cc7ae"} Oct 08 17:25:32 crc kubenswrapper[4624]: I1008 17:25:32.438814 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-28zx9/crc-debug-g9tj4" event={"ID":"96bd4138-5c50-43fa-9edb-beaa65b72620","Type":"ContainerStarted","Data":"75e5d04359884ff89bd4674718726410b96cc7c69035787d1a4b88d35efe8eed"} Oct 08 17:25:32 crc kubenswrapper[4624]: I1008 17:25:32.459147 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-28zx9/crc-debug-g9tj4" podStartSLOduration=2.459120126 podStartE2EDuration="2.459120126s" podCreationTimestamp="2025-10-08 17:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 17:25:32.454223111 +0000 UTC m=+10957.605158198" watchObservedRunningTime="2025-10-08 17:25:32.459120126 +0000 UTC m=+10957.610055203" Oct 08 17:25:34 crc kubenswrapper[4624]: I1008 17:25:34.515017 4624 generic.go:334] "Generic (PLEG): container finished" podID="f5977875-3b1a-404c-944d-0e228843c730" containerID="0968f9de3bb00ffaab04a2ce7a34693503e35f14c736290cb4a4ed1bd4681c4e" exitCode=0 Oct 08 17:25:34 crc kubenswrapper[4624]: I1008 17:25:34.515341 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsnbj" event={"ID":"f5977875-3b1a-404c-944d-0e228843c730","Type":"ContainerDied","Data":"0968f9de3bb00ffaab04a2ce7a34693503e35f14c736290cb4a4ed1bd4681c4e"} Oct 08 17:25:35 crc kubenswrapper[4624]: I1008 17:25:35.532079 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsnbj" event={"ID":"f5977875-3b1a-404c-944d-0e228843c730","Type":"ContainerStarted","Data":"8fd4b6548c8db8b9c79fef25f216d1a16e0e8d9cc15791020305c90f712a489a"} Oct 08 17:25:36 crc kubenswrapper[4624]: I1008 17:25:36.557533 4624 generic.go:334] "Generic (PLEG): container finished" podID="96bd4138-5c50-43fa-9edb-beaa65b72620" containerID="75e5d04359884ff89bd4674718726410b96cc7c69035787d1a4b88d35efe8eed" exitCode=0 Oct 08 17:25:36 crc kubenswrapper[4624]: I1008 17:25:36.557632 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-28zx9/crc-debug-g9tj4" event={"ID":"96bd4138-5c50-43fa-9edb-beaa65b72620","Type":"ContainerDied","Data":"75e5d04359884ff89bd4674718726410b96cc7c69035787d1a4b88d35efe8eed"} Oct 08 17:25:36 crc kubenswrapper[4624]: I1008 17:25:36.585892 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xsnbj" podStartSLOduration=5.938644794 podStartE2EDuration="13.585865488s" podCreationTimestamp="2025-10-08 17:25:23 +0000 UTC" firstStartedPulling="2025-10-08 17:25:27.350170381 +0000 UTC m=+10952.501105458" lastFinishedPulling="2025-10-08 17:25:34.997391075 +0000 UTC m=+10960.148326152" observedRunningTime="2025-10-08 17:25:36.581875176 +0000 UTC m=+10961.732810253" watchObservedRunningTime="2025-10-08 17:25:36.585865488 +0000 UTC 
m=+10961.736800585" Oct 08 17:25:37 crc kubenswrapper[4624]: I1008 17:25:37.713698 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-28zx9/crc-debug-g9tj4" Oct 08 17:25:37 crc kubenswrapper[4624]: I1008 17:25:37.842224 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96bd4138-5c50-43fa-9edb-beaa65b72620-host\") pod \"96bd4138-5c50-43fa-9edb-beaa65b72620\" (UID: \"96bd4138-5c50-43fa-9edb-beaa65b72620\") " Oct 08 17:25:37 crc kubenswrapper[4624]: I1008 17:25:37.842377 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxzdz\" (UniqueName: \"kubernetes.io/projected/96bd4138-5c50-43fa-9edb-beaa65b72620-kube-api-access-qxzdz\") pod \"96bd4138-5c50-43fa-9edb-beaa65b72620\" (UID: \"96bd4138-5c50-43fa-9edb-beaa65b72620\") " Oct 08 17:25:37 crc kubenswrapper[4624]: I1008 17:25:37.842395 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96bd4138-5c50-43fa-9edb-beaa65b72620-host" (OuterVolumeSpecName: "host") pod "96bd4138-5c50-43fa-9edb-beaa65b72620" (UID: "96bd4138-5c50-43fa-9edb-beaa65b72620"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 17:25:37 crc kubenswrapper[4624]: I1008 17:25:37.842991 4624 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96bd4138-5c50-43fa-9edb-beaa65b72620-host\") on node \"crc\" DevicePath \"\"" Oct 08 17:25:37 crc kubenswrapper[4624]: I1008 17:25:37.857995 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96bd4138-5c50-43fa-9edb-beaa65b72620-kube-api-access-qxzdz" (OuterVolumeSpecName: "kube-api-access-qxzdz") pod "96bd4138-5c50-43fa-9edb-beaa65b72620" (UID: "96bd4138-5c50-43fa-9edb-beaa65b72620"). InnerVolumeSpecName "kube-api-access-qxzdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 17:25:37 crc kubenswrapper[4624]: I1008 17:25:37.946238 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxzdz\" (UniqueName: \"kubernetes.io/projected/96bd4138-5c50-43fa-9edb-beaa65b72620-kube-api-access-qxzdz\") on node \"crc\" DevicePath \"\"" Oct 08 17:25:38 crc kubenswrapper[4624]: I1008 17:25:38.579541 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-28zx9/crc-debug-g9tj4" event={"ID":"96bd4138-5c50-43fa-9edb-beaa65b72620","Type":"ContainerDied","Data":"e0075b8b2635184766981d150ea43803d6e1d1b08691d6c4abb43565cb7cc7ae"} Oct 08 17:25:38 crc kubenswrapper[4624]: I1008 17:25:38.580222 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0075b8b2635184766981d150ea43803d6e1d1b08691d6c4abb43565cb7cc7ae" Oct 08 17:25:38 crc kubenswrapper[4624]: I1008 17:25:38.580147 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-28zx9/crc-debug-g9tj4" Oct 08 17:25:42 crc kubenswrapper[4624]: I1008 17:25:42.029719 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-28zx9/crc-debug-g9tj4"] Oct 08 17:25:42 crc kubenswrapper[4624]: I1008 17:25:42.038613 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-28zx9/crc-debug-g9tj4"] Oct 08 17:25:43 crc kubenswrapper[4624]: I1008 17:25:43.357891 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-28zx9/crc-debug-hkpsf"] Oct 08 17:25:43 crc kubenswrapper[4624]: E1008 17:25:43.358563 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96bd4138-5c50-43fa-9edb-beaa65b72620" containerName="container-00" Oct 08 17:25:43 crc kubenswrapper[4624]: I1008 17:25:43.358576 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="96bd4138-5c50-43fa-9edb-beaa65b72620" containerName="container-00" Oct 08 17:25:43 crc kubenswrapper[4624]: I1008 17:25:43.358862 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="96bd4138-5c50-43fa-9edb-beaa65b72620" containerName="container-00" Oct 08 17:25:43 crc kubenswrapper[4624]: I1008 17:25:43.360850 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-28zx9/crc-debug-hkpsf" Oct 08 17:25:43 crc kubenswrapper[4624]: I1008 17:25:43.478140 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96bd4138-5c50-43fa-9edb-beaa65b72620" path="/var/lib/kubelet/pods/96bd4138-5c50-43fa-9edb-beaa65b72620/volumes" Oct 08 17:25:43 crc kubenswrapper[4624]: I1008 17:25:43.483574 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk9kr\" (UniqueName: \"kubernetes.io/projected/83ea49a2-c5bf-408e-bb6f-126b8b897214-kube-api-access-fk9kr\") pod \"crc-debug-hkpsf\" (UID: \"83ea49a2-c5bf-408e-bb6f-126b8b897214\") " pod="openshift-must-gather-28zx9/crc-debug-hkpsf" Oct 08 17:25:43 crc kubenswrapper[4624]: I1008 17:25:43.483871 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/83ea49a2-c5bf-408e-bb6f-126b8b897214-host\") pod \"crc-debug-hkpsf\" (UID: \"83ea49a2-c5bf-408e-bb6f-126b8b897214\") " pod="openshift-must-gather-28zx9/crc-debug-hkpsf" Oct 08 17:25:43 crc kubenswrapper[4624]: I1008 17:25:43.586396 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/83ea49a2-c5bf-408e-bb6f-126b8b897214-host\") pod \"crc-debug-hkpsf\" (UID: \"83ea49a2-c5bf-408e-bb6f-126b8b897214\") " pod="openshift-must-gather-28zx9/crc-debug-hkpsf" Oct 08 17:25:43 crc kubenswrapper[4624]: I1008 17:25:43.586830 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk9kr\" (UniqueName: \"kubernetes.io/projected/83ea49a2-c5bf-408e-bb6f-126b8b897214-kube-api-access-fk9kr\") pod \"crc-debug-hkpsf\" (UID: \"83ea49a2-c5bf-408e-bb6f-126b8b897214\") " pod="openshift-must-gather-28zx9/crc-debug-hkpsf" Oct 08 17:25:43 crc kubenswrapper[4624]: I1008 17:25:43.588784 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/83ea49a2-c5bf-408e-bb6f-126b8b897214-host\") pod \"crc-debug-hkpsf\" (UID: \"83ea49a2-c5bf-408e-bb6f-126b8b897214\") " pod="openshift-must-gather-28zx9/crc-debug-hkpsf" Oct 08 17:25:43 crc kubenswrapper[4624]: 
I1008 17:25:43.621748 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk9kr\" (UniqueName: \"kubernetes.io/projected/83ea49a2-c5bf-408e-bb6f-126b8b897214-kube-api-access-fk9kr\") pod \"crc-debug-hkpsf\" (UID: \"83ea49a2-c5bf-408e-bb6f-126b8b897214\") " pod="openshift-must-gather-28zx9/crc-debug-hkpsf" Oct 08 17:25:43 crc kubenswrapper[4624]: I1008 17:25:43.684855 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-28zx9/crc-debug-hkpsf" Oct 08 17:25:44 crc kubenswrapper[4624]: I1008 17:25:44.321485 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xsnbj" Oct 08 17:25:44 crc kubenswrapper[4624]: I1008 17:25:44.321913 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xsnbj" Oct 08 17:25:44 crc kubenswrapper[4624]: I1008 17:25:44.652038 4624 generic.go:334] "Generic (PLEG): container finished" podID="83ea49a2-c5bf-408e-bb6f-126b8b897214" containerID="e55469393c005dff28b928e1758c68a63f59e0023b842254a27b18e591af3ec6" exitCode=0 Oct 08 17:25:44 crc kubenswrapper[4624]: I1008 17:25:44.652112 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-28zx9/crc-debug-hkpsf" event={"ID":"83ea49a2-c5bf-408e-bb6f-126b8b897214","Type":"ContainerDied","Data":"e55469393c005dff28b928e1758c68a63f59e0023b842254a27b18e591af3ec6"} Oct 08 17:25:44 crc kubenswrapper[4624]: I1008 17:25:44.652148 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-28zx9/crc-debug-hkpsf" event={"ID":"83ea49a2-c5bf-408e-bb6f-126b8b897214","Type":"ContainerStarted","Data":"34609937ce4cc723afa35909da8a69a2b4756e622ff5569a5bd76b096205bccf"} Oct 08 17:25:44 crc kubenswrapper[4624]: I1008 17:25:44.700032 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-28zx9/crc-debug-hkpsf"] Oct 08 17:25:44 crc kubenswrapper[4624]: I1008 17:25:44.708889 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-28zx9/crc-debug-hkpsf"] Oct 08 17:25:45 crc kubenswrapper[4624]: I1008 17:25:45.379893 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-xsnbj" podUID="f5977875-3b1a-404c-944d-0e228843c730" containerName="registry-server" probeResult="failure" output=< Oct 08 17:25:45 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 17:25:45 crc kubenswrapper[4624]: > Oct 08 17:25:45 crc kubenswrapper[4624]: I1008 17:25:45.782480 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-28zx9/crc-debug-hkpsf" Oct 08 17:25:45 crc kubenswrapper[4624]: I1008 17:25:45.832209 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/83ea49a2-c5bf-408e-bb6f-126b8b897214-host\") pod \"83ea49a2-c5bf-408e-bb6f-126b8b897214\" (UID: \"83ea49a2-c5bf-408e-bb6f-126b8b897214\") " Oct 08 17:25:45 crc kubenswrapper[4624]: I1008 17:25:45.832353 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk9kr\" (UniqueName: \"kubernetes.io/projected/83ea49a2-c5bf-408e-bb6f-126b8b897214-kube-api-access-fk9kr\") pod \"83ea49a2-c5bf-408e-bb6f-126b8b897214\" (UID: \"83ea49a2-c5bf-408e-bb6f-126b8b897214\") " Oct 08 17:25:45 crc kubenswrapper[4624]: I1008 17:25:45.832345 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83ea49a2-c5bf-408e-bb6f-126b8b897214-host" (OuterVolumeSpecName: "host") pod "83ea49a2-c5bf-408e-bb6f-126b8b897214" (UID: "83ea49a2-c5bf-408e-bb6f-126b8b897214"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 17:25:45 crc kubenswrapper[4624]: I1008 17:25:45.833167 4624 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/83ea49a2-c5bf-408e-bb6f-126b8b897214-host\") on node \"crc\" DevicePath \"\"" Oct 08 17:25:45 crc kubenswrapper[4624]: I1008 17:25:45.841577 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ea49a2-c5bf-408e-bb6f-126b8b897214-kube-api-access-fk9kr" (OuterVolumeSpecName: "kube-api-access-fk9kr") pod "83ea49a2-c5bf-408e-bb6f-126b8b897214" (UID: "83ea49a2-c5bf-408e-bb6f-126b8b897214"). InnerVolumeSpecName "kube-api-access-fk9kr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 17:25:45 crc kubenswrapper[4624]: I1008 17:25:45.935073 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk9kr\" (UniqueName: \"kubernetes.io/projected/83ea49a2-c5bf-408e-bb6f-126b8b897214-kube-api-access-fk9kr\") on node \"crc\" DevicePath \"\"" Oct 08 17:25:46 crc kubenswrapper[4624]: I1008 17:25:46.673909 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-28zx9/crc-debug-hkpsf" Oct 08 17:25:46 crc kubenswrapper[4624]: I1008 17:25:46.675094 4624 scope.go:117] "RemoveContainer" containerID="e55469393c005dff28b928e1758c68a63f59e0023b842254a27b18e591af3ec6" Oct 08 17:25:47 crc kubenswrapper[4624]: I1008 17:25:47.322995 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj_6a24c625-ff51-434c-9db2-3677640cbdef/util/0.log" Oct 08 17:25:47 crc kubenswrapper[4624]: I1008 17:25:47.478862 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83ea49a2-c5bf-408e-bb6f-126b8b897214" path="/var/lib/kubelet/pods/83ea49a2-c5bf-408e-bb6f-126b8b897214/volumes" Oct 08 17:25:47 crc kubenswrapper[4624]: I1008 17:25:47.553289 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj_6a24c625-ff51-434c-9db2-3677640cbdef/util/0.log" Oct 08 17:25:47 crc kubenswrapper[4624]: I1008 17:25:47.626763 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj_6a24c625-ff51-434c-9db2-3677640cbdef/pull/0.log" Oct 08 17:25:47 crc kubenswrapper[4624]: I1008 17:25:47.628161 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj_6a24c625-ff51-434c-9db2-3677640cbdef/pull/0.log" Oct 08 17:25:47 crc kubenswrapper[4624]: I1008 17:25:47.826610 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj_6a24c625-ff51-434c-9db2-3677640cbdef/pull/0.log" Oct 08 17:25:47 crc kubenswrapper[4624]: I1008 17:25:47.834845 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj_6a24c625-ff51-434c-9db2-3677640cbdef/util/0.log" Oct 08 17:25:47 crc kubenswrapper[4624]: I1008 17:25:47.940654 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj_6a24c625-ff51-434c-9db2-3677640cbdef/extract/0.log" Oct 08 17:25:48 crc kubenswrapper[4624]: I1008 17:25:48.079625 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64f84fcdbb-qncx7_afff9e1e-6c7c-42b8-8099-6817f813ddb5/kube-rbac-proxy/0.log" Oct 08 17:25:48 crc kubenswrapper[4624]: I1008 17:25:48.188600 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64f84fcdbb-qncx7_afff9e1e-6c7c-42b8-8099-6817f813ddb5/manager/0.log" Oct 08 17:25:48 crc kubenswrapper[4624]: I1008 17:25:48.352265 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-59cdc64769-rdwmx_7e4bdb15-7f2c-4a03-8882-00312974ef50/kube-rbac-proxy/0.log" Oct 08 17:25:48 crc kubenswrapper[4624]: I1008 17:25:48.449403 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-59cdc64769-rdwmx_7e4bdb15-7f2c-4a03-8882-00312974ef50/manager/0.log" Oct 08 17:25:48 crc kubenswrapper[4624]: I1008 17:25:48.699414 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-687df44cdb-ntt7z_c1f040ec-db8e-42de-8d6c-7758f2f45ecc/kube-rbac-proxy/0.log" Oct 08 17:25:48 crc kubenswrapper[4624]: I1008 17:25:48.721426 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-687df44cdb-ntt7z_c1f040ec-db8e-42de-8d6c-7758f2f45ecc/manager/0.log" Oct 08 17:25:49 crc kubenswrapper[4624]: I1008 17:25:49.034964 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7bb46cd7d-bxlpf_d14acd1f-d497-4c71-8e2a-24c991118c01/kube-rbac-proxy/0.log" Oct 08 17:25:49 crc kubenswrapper[4624]: I1008 17:25:49.289203 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6d9967f8dd-zkxcm_0a8ab8f3-13b9-4f95-b540-ea49d2c5a261/kube-rbac-proxy/0.log" Oct 08 17:25:49 crc kubenswrapper[4624]: I1008 17:25:49.325424 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7bb46cd7d-bxlpf_d14acd1f-d497-4c71-8e2a-24c991118c01/manager/0.log" Oct 08 17:25:49 crc kubenswrapper[4624]: I1008 17:25:49.540244 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6d9967f8dd-zkxcm_0a8ab8f3-13b9-4f95-b540-ea49d2c5a261/manager/0.log" Oct 08 17:25:49 crc kubenswrapper[4624]: I1008 17:25:49.775776 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d74794d9b-5nntt_9dbf7483-c352-4fd8-b0e0-96acf41616b0/manager/0.log" Oct 08 17:25:49 crc kubenswrapper[4624]: I1008 17:25:49.776916 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d74794d9b-5nntt_9dbf7483-c352-4fd8-b0e0-96acf41616b0/kube-rbac-proxy/0.log" Oct 08 17:25:50 crc kubenswrapper[4624]: I1008 17:25:50.306779 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-74cb5cbc49-fnwtr_d957903e-a551-41a6-8360-9af30306414f/kube-rbac-proxy/0.log" Oct 08 17:25:50 crc kubenswrapper[4624]: I1008 17:25:50.307201 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-585fc5b659-bhvnd_6b00e056-2cc0-4eb2-85f9-8fc7197dc67a/kube-rbac-proxy/0.log" Oct 08 17:25:50 crc kubenswrapper[4624]: I1008 17:25:50.324767 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-585fc5b659-bhvnd_6b00e056-2cc0-4eb2-85f9-8fc7197dc67a/manager/0.log" Oct 08 17:25:50 crc kubenswrapper[4624]: I1008 17:25:50.710168 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-74cb5cbc49-fnwtr_d957903e-a551-41a6-8360-9af30306414f/manager/0.log" Oct 08 17:25:50 crc kubenswrapper[4624]: I1008 17:25:50.743942 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-ddb98f99b-zljbd_afba2503-8832-4a9f-8246-390f7ae79b71/manager/0.log" Oct 08 17:25:50 crc kubenswrapper[4624]: I1008 17:25:50.840565 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-ddb98f99b-zljbd_afba2503-8832-4a9f-8246-390f7ae79b71/kube-rbac-proxy/0.log" Oct 08 17:25:51 crc kubenswrapper[4624]: I1008 17:25:51.073336 4624 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59578bc799-q7qf9_c23e992e-ee8f-4d3a-9a3f-9f35a77f60fe/kube-rbac-proxy/0.log" Oct 08 17:25:51 crc kubenswrapper[4624]: I1008 17:25:51.276152 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59578bc799-q7qf9_c23e992e-ee8f-4d3a-9a3f-9f35a77f60fe/manager/0.log" Oct 08 17:25:51 crc kubenswrapper[4624]: I1008 17:25:51.370965 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5777b4f897-fr9v7_5e88daf4-d403-4ba8-827f-a9972c5e40bf/kube-rbac-proxy/0.log" Oct 08 17:25:51 crc kubenswrapper[4624]: I1008 17:25:51.465020 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5777b4f897-fr9v7_5e88daf4-d403-4ba8-827f-a9972c5e40bf/manager/0.log" Oct 08 17:25:51 crc kubenswrapper[4624]: I1008 17:25:51.628671 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-797d478b46-tg58g_25e6b130-d820-475c-aae6-bed0dfbd0d0f/kube-rbac-proxy/0.log" Oct 08 17:25:51 crc kubenswrapper[4624]: I1008 17:25:51.729602 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-797d478b46-tg58g_25e6b130-d820-475c-aae6-bed0dfbd0d0f/manager/0.log" Oct 08 17:25:52 crc kubenswrapper[4624]: I1008 17:25:52.067948 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57bb74c7bf-8zzlr_84583328-8cef-49aa-b812-6f550d1dd71f/kube-rbac-proxy/0.log" Oct 08 17:25:52 crc kubenswrapper[4624]: I1008 17:25:52.226404 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57bb74c7bf-8zzlr_84583328-8cef-49aa-b812-6f550d1dd71f/manager/0.log" Oct 08 17:25:52 crc kubenswrapper[4624]: I1008 17:25:52.305785 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6d7c7ddf95-hm59s_46bfb8aa-1ae5-43b4-88e9-5175655832aa/kube-rbac-proxy/0.log" Oct 08 17:25:52 crc kubenswrapper[4624]: I1008 17:25:52.435284 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6d7c7ddf95-hm59s_46bfb8aa-1ae5-43b4-88e9-5175655832aa/manager/0.log" Oct 08 17:25:52 crc kubenswrapper[4624]: I1008 17:25:52.508389 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd_f4fff91a-a1f8-4e66-9955-005bfa78dfe6/kube-rbac-proxy/0.log" Oct 08 17:25:52 crc kubenswrapper[4624]: I1008 17:25:52.621068 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd_f4fff91a-a1f8-4e66-9955-005bfa78dfe6/manager/0.log" Oct 08 17:25:52 crc kubenswrapper[4624]: I1008 17:25:52.824989 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7984bdc97c-5nw99_3b136221-5f6b-451f-807a-5b66f856daa4/kube-rbac-proxy/0.log" Oct 08 17:25:53 crc kubenswrapper[4624]: I1008 17:25:53.035078 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-74fc58d4cc-drvtb_d7a5f232-4275-4277-8ec9-112eaadf6f4d/kube-rbac-proxy/0.log" Oct 08 17:25:53 crc 
kubenswrapper[4624]: I1008 17:25:53.349539 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-9vmx2_7c422ddf-650f-4f8d-828e-12a834b70bab/registry-server/0.log" Oct 08 17:25:53 crc kubenswrapper[4624]: I1008 17:25:53.369363 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-74fc58d4cc-drvtb_d7a5f232-4275-4277-8ec9-112eaadf6f4d/operator/0.log" Oct 08 17:25:53 crc kubenswrapper[4624]: I1008 17:25:53.565102 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f96f8c84-s6lmj_657a98dc-f421-4483-9354-28eeb59bb8a0/kube-rbac-proxy/0.log" Oct 08 17:25:53 crc kubenswrapper[4624]: I1008 17:25:53.792818 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-664664cb68-5bwth_2e68c8fe-365a-4d66-bbbd-0cac98993f72/kube-rbac-proxy/0.log" Oct 08 17:25:53 crc kubenswrapper[4624]: I1008 17:25:53.936853 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f96f8c84-s6lmj_657a98dc-f421-4483-9354-28eeb59bb8a0/manager/0.log" Oct 08 17:25:54 crc kubenswrapper[4624]: I1008 17:25:54.021018 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-664664cb68-5bwth_2e68c8fe-365a-4d66-bbbd-0cac98993f72/manager/0.log" Oct 08 17:25:54 crc kubenswrapper[4624]: I1008 17:25:54.063099 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7984bdc97c-5nw99_3b136221-5f6b-451f-807a-5b66f856daa4/manager/0.log" Oct 08 17:25:54 crc kubenswrapper[4624]: I1008 17:25:54.274277 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-dc5gv_5b5f14f4-2722-46d2-9aa4-958caf004e89/operator/0.log" Oct 08 17:25:54 crc kubenswrapper[4624]: I1008 17:25:54.300516 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f4d5dfdc6-vzbcj_4fed85c8-e5c1-40db-9799-64b1705f9d86/kube-rbac-proxy/0.log" Oct 08 17:25:54 crc kubenswrapper[4624]: I1008 17:25:54.364743 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f4d5dfdc6-vzbcj_4fed85c8-e5c1-40db-9799-64b1705f9d86/manager/0.log" Oct 08 17:25:54 crc kubenswrapper[4624]: I1008 17:25:54.388459 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xsnbj" Oct 08 17:25:54 crc kubenswrapper[4624]: I1008 17:25:54.471554 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xsnbj" Oct 08 17:25:54 crc kubenswrapper[4624]: I1008 17:25:54.497844 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-775776c574-m7brx_2f4ed0ab-4cb6-4874-8c0d-cce4fdbfa7c9/kube-rbac-proxy/0.log" Oct 08 17:25:54 crc kubenswrapper[4624]: I1008 17:25:54.733512 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-74665f6cdc-wklj4_de7ff9ef-39f5-4521-856d-28c2665e7893/kube-rbac-proxy/0.log" Oct 08 17:25:54 crc kubenswrapper[4624]: I1008 17:25:54.762195 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-775776c574-m7brx_2f4ed0ab-4cb6-4874-8c0d-cce4fdbfa7c9/manager/0.log" Oct 08 17:25:54 crc kubenswrapper[4624]: I1008 17:25:54.798517 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-74665f6cdc-wklj4_de7ff9ef-39f5-4521-856d-28c2665e7893/manager/0.log" Oct 08 17:25:54 crc kubenswrapper[4624]: I1008 17:25:54.918991 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5dd4499c96-l4cbt_60b85252-6e34-43c5-a048-52fe105f2f93/kube-rbac-proxy/0.log" Oct 08 17:25:54 crc kubenswrapper[4624]: I1008 17:25:54.997266 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5dd4499c96-l4cbt_60b85252-6e34-43c5-a048-52fe105f2f93/manager/0.log" Oct 08 17:25:55 crc kubenswrapper[4624]: I1008 17:25:55.229849 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xsnbj"] Oct 08 17:25:55 crc kubenswrapper[4624]: I1008 17:25:55.767418 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xsnbj" podUID="f5977875-3b1a-404c-944d-0e228843c730" containerName="registry-server" containerID="cri-o://8fd4b6548c8db8b9c79fef25f216d1a16e0e8d9cc15791020305c90f712a489a" gracePeriod=2 Oct 08 17:25:56 crc kubenswrapper[4624]: I1008 17:25:56.786509 4624 generic.go:334] "Generic (PLEG): container finished" podID="f5977875-3b1a-404c-944d-0e228843c730" containerID="8fd4b6548c8db8b9c79fef25f216d1a16e0e8d9cc15791020305c90f712a489a" exitCode=0 Oct 08 17:25:56 crc kubenswrapper[4624]: I1008 17:25:56.786568 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsnbj" event={"ID":"f5977875-3b1a-404c-944d-0e228843c730","Type":"ContainerDied","Data":"8fd4b6548c8db8b9c79fef25f216d1a16e0e8d9cc15791020305c90f712a489a"} Oct 08 17:25:57 crc kubenswrapper[4624]: I1008 17:25:57.089917 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xsnbj" Oct 08 17:25:57 crc kubenswrapper[4624]: I1008 17:25:57.201811 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp544\" (UniqueName: \"kubernetes.io/projected/f5977875-3b1a-404c-944d-0e228843c730-kube-api-access-cp544\") pod \"f5977875-3b1a-404c-944d-0e228843c730\" (UID: \"f5977875-3b1a-404c-944d-0e228843c730\") " Oct 08 17:25:57 crc kubenswrapper[4624]: I1008 17:25:57.201895 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5977875-3b1a-404c-944d-0e228843c730-utilities\") pod \"f5977875-3b1a-404c-944d-0e228843c730\" (UID: \"f5977875-3b1a-404c-944d-0e228843c730\") " Oct 08 17:25:57 crc kubenswrapper[4624]: I1008 17:25:57.201987 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5977875-3b1a-404c-944d-0e228843c730-catalog-content\") pod \"f5977875-3b1a-404c-944d-0e228843c730\" (UID: \"f5977875-3b1a-404c-944d-0e228843c730\") " Oct 08 17:25:57 crc kubenswrapper[4624]: I1008 17:25:57.205229 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5977875-3b1a-404c-944d-0e228843c730-utilities" (OuterVolumeSpecName: "utilities") pod "f5977875-3b1a-404c-944d-0e228843c730" (UID: "f5977875-3b1a-404c-944d-0e228843c730"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:25:57 crc kubenswrapper[4624]: I1008 17:25:57.215288 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5977875-3b1a-404c-944d-0e228843c730-kube-api-access-cp544" (OuterVolumeSpecName: "kube-api-access-cp544") pod "f5977875-3b1a-404c-944d-0e228843c730" (UID: "f5977875-3b1a-404c-944d-0e228843c730"). InnerVolumeSpecName "kube-api-access-cp544". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 17:25:57 crc kubenswrapper[4624]: I1008 17:25:57.264664 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5977875-3b1a-404c-944d-0e228843c730-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5977875-3b1a-404c-944d-0e228843c730" (UID: "f5977875-3b1a-404c-944d-0e228843c730"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:25:57 crc kubenswrapper[4624]: I1008 17:25:57.305060 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp544\" (UniqueName: \"kubernetes.io/projected/f5977875-3b1a-404c-944d-0e228843c730-kube-api-access-cp544\") on node \"crc\" DevicePath \"\"" Oct 08 17:25:57 crc kubenswrapper[4624]: I1008 17:25:57.305187 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5977875-3b1a-404c-944d-0e228843c730-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 17:25:57 crc kubenswrapper[4624]: I1008 17:25:57.305202 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5977875-3b1a-404c-944d-0e228843c730-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 17:25:57 crc kubenswrapper[4624]: I1008 17:25:57.799690 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsnbj" event={"ID":"f5977875-3b1a-404c-944d-0e228843c730","Type":"ContainerDied","Data":"ec9887e65e94608f19035bdfa161c6a8e9c50e77a1905b38c8dbbf36e9209459"} Oct 08 17:25:57 crc kubenswrapper[4624]: I1008 17:25:57.799973 4624 scope.go:117] "RemoveContainer" containerID="8fd4b6548c8db8b9c79fef25f216d1a16e0e8d9cc15791020305c90f712a489a" Oct 08 17:25:57 crc kubenswrapper[4624]: I1008 17:25:57.799772 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xsnbj" Oct 08 17:25:57 crc kubenswrapper[4624]: I1008 17:25:57.822253 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xsnbj"] Oct 08 17:25:57 crc kubenswrapper[4624]: I1008 17:25:57.830447 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xsnbj"] Oct 08 17:25:57 crc kubenswrapper[4624]: I1008 17:25:57.839693 4624 scope.go:117] "RemoveContainer" containerID="0968f9de3bb00ffaab04a2ce7a34693503e35f14c736290cb4a4ed1bd4681c4e" Oct 08 17:25:57 crc kubenswrapper[4624]: I1008 17:25:57.870014 4624 scope.go:117] "RemoveContainer" containerID="8f067b642bfe69cc8b7d9a93e408dd5acf28ad8cc090322ceb49af85f4c5c77b" Oct 08 17:25:59 crc kubenswrapper[4624]: I1008 17:25:59.475824 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5977875-3b1a-404c-944d-0e228843c730" path="/var/lib/kubelet/pods/f5977875-3b1a-404c-944d-0e228843c730/volumes" Oct 08 17:26:14 crc kubenswrapper[4624]: I1008 17:26:14.600504 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-hrd59_d3a80e27-d7fd-4b62-b5ae-9719c4f69655/control-plane-machine-set-operator/0.log" Oct 08 17:26:14 crc kubenswrapper[4624]: I1008 17:26:14.714559 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bbffn_100b758b-a285-49a0-a5ec-0b565dce5e1a/kube-rbac-proxy/0.log" Oct 08 17:26:14 crc kubenswrapper[4624]: I1008 17:26:14.797846 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bbffn_100b758b-a285-49a0-a5ec-0b565dce5e1a/machine-api-operator/0.log" Oct 08 17:26:28 crc kubenswrapper[4624]: I1008 17:26:28.899809 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-nh2ht_1f6ac2d9-a161-44b3-9e1d-8ba6eca7fa85/cert-manager-controller/0.log" Oct 08 17:26:29 
crc kubenswrapper[4624]: I1008 17:26:29.035850 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-6b7n8_733c04ac-a5c1-44e6-8314-800c327491f9/cert-manager-cainjector/0.log" Oct 08 17:26:29 crc kubenswrapper[4624]: I1008 17:26:29.194282 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-pxvg9_973fdfa2-38af-4052-9fe8-d9657c1be807/cert-manager-webhook/0.log" Oct 08 17:26:30 crc kubenswrapper[4624]: I1008 17:26:30.076171 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 17:26:30 crc kubenswrapper[4624]: I1008 17:26:30.077509 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 17:26:43 crc kubenswrapper[4624]: I1008 17:26:43.497419 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-tb675_cc684d95-c43f-4d51-abca-fe8d1719d548/nmstate-console-plugin/0.log" Oct 08 17:26:43 crc kubenswrapper[4624]: I1008 17:26:43.626906 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-qfgz8_e42689fb-9ba3-4ab8-8e3d-8ca52a091c49/nmstate-handler/0.log" Oct 08 17:26:43 crc kubenswrapper[4624]: I1008 17:26:43.776140 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-8bkj8_96e0b6b6-38d2-491f-ae9a-5be4860d1da0/kube-rbac-proxy/0.log" Oct 08 17:26:43 crc kubenswrapper[4624]: I1008 17:26:43.917147 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-8bkj8_96e0b6b6-38d2-491f-ae9a-5be4860d1da0/nmstate-metrics/0.log" Oct 08 17:26:44 crc kubenswrapper[4624]: I1008 17:26:44.023392 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-626pz_caa42ed9-a0c0-4e5b-836a-9bfcd09c439e/nmstate-operator/0.log" Oct 08 17:26:44 crc kubenswrapper[4624]: I1008 17:26:44.195630 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-jbwmj_d82b9420-16a1-4ddf-9239-7b649b9429d2/nmstate-webhook/0.log" Oct 08 17:27:00 crc kubenswrapper[4624]: I1008 17:27:00.026666 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-rm4d2_43a3eaca-a2c8-4508-996d-b6c977997ea7/kube-rbac-proxy/0.log" Oct 08 17:27:00 crc kubenswrapper[4624]: I1008 17:27:00.076566 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 17:27:00 crc kubenswrapper[4624]: I1008 17:27:00.076647 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 17:27:00 crc kubenswrapper[4624]: I1008 17:27:00.077786 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-rm4d2_43a3eaca-a2c8-4508-996d-b6c977997ea7/controller/0.log" Oct 08 17:27:00 crc kubenswrapper[4624]: I1008 17:27:00.295690 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-frr-files/0.log" Oct 08 17:27:00 crc kubenswrapper[4624]: I1008 17:27:00.478786 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-frr-files/0.log" Oct 08 17:27:00 crc kubenswrapper[4624]: I1008 17:27:00.491928 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-reloader/0.log" Oct 08 17:27:00 crc kubenswrapper[4624]: I1008 17:27:00.510996 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-metrics/0.log" Oct 08 17:27:00 crc kubenswrapper[4624]: I1008 17:27:00.516680 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-reloader/0.log" Oct 08 17:27:00 crc kubenswrapper[4624]: I1008 17:27:00.746916 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-frr-files/0.log" Oct 08 17:27:00 crc kubenswrapper[4624]: I1008 17:27:00.779493 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-metrics/0.log" Oct 08 17:27:00 crc kubenswrapper[4624]: I1008 17:27:00.811126 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-metrics/0.log" Oct 08 17:27:00 crc kubenswrapper[4624]: I1008 17:27:00.839362 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-reloader/0.log" Oct 08 17:27:01 crc kubenswrapper[4624]: I1008 17:27:01.030031 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-frr-files/0.log" Oct 08 17:27:01 crc kubenswrapper[4624]: I1008 17:27:01.039951 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-metrics/0.log" Oct 08 17:27:01 crc kubenswrapper[4624]: I1008 17:27:01.053836 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-reloader/0.log" Oct 08 17:27:01 crc kubenswrapper[4624]: I1008 17:27:01.058834 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/controller/0.log" Oct 08 17:27:01 crc kubenswrapper[4624]: I1008 17:27:01.258880 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/frr-metrics/0.log" Oct 08 17:27:01 crc kubenswrapper[4624]: I1008 17:27:01.310823 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/kube-rbac-proxy/0.log" Oct 08 17:27:01 crc kubenswrapper[4624]: I1008 
17:27:01.359487 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/kube-rbac-proxy-frr/0.log" Oct 08 17:27:01 crc kubenswrapper[4624]: I1008 17:27:01.593678 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/reloader/0.log" Oct 08 17:27:01 crc kubenswrapper[4624]: I1008 17:27:01.655923 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-bf8cm_e36921e5-1d1e-420d-9dc1-f6aaad1bf904/frr-k8s-webhook-server/0.log" Oct 08 17:27:02 crc kubenswrapper[4624]: I1008 17:27:02.000034 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-bb6fb947b-d4j7c_38a79520-0ffe-4c51-8cba-117b74d7e7c8/manager/0.log" Oct 08 17:27:02 crc kubenswrapper[4624]: I1008 17:27:02.344945 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7df686497b-8bpg5_b8dd1cb9-7393-4033-abac-22f7aafa0235/webhook-server/0.log" Oct 08 17:27:02 crc kubenswrapper[4624]: I1008 17:27:02.454760 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vs5mh_25542429-7dd5-4d22-a273-38386ed868ac/kube-rbac-proxy/0.log" Oct 08 17:27:03 crc kubenswrapper[4624]: I1008 17:27:03.251677 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vs5mh_25542429-7dd5-4d22-a273-38386ed868ac/speaker/0.log" Oct 08 17:27:03 crc kubenswrapper[4624]: I1008 17:27:03.736573 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/frr/0.log" Oct 08 17:27:17 crc kubenswrapper[4624]: I1008 17:27:17.058314 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs_d35debc8-9835-42bc-833e-7412681e9a4d/util/0.log" Oct 08 17:27:17 crc kubenswrapper[4624]: I1008 17:27:17.226074 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs_d35debc8-9835-42bc-833e-7412681e9a4d/pull/0.log" Oct 08 17:27:17 crc kubenswrapper[4624]: I1008 17:27:17.311349 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs_d35debc8-9835-42bc-833e-7412681e9a4d/pull/0.log" Oct 08 17:27:17 crc kubenswrapper[4624]: I1008 17:27:17.367291 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs_d35debc8-9835-42bc-833e-7412681e9a4d/util/0.log" Oct 08 17:27:17 crc kubenswrapper[4624]: I1008 17:27:17.560325 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs_d35debc8-9835-42bc-833e-7412681e9a4d/pull/0.log" Oct 08 17:27:17 crc kubenswrapper[4624]: I1008 17:27:17.608304 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs_d35debc8-9835-42bc-833e-7412681e9a4d/util/0.log" Oct 08 17:27:17 crc kubenswrapper[4624]: I1008 17:27:17.626765 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs_d35debc8-9835-42bc-833e-7412681e9a4d/extract/0.log" Oct 08 17:27:17 crc kubenswrapper[4624]: I1008 17:27:17.811806 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5699q_cee06cd4-6d0b-4f4c-8734-b929607ec920/extract-utilities/0.log" Oct 08 17:27:17 crc kubenswrapper[4624]: I1008 17:27:17.987070 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5699q_cee06cd4-6d0b-4f4c-8734-b929607ec920/extract-utilities/0.log" Oct 08 17:27:17 crc kubenswrapper[4624]: I1008 17:27:17.993433 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5699q_cee06cd4-6d0b-4f4c-8734-b929607ec920/extract-content/0.log" Oct 08 17:27:18 crc kubenswrapper[4624]: I1008 17:27:18.055532 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5699q_cee06cd4-6d0b-4f4c-8734-b929607ec920/extract-content/0.log" Oct 08 17:27:18 crc kubenswrapper[4624]: I1008 17:27:18.364045 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5699q_cee06cd4-6d0b-4f4c-8734-b929607ec920/extract-utilities/0.log" Oct 08 17:27:18 crc kubenswrapper[4624]: I1008 17:27:18.409336 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5699q_cee06cd4-6d0b-4f4c-8734-b929607ec920/extract-content/0.log" Oct 08 17:27:18 crc kubenswrapper[4624]: I1008 17:27:18.527105 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5699q_cee06cd4-6d0b-4f4c-8734-b929607ec920/registry-server/0.log" Oct 08 17:27:18 crc kubenswrapper[4624]: I1008 17:27:18.710603 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdp9b_d746ff3a-6adf-490a-a8fc-a2cbf477ff25/extract-utilities/0.log" Oct 08 17:27:19 crc kubenswrapper[4624]: I1008 17:27:19.054236 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdp9b_d746ff3a-6adf-490a-a8fc-a2cbf477ff25/extract-content/0.log" Oct 08 17:27:19 crc kubenswrapper[4624]: I1008 17:27:19.259993 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdp9b_d746ff3a-6adf-490a-a8fc-a2cbf477ff25/extract-utilities/0.log" Oct 08 17:27:19 crc kubenswrapper[4624]: I1008 17:27:19.265393 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdp9b_d746ff3a-6adf-490a-a8fc-a2cbf477ff25/extract-content/0.log" Oct 08 17:27:19 crc kubenswrapper[4624]: I1008 17:27:19.507147 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdp9b_d746ff3a-6adf-490a-a8fc-a2cbf477ff25/extract-content/0.log" Oct 08 17:27:19 crc kubenswrapper[4624]: I1008 17:27:19.611842 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdp9b_d746ff3a-6adf-490a-a8fc-a2cbf477ff25/extract-utilities/0.log" Oct 08 17:27:19 crc kubenswrapper[4624]: I1008 17:27:19.860575 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb_7412e80b-7f98-4f46-8223-2389077e9175/util/0.log" Oct 08 17:27:20 crc kubenswrapper[4624]: I1008 17:27:20.201827 4624 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb_7412e80b-7f98-4f46-8223-2389077e9175/pull/0.log" Oct 08 17:27:20 crc kubenswrapper[4624]: I1008 17:27:20.237496 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb_7412e80b-7f98-4f46-8223-2389077e9175/pull/0.log" Oct 08 17:27:20 crc kubenswrapper[4624]: I1008 17:27:20.321152 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb_7412e80b-7f98-4f46-8223-2389077e9175/util/0.log" Oct 08 17:27:20 crc kubenswrapper[4624]: I1008 17:27:20.488189 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb_7412e80b-7f98-4f46-8223-2389077e9175/util/0.log" Oct 08 17:27:20 crc kubenswrapper[4624]: I1008 17:27:20.603863 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb_7412e80b-7f98-4f46-8223-2389077e9175/pull/0.log" Oct 08 17:27:20 crc kubenswrapper[4624]: I1008 17:27:20.647900 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb_7412e80b-7f98-4f46-8223-2389077e9175/extract/0.log" Oct 08 17:27:21 crc kubenswrapper[4624]: I1008 17:27:20.979228 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cwbr6_348b94bb-ae57-4f52-8592-53abc49b97d0/marketplace-operator/0.log" Oct 08 17:27:21 crc kubenswrapper[4624]: I1008 17:27:21.160068 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdp9b_d746ff3a-6adf-490a-a8fc-a2cbf477ff25/registry-server/0.log" Oct 08 17:27:21 crc kubenswrapper[4624]: I1008 17:27:21.280826 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ld57t_4b07dc5b-db1a-4f6b-be6c-660723b543b3/extract-utilities/0.log" Oct 08 17:27:21 crc kubenswrapper[4624]: I1008 17:27:21.642905 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ld57t_4b07dc5b-db1a-4f6b-be6c-660723b543b3/extract-utilities/0.log" Oct 08 17:27:21 crc kubenswrapper[4624]: I1008 17:27:21.644283 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ld57t_4b07dc5b-db1a-4f6b-be6c-660723b543b3/extract-content/0.log" Oct 08 17:27:21 crc kubenswrapper[4624]: I1008 17:27:21.662671 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ld57t_4b07dc5b-db1a-4f6b-be6c-660723b543b3/extract-content/0.log" Oct 08 17:27:21 crc kubenswrapper[4624]: I1008 17:27:21.977741 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ld57t_4b07dc5b-db1a-4f6b-be6c-660723b543b3/extract-content/0.log" Oct 08 17:27:21 crc kubenswrapper[4624]: I1008 17:27:21.994715 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ld57t_4b07dc5b-db1a-4f6b-be6c-660723b543b3/extract-utilities/0.log" Oct 08 17:27:22 crc kubenswrapper[4624]: I1008 17:27:22.159793 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-n7zcs_8c4b2af3-b2f0-4f10-9b2b-81a83483be29/extract-utilities/0.log" Oct 08 17:27:22 crc kubenswrapper[4624]: I1008 17:27:22.384500 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n7zcs_8c4b2af3-b2f0-4f10-9b2b-81a83483be29/extract-content/0.log" Oct 08 17:27:22 crc kubenswrapper[4624]: I1008 17:27:22.399616 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ld57t_4b07dc5b-db1a-4f6b-be6c-660723b543b3/registry-server/0.log" Oct 08 17:27:22 crc kubenswrapper[4624]: I1008 17:27:22.434610 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n7zcs_8c4b2af3-b2f0-4f10-9b2b-81a83483be29/extract-utilities/0.log" Oct 08 17:27:22 crc kubenswrapper[4624]: I1008 17:27:22.499089 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n7zcs_8c4b2af3-b2f0-4f10-9b2b-81a83483be29/extract-content/0.log" Oct 08 17:27:22 crc kubenswrapper[4624]: I1008 17:27:22.757776 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n7zcs_8c4b2af3-b2f0-4f10-9b2b-81a83483be29/extract-utilities/0.log" Oct 08 17:27:22 crc kubenswrapper[4624]: I1008 17:27:22.793490 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n7zcs_8c4b2af3-b2f0-4f10-9b2b-81a83483be29/extract-content/0.log" Oct 08 17:27:23 crc kubenswrapper[4624]: I1008 17:27:23.983245 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n7zcs_8c4b2af3-b2f0-4f10-9b2b-81a83483be29/registry-server/0.log" Oct 08 17:27:30 crc kubenswrapper[4624]: I1008 17:27:30.076086 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 17:27:30 crc kubenswrapper[4624]: I1008 17:27:30.076625 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 17:27:30 crc kubenswrapper[4624]: I1008 17:27:30.077778 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 17:27:30 crc kubenswrapper[4624]: I1008 17:27:30.079687 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cdc8e4c2153c0bad978a82ddbd7fd3a6946447cc4f4f7ca7442f9de17e890a9c"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 17:27:30 crc kubenswrapper[4624]: I1008 17:27:30.080338 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://cdc8e4c2153c0bad978a82ddbd7fd3a6946447cc4f4f7ca7442f9de17e890a9c" gracePeriod=600 Oct 08 17:27:30 
crc kubenswrapper[4624]: I1008 17:27:30.824660 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="cdc8e4c2153c0bad978a82ddbd7fd3a6946447cc4f4f7ca7442f9de17e890a9c" exitCode=0 Oct 08 17:27:30 crc kubenswrapper[4624]: I1008 17:27:30.824727 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"cdc8e4c2153c0bad978a82ddbd7fd3a6946447cc4f4f7ca7442f9de17e890a9c"} Oct 08 17:27:30 crc kubenswrapper[4624]: I1008 17:27:30.826323 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109"} Oct 08 17:27:30 crc kubenswrapper[4624]: I1008 17:27:30.828759 4624 scope.go:117] "RemoveContainer" containerID="42e26894b8818bd39ad134619bb8da8375f27a6ba3d26348fa8be4de2e0620ee" Oct 08 17:28:48 crc kubenswrapper[4624]: I1008 17:28:48.982123 4624 scope.go:117] "RemoveContainer" containerID="d3831331bdaf5b20e0d9e7c67a6459ddde5feb4930861907341b55b2da33d66e" Oct 08 17:29:30 crc kubenswrapper[4624]: I1008 17:29:30.076789 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 17:29:30 crc kubenswrapper[4624]: I1008 17:29:30.079895 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 17:30:00 crc kubenswrapper[4624]: I1008 17:30:00.076982 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 17:30:00 crc kubenswrapper[4624]: I1008 17:30:00.080749 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 17:30:00 crc kubenswrapper[4624]: I1008 17:30:00.726943 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332410-9fkrl"] Oct 08 17:30:00 crc kubenswrapper[4624]: E1008 17:30:00.747741 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ea49a2-c5bf-408e-bb6f-126b8b897214" containerName="container-00" Oct 08 17:30:00 crc kubenswrapper[4624]: I1008 17:30:00.747776 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ea49a2-c5bf-408e-bb6f-126b8b897214" containerName="container-00" Oct 08 17:30:00 crc kubenswrapper[4624]: E1008 17:30:00.747789 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5977875-3b1a-404c-944d-0e228843c730" containerName="extract-utilities" 
Oct 08 17:30:00 crc kubenswrapper[4624]: I1008 17:30:00.747801 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5977875-3b1a-404c-944d-0e228843c730" containerName="extract-utilities" Oct 08 17:30:00 crc kubenswrapper[4624]: E1008 17:30:00.747847 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5977875-3b1a-404c-944d-0e228843c730" containerName="extract-content" Oct 08 17:30:00 crc kubenswrapper[4624]: I1008 17:30:00.747855 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5977875-3b1a-404c-944d-0e228843c730" containerName="extract-content" Oct 08 17:30:00 crc kubenswrapper[4624]: E1008 17:30:00.747870 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5977875-3b1a-404c-944d-0e228843c730" containerName="registry-server" Oct 08 17:30:00 crc kubenswrapper[4624]: I1008 17:30:00.747876 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5977875-3b1a-404c-944d-0e228843c730" containerName="registry-server" Oct 08 17:30:00 crc kubenswrapper[4624]: I1008 17:30:00.801736 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ea49a2-c5bf-408e-bb6f-126b8b897214" containerName="container-00" Oct 08 17:30:00 crc kubenswrapper[4624]: I1008 17:30:00.802162 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5977875-3b1a-404c-944d-0e228843c730" containerName="registry-server" Oct 08 17:30:00 crc kubenswrapper[4624]: I1008 17:30:00.815081 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332410-9fkrl"] Oct 08 17:30:00 crc kubenswrapper[4624]: I1008 17:30:00.815380 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332410-9fkrl" Oct 08 17:30:00 crc kubenswrapper[4624]: I1008 17:30:00.850378 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 17:30:00 crc kubenswrapper[4624]: I1008 17:30:00.851781 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 17:30:00 crc kubenswrapper[4624]: I1008 17:30:00.951583 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a258593-7852-4437-b8d0-daac88ded225-config-volume\") pod \"collect-profiles-29332410-9fkrl\" (UID: \"2a258593-7852-4437-b8d0-daac88ded225\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332410-9fkrl" Oct 08 17:30:00 crc kubenswrapper[4624]: I1008 17:30:00.951737 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a258593-7852-4437-b8d0-daac88ded225-secret-volume\") pod \"collect-profiles-29332410-9fkrl\" (UID: \"2a258593-7852-4437-b8d0-daac88ded225\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332410-9fkrl" Oct 08 17:30:00 crc kubenswrapper[4624]: I1008 17:30:00.952020 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wbg4\" (UniqueName: \"kubernetes.io/projected/2a258593-7852-4437-b8d0-daac88ded225-kube-api-access-4wbg4\") pod \"collect-profiles-29332410-9fkrl\" (UID: \"2a258593-7852-4437-b8d0-daac88ded225\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332410-9fkrl" Oct 08 17:30:01 crc 
kubenswrapper[4624]: I1008 17:30:01.054497 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wbg4\" (UniqueName: \"kubernetes.io/projected/2a258593-7852-4437-b8d0-daac88ded225-kube-api-access-4wbg4\") pod \"collect-profiles-29332410-9fkrl\" (UID: \"2a258593-7852-4437-b8d0-daac88ded225\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332410-9fkrl" Oct 08 17:30:01 crc kubenswrapper[4624]: I1008 17:30:01.054743 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a258593-7852-4437-b8d0-daac88ded225-config-volume\") pod \"collect-profiles-29332410-9fkrl\" (UID: \"2a258593-7852-4437-b8d0-daac88ded225\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332410-9fkrl" Oct 08 17:30:01 crc kubenswrapper[4624]: I1008 17:30:01.054781 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a258593-7852-4437-b8d0-daac88ded225-secret-volume\") pod \"collect-profiles-29332410-9fkrl\" (UID: \"2a258593-7852-4437-b8d0-daac88ded225\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332410-9fkrl" Oct 08 17:30:01 crc kubenswrapper[4624]: I1008 17:30:01.072154 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a258593-7852-4437-b8d0-daac88ded225-config-volume\") pod \"collect-profiles-29332410-9fkrl\" (UID: \"2a258593-7852-4437-b8d0-daac88ded225\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332410-9fkrl" Oct 08 17:30:01 crc kubenswrapper[4624]: I1008 17:30:01.082260 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a258593-7852-4437-b8d0-daac88ded225-secret-volume\") pod \"collect-profiles-29332410-9fkrl\" (UID: \"2a258593-7852-4437-b8d0-daac88ded225\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332410-9fkrl" Oct 08 17:30:01 crc kubenswrapper[4624]: I1008 17:30:01.098694 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wbg4\" (UniqueName: \"kubernetes.io/projected/2a258593-7852-4437-b8d0-daac88ded225-kube-api-access-4wbg4\") pod \"collect-profiles-29332410-9fkrl\" (UID: \"2a258593-7852-4437-b8d0-daac88ded225\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29332410-9fkrl" Oct 08 17:30:01 crc kubenswrapper[4624]: I1008 17:30:01.191507 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332410-9fkrl" Oct 08 17:30:03 crc kubenswrapper[4624]: I1008 17:30:03.352919 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332410-9fkrl"] Oct 08 17:30:03 crc kubenswrapper[4624]: W1008 17:30:03.431914 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a258593_7852_4437_b8d0_daac88ded225.slice/crio-8ecf6edec168141e247c0674e25fe71e1904f1bead4d4ac784633a66cb6bed3a WatchSource:0}: Error finding container 8ecf6edec168141e247c0674e25fe71e1904f1bead4d4ac784633a66cb6bed3a: Status 404 returned error can't find the container with id 8ecf6edec168141e247c0674e25fe71e1904f1bead4d4ac784633a66cb6bed3a Oct 08 17:30:03 crc kubenswrapper[4624]: I1008 17:30:03.477944 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332410-9fkrl" event={"ID":"2a258593-7852-4437-b8d0-daac88ded225","Type":"ContainerStarted","Data":"8ecf6edec168141e247c0674e25fe71e1904f1bead4d4ac784633a66cb6bed3a"} Oct 08 17:30:04 crc kubenswrapper[4624]: I1008 17:30:04.492774 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332410-9fkrl" event={"ID":"2a258593-7852-4437-b8d0-daac88ded225","Type":"ContainerStarted","Data":"7d3f1db414df9345fee82a44d3180e46a73d61ff1bca31ddf8ec1c1c4ecfafc7"} Oct 08 17:30:04 crc kubenswrapper[4624]: I1008 17:30:04.547800 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29332410-9fkrl" podStartSLOduration=4.517423657 podStartE2EDuration="4.517423657s" podCreationTimestamp="2025-10-08 17:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 17:30:04.509063044 +0000 UTC m=+11229.659998131" watchObservedRunningTime="2025-10-08 17:30:04.517423657 +0000 UTC m=+11229.668358734" Oct 08 17:30:06 crc kubenswrapper[4624]: I1008 17:30:06.515799 4624 generic.go:334] "Generic (PLEG): container finished" podID="2a258593-7852-4437-b8d0-daac88ded225" containerID="7d3f1db414df9345fee82a44d3180e46a73d61ff1bca31ddf8ec1c1c4ecfafc7" exitCode=0 Oct 08 17:30:06 crc kubenswrapper[4624]: I1008 17:30:06.515918 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332410-9fkrl" event={"ID":"2a258593-7852-4437-b8d0-daac88ded225","Type":"ContainerDied","Data":"7d3f1db414df9345fee82a44d3180e46a73d61ff1bca31ddf8ec1c1c4ecfafc7"} Oct 08 17:30:07 crc kubenswrapper[4624]: I1008 17:30:07.988776 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332410-9fkrl" Oct 08 17:30:08 crc kubenswrapper[4624]: I1008 17:30:08.153757 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a258593-7852-4437-b8d0-daac88ded225-secret-volume\") pod \"2a258593-7852-4437-b8d0-daac88ded225\" (UID: \"2a258593-7852-4437-b8d0-daac88ded225\") " Oct 08 17:30:08 crc kubenswrapper[4624]: I1008 17:30:08.154135 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wbg4\" (UniqueName: \"kubernetes.io/projected/2a258593-7852-4437-b8d0-daac88ded225-kube-api-access-4wbg4\") pod \"2a258593-7852-4437-b8d0-daac88ded225\" (UID: \"2a258593-7852-4437-b8d0-daac88ded225\") " Oct 08 17:30:08 crc kubenswrapper[4624]: I1008 17:30:08.154216 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a258593-7852-4437-b8d0-daac88ded225-config-volume\") pod \"2a258593-7852-4437-b8d0-daac88ded225\" (UID: \"2a258593-7852-4437-b8d0-daac88ded225\") " Oct 08 17:30:08 crc kubenswrapper[4624]: I1008 17:30:08.173751 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a258593-7852-4437-b8d0-daac88ded225-config-volume" (OuterVolumeSpecName: "config-volume") pod "2a258593-7852-4437-b8d0-daac88ded225" (UID: "2a258593-7852-4437-b8d0-daac88ded225"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 17:30:08 crc kubenswrapper[4624]: I1008 17:30:08.188605 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a258593-7852-4437-b8d0-daac88ded225-kube-api-access-4wbg4" (OuterVolumeSpecName: "kube-api-access-4wbg4") pod "2a258593-7852-4437-b8d0-daac88ded225" (UID: "2a258593-7852-4437-b8d0-daac88ded225"). InnerVolumeSpecName "kube-api-access-4wbg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 17:30:08 crc kubenswrapper[4624]: I1008 17:30:08.189750 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a258593-7852-4437-b8d0-daac88ded225-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2a258593-7852-4437-b8d0-daac88ded225" (UID: "2a258593-7852-4437-b8d0-daac88ded225"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 17:30:08 crc kubenswrapper[4624]: I1008 17:30:08.257909 4624 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a258593-7852-4437-b8d0-daac88ded225-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 17:30:08 crc kubenswrapper[4624]: I1008 17:30:08.257966 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wbg4\" (UniqueName: \"kubernetes.io/projected/2a258593-7852-4437-b8d0-daac88ded225-kube-api-access-4wbg4\") on node \"crc\" DevicePath \"\"" Oct 08 17:30:08 crc kubenswrapper[4624]: I1008 17:30:08.257986 4624 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a258593-7852-4437-b8d0-daac88ded225-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 17:30:08 crc kubenswrapper[4624]: I1008 17:30:08.541819 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29332410-9fkrl" event={"ID":"2a258593-7852-4437-b8d0-daac88ded225","Type":"ContainerDied","Data":"8ecf6edec168141e247c0674e25fe71e1904f1bead4d4ac784633a66cb6bed3a"} Oct 08 17:30:08 crc kubenswrapper[4624]: I1008 17:30:08.541873 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ecf6edec168141e247c0674e25fe71e1904f1bead4d4ac784633a66cb6bed3a" Oct 08 17:30:08 crc kubenswrapper[4624]: I1008 17:30:08.541888 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29332410-9fkrl" Oct 08 17:30:09 crc kubenswrapper[4624]: I1008 17:30:09.134055 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc"] Oct 08 17:30:09 crc kubenswrapper[4624]: I1008 17:30:09.158389 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29332365-zthpc"] Oct 08 17:30:09 crc kubenswrapper[4624]: I1008 17:30:09.477910 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d275b8d-58e4-4b0a-a35f-2145c222a141" path="/var/lib/kubelet/pods/7d275b8d-58e4-4b0a-a35f-2145c222a141/volumes" Oct 08 17:30:30 crc kubenswrapper[4624]: I1008 17:30:30.077102 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 17:30:30 crc kubenswrapper[4624]: I1008 17:30:30.078819 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 17:30:30 crc kubenswrapper[4624]: I1008 17:30:30.078965 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" Oct 08 17:30:30 crc kubenswrapper[4624]: I1008 17:30:30.087983 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109"} 
pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 17:30:30 crc kubenswrapper[4624]: I1008 17:30:30.088793 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109" gracePeriod=600 Oct 08 17:30:30 crc kubenswrapper[4624]: E1008 17:30:30.726129 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:30:30 crc kubenswrapper[4624]: I1008 17:30:30.813427 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109" exitCode=0 Oct 08 17:30:30 crc kubenswrapper[4624]: I1008 17:30:30.813504 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109"} Oct 08 17:30:30 crc kubenswrapper[4624]: I1008 17:30:30.813907 4624 scope.go:117] "RemoveContainer" containerID="cdc8e4c2153c0bad978a82ddbd7fd3a6946447cc4f4f7ca7442f9de17e890a9c" Oct 08 17:30:30 crc kubenswrapper[4624]: I1008 17:30:30.814820 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109" Oct 08 17:30:30 crc kubenswrapper[4624]: E1008 17:30:30.815187 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:30:37 crc kubenswrapper[4624]: I1008 17:30:37.896395 4624 generic.go:334] "Generic (PLEG): container finished" podID="e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a" containerID="98aec9a3dd07ec358a5d19d3b3de1fc690fb791a5c98493de175fcfdd26fdf37" exitCode=0 Oct 08 17:30:37 crc kubenswrapper[4624]: I1008 17:30:37.896498 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-28zx9/must-gather-xcbd8" event={"ID":"e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a","Type":"ContainerDied","Data":"98aec9a3dd07ec358a5d19d3b3de1fc690fb791a5c98493de175fcfdd26fdf37"} Oct 08 17:30:37 crc kubenswrapper[4624]: I1008 17:30:37.897567 4624 scope.go:117] "RemoveContainer" containerID="98aec9a3dd07ec358a5d19d3b3de1fc690fb791a5c98493de175fcfdd26fdf37" Oct 08 17:30:38 crc kubenswrapper[4624]: I1008 17:30:38.103830 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-28zx9_must-gather-xcbd8_e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a/gather/0.log" Oct 08 17:30:46 crc kubenswrapper[4624]: I1008 17:30:46.466544 4624 scope.go:117] 
"RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109" Oct 08 17:30:46 crc kubenswrapper[4624]: E1008 17:30:46.467350 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:30:47 crc kubenswrapper[4624]: I1008 17:30:47.378138 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z64d9"] Oct 08 17:30:47 crc kubenswrapper[4624]: E1008 17:30:47.378788 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a258593-7852-4437-b8d0-daac88ded225" containerName="collect-profiles" Oct 08 17:30:47 crc kubenswrapper[4624]: I1008 17:30:47.378812 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a258593-7852-4437-b8d0-daac88ded225" containerName="collect-profiles" Oct 08 17:30:47 crc kubenswrapper[4624]: I1008 17:30:47.379055 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a258593-7852-4437-b8d0-daac88ded225" containerName="collect-profiles" Oct 08 17:30:47 crc kubenswrapper[4624]: I1008 17:30:47.389366 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z64d9" Oct 08 17:30:47 crc kubenswrapper[4624]: I1008 17:30:47.398805 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z64d9"] Oct 08 17:30:47 crc kubenswrapper[4624]: I1008 17:30:47.511172 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e389ffea-86ab-409b-9382-927c4a6b6236-catalog-content\") pod \"redhat-operators-z64d9\" (UID: \"e389ffea-86ab-409b-9382-927c4a6b6236\") " pod="openshift-marketplace/redhat-operators-z64d9" Oct 08 17:30:47 crc kubenswrapper[4624]: I1008 17:30:47.511522 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxfm9\" (UniqueName: \"kubernetes.io/projected/e389ffea-86ab-409b-9382-927c4a6b6236-kube-api-access-cxfm9\") pod \"redhat-operators-z64d9\" (UID: \"e389ffea-86ab-409b-9382-927c4a6b6236\") " pod="openshift-marketplace/redhat-operators-z64d9" Oct 08 17:30:47 crc kubenswrapper[4624]: I1008 17:30:47.512387 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e389ffea-86ab-409b-9382-927c4a6b6236-utilities\") pod \"redhat-operators-z64d9\" (UID: \"e389ffea-86ab-409b-9382-927c4a6b6236\") " pod="openshift-marketplace/redhat-operators-z64d9" Oct 08 17:30:47 crc kubenswrapper[4624]: I1008 17:30:47.615427 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e389ffea-86ab-409b-9382-927c4a6b6236-catalog-content\") pod \"redhat-operators-z64d9\" (UID: \"e389ffea-86ab-409b-9382-927c4a6b6236\") " pod="openshift-marketplace/redhat-operators-z64d9" Oct 08 17:30:47 crc kubenswrapper[4624]: I1008 17:30:47.615484 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxfm9\" (UniqueName: 
\"kubernetes.io/projected/e389ffea-86ab-409b-9382-927c4a6b6236-kube-api-access-cxfm9\") pod \"redhat-operators-z64d9\" (UID: \"e389ffea-86ab-409b-9382-927c4a6b6236\") " pod="openshift-marketplace/redhat-operators-z64d9" Oct 08 17:30:47 crc kubenswrapper[4624]: I1008 17:30:47.615554 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e389ffea-86ab-409b-9382-927c4a6b6236-utilities\") pod \"redhat-operators-z64d9\" (UID: \"e389ffea-86ab-409b-9382-927c4a6b6236\") " pod="openshift-marketplace/redhat-operators-z64d9" Oct 08 17:30:47 crc kubenswrapper[4624]: I1008 17:30:47.616273 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e389ffea-86ab-409b-9382-927c4a6b6236-catalog-content\") pod \"redhat-operators-z64d9\" (UID: \"e389ffea-86ab-409b-9382-927c4a6b6236\") " pod="openshift-marketplace/redhat-operators-z64d9" Oct 08 17:30:47 crc kubenswrapper[4624]: I1008 17:30:47.616873 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e389ffea-86ab-409b-9382-927c4a6b6236-utilities\") pod \"redhat-operators-z64d9\" (UID: \"e389ffea-86ab-409b-9382-927c4a6b6236\") " pod="openshift-marketplace/redhat-operators-z64d9" Oct 08 17:30:47 crc kubenswrapper[4624]: I1008 17:30:47.643749 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxfm9\" (UniqueName: \"kubernetes.io/projected/e389ffea-86ab-409b-9382-927c4a6b6236-kube-api-access-cxfm9\") pod \"redhat-operators-z64d9\" (UID: \"e389ffea-86ab-409b-9382-927c4a6b6236\") " pod="openshift-marketplace/redhat-operators-z64d9" Oct 08 17:30:47 crc kubenswrapper[4624]: I1008 17:30:47.720465 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z64d9" Oct 08 17:30:48 crc kubenswrapper[4624]: I1008 17:30:48.264179 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z64d9"] Oct 08 17:30:49 crc kubenswrapper[4624]: I1008 17:30:49.011045 4624 generic.go:334] "Generic (PLEG): container finished" podID="e389ffea-86ab-409b-9382-927c4a6b6236" containerID="0a84fb3ce620ff92d008547aad853ae080d54ae2e0bd376394aa29b068405c92" exitCode=0 Oct 08 17:30:49 crc kubenswrapper[4624]: I1008 17:30:49.011250 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z64d9" event={"ID":"e389ffea-86ab-409b-9382-927c4a6b6236","Type":"ContainerDied","Data":"0a84fb3ce620ff92d008547aad853ae080d54ae2e0bd376394aa29b068405c92"} Oct 08 17:30:49 crc kubenswrapper[4624]: I1008 17:30:49.011411 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z64d9" event={"ID":"e389ffea-86ab-409b-9382-927c4a6b6236","Type":"ContainerStarted","Data":"12322eae7c674f2d6c3cb197907a8186e935c975871e76031e361527bf184e91"} Oct 08 17:30:49 crc kubenswrapper[4624]: I1008 17:30:49.025612 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 17:30:49 crc kubenswrapper[4624]: I1008 17:30:49.132926 4624 scope.go:117] "RemoveContainer" containerID="005e428b7a0ec765d316e01804ded238a5fda1de952d7ae5d0ab9578e93f1983" Oct 08 17:30:51 crc kubenswrapper[4624]: I1008 17:30:51.903186 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-28zx9/must-gather-xcbd8"] Oct 08 17:30:51 crc kubenswrapper[4624]: I1008 17:30:51.914368 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-28zx9/must-gather-xcbd8"] Oct 08 17:30:51 crc kubenswrapper[4624]: I1008 17:30:51.914839 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-28zx9/must-gather-xcbd8" podUID="e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a" containerName="copy" containerID="cri-o://400e3a028906a63e65a9968fcc26a571c6955c89305f223c2085f0049f6af394" gracePeriod=2 Oct 08 17:30:52 crc kubenswrapper[4624]: I1008 17:30:52.043907 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z64d9" event={"ID":"e389ffea-86ab-409b-9382-927c4a6b6236","Type":"ContainerStarted","Data":"1bf6adbc2d457e5652aa81386021ebb012c40acec282d84889bb710434ee091f"} Oct 08 17:30:52 crc kubenswrapper[4624]: I1008 17:30:52.473529 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-28zx9_must-gather-xcbd8_e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a/copy/0.log" Oct 08 17:30:52 crc kubenswrapper[4624]: I1008 17:30:52.475032 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-28zx9/must-gather-xcbd8" Oct 08 17:30:52 crc kubenswrapper[4624]: I1008 17:30:52.628412 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a-must-gather-output\") pod \"e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a\" (UID: \"e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a\") " Oct 08 17:30:52 crc kubenswrapper[4624]: I1008 17:30:52.628454 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbp99\" (UniqueName: \"kubernetes.io/projected/e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a-kube-api-access-tbp99\") pod \"e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a\" (UID: \"e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a\") " Oct 08 17:30:52 crc kubenswrapper[4624]: I1008 17:30:52.637429 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a-kube-api-access-tbp99" (OuterVolumeSpecName: "kube-api-access-tbp99") pod "e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a" (UID: "e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a"). InnerVolumeSpecName "kube-api-access-tbp99". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 17:30:52 crc kubenswrapper[4624]: I1008 17:30:52.765055 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbp99\" (UniqueName: \"kubernetes.io/projected/e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a-kube-api-access-tbp99\") on node \"crc\" DevicePath \"\"" Oct 08 17:30:53 crc kubenswrapper[4624]: I1008 17:30:53.067910 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-28zx9_must-gather-xcbd8_e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a/copy/0.log" Oct 08 17:30:53 crc kubenswrapper[4624]: I1008 17:30:53.068279 4624 generic.go:334] "Generic (PLEG): container finished" podID="e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a" containerID="400e3a028906a63e65a9968fcc26a571c6955c89305f223c2085f0049f6af394" exitCode=143 Oct 08 17:30:53 crc kubenswrapper[4624]: I1008 17:30:53.069123 4624 scope.go:117] "RemoveContainer" containerID="400e3a028906a63e65a9968fcc26a571c6955c89305f223c2085f0049f6af394" Oct 08 17:30:53 crc kubenswrapper[4624]: I1008 17:30:53.069129 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-28zx9/must-gather-xcbd8" Oct 08 17:30:53 crc kubenswrapper[4624]: I1008 17:30:53.104848 4624 scope.go:117] "RemoveContainer" containerID="98aec9a3dd07ec358a5d19d3b3de1fc690fb791a5c98493de175fcfdd26fdf37" Oct 08 17:30:53 crc kubenswrapper[4624]: I1008 17:30:53.118131 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a" (UID: "e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:30:53 crc kubenswrapper[4624]: I1008 17:30:53.172736 4624 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a-must-gather-output\") on node \"crc\" DevicePath \"\"" Oct 08 17:30:53 crc kubenswrapper[4624]: I1008 17:30:53.216182 4624 scope.go:117] "RemoveContainer" containerID="400e3a028906a63e65a9968fcc26a571c6955c89305f223c2085f0049f6af394" Oct 08 17:30:53 crc kubenswrapper[4624]: E1008 17:30:53.217481 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"400e3a028906a63e65a9968fcc26a571c6955c89305f223c2085f0049f6af394\": container with ID starting with 400e3a028906a63e65a9968fcc26a571c6955c89305f223c2085f0049f6af394 not found: ID does not exist" containerID="400e3a028906a63e65a9968fcc26a571c6955c89305f223c2085f0049f6af394" Oct 08 17:30:53 crc kubenswrapper[4624]: I1008 17:30:53.217514 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"400e3a028906a63e65a9968fcc26a571c6955c89305f223c2085f0049f6af394"} err="failed to get container status \"400e3a028906a63e65a9968fcc26a571c6955c89305f223c2085f0049f6af394\": rpc error: code = NotFound desc = could not find container \"400e3a028906a63e65a9968fcc26a571c6955c89305f223c2085f0049f6af394\": container with ID starting with 400e3a028906a63e65a9968fcc26a571c6955c89305f223c2085f0049f6af394 not found: ID does not exist" Oct 08 17:30:53 crc kubenswrapper[4624]: I1008 17:30:53.217538 4624 scope.go:117] "RemoveContainer" containerID="98aec9a3dd07ec358a5d19d3b3de1fc690fb791a5c98493de175fcfdd26fdf37" Oct 08 17:30:53 crc kubenswrapper[4624]: E1008 17:30:53.219202 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98aec9a3dd07ec358a5d19d3b3de1fc690fb791a5c98493de175fcfdd26fdf37\": container with ID starting with 98aec9a3dd07ec358a5d19d3b3de1fc690fb791a5c98493de175fcfdd26fdf37 not found: ID does not exist" containerID="98aec9a3dd07ec358a5d19d3b3de1fc690fb791a5c98493de175fcfdd26fdf37" Oct 08 17:30:53 crc kubenswrapper[4624]: I1008 17:30:53.219246 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98aec9a3dd07ec358a5d19d3b3de1fc690fb791a5c98493de175fcfdd26fdf37"} err="failed to get container status \"98aec9a3dd07ec358a5d19d3b3de1fc690fb791a5c98493de175fcfdd26fdf37\": rpc error: code = NotFound desc = could not find container \"98aec9a3dd07ec358a5d19d3b3de1fc690fb791a5c98493de175fcfdd26fdf37\": container with ID starting with 98aec9a3dd07ec358a5d19d3b3de1fc690fb791a5c98493de175fcfdd26fdf37 not found: ID does not exist" Oct 08 17:30:53 crc kubenswrapper[4624]: I1008 17:30:53.477508 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a" path="/var/lib/kubelet/pods/e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a/volumes" Oct 08 17:30:59 crc kubenswrapper[4624]: I1008 17:30:59.466442 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109" Oct 08 17:30:59 crc kubenswrapper[4624]: E1008 17:30:59.467339 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:31:01 crc kubenswrapper[4624]: I1008 17:31:01.156548 4624 generic.go:334] "Generic (PLEG): container finished" podID="e389ffea-86ab-409b-9382-927c4a6b6236" containerID="1bf6adbc2d457e5652aa81386021ebb012c40acec282d84889bb710434ee091f" exitCode=0 Oct 08 17:31:01 crc kubenswrapper[4624]: I1008 17:31:01.156595 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z64d9" event={"ID":"e389ffea-86ab-409b-9382-927c4a6b6236","Type":"ContainerDied","Data":"1bf6adbc2d457e5652aa81386021ebb012c40acec282d84889bb710434ee091f"} Oct 08 17:31:02 crc kubenswrapper[4624]: I1008 17:31:02.169571 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z64d9" event={"ID":"e389ffea-86ab-409b-9382-927c4a6b6236","Type":"ContainerStarted","Data":"1e70fb889b8a19fbab48fa07e0cb2415885295c043d135ace1f9465aedfd9c36"} Oct 08 17:31:02 crc kubenswrapper[4624]: I1008 17:31:02.195777 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z64d9" podStartSLOduration=2.573950052 podStartE2EDuration="15.195749522s" podCreationTimestamp="2025-10-08 17:30:47 +0000 UTC" firstStartedPulling="2025-10-08 17:30:49.013267795 +0000 UTC m=+11274.164202872" lastFinishedPulling="2025-10-08 17:31:01.635067265 +0000 UTC m=+11286.786002342" observedRunningTime="2025-10-08 17:31:02.189732218 +0000 UTC m=+11287.340667315" watchObservedRunningTime="2025-10-08 17:31:02.195749522 +0000 UTC m=+11287.346684599" Oct 08 17:31:07 crc kubenswrapper[4624]: I1008 17:31:07.721432 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z64d9" Oct 08 17:31:07 crc kubenswrapper[4624]: I1008 17:31:07.723241 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z64d9" Oct 08 17:31:08 crc kubenswrapper[4624]: I1008 17:31:08.779070 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z64d9" podUID="e389ffea-86ab-409b-9382-927c4a6b6236" containerName="registry-server" probeResult="failure" output=< Oct 08 17:31:08 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 17:31:08 crc kubenswrapper[4624]: > Oct 08 17:31:14 crc kubenswrapper[4624]: I1008 17:31:14.465294 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109" Oct 08 17:31:14 crc kubenswrapper[4624]: E1008 17:31:14.466033 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:31:18 crc kubenswrapper[4624]: I1008 17:31:18.771687 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z64d9" podUID="e389ffea-86ab-409b-9382-927c4a6b6236" containerName="registry-server" probeResult="failure" output=< Oct 08 17:31:18 crc kubenswrapper[4624]: timeout: failed to 
connect service ":50051" within 1s Oct 08 17:31:18 crc kubenswrapper[4624]: > Oct 08 17:31:25 crc kubenswrapper[4624]: I1008 17:31:25.474707 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109" Oct 08 17:31:25 crc kubenswrapper[4624]: E1008 17:31:25.475536 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:31:28 crc kubenswrapper[4624]: I1008 17:31:28.790953 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z64d9" podUID="e389ffea-86ab-409b-9382-927c4a6b6236" containerName="registry-server" probeResult="failure" output=< Oct 08 17:31:28 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 17:31:28 crc kubenswrapper[4624]: > Oct 08 17:31:35 crc kubenswrapper[4624]: I1008 17:31:35.417987 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l49pj/must-gather-d5xhc"] Oct 08 17:31:35 crc kubenswrapper[4624]: E1008 17:31:35.419219 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a" containerName="gather" Oct 08 17:31:35 crc kubenswrapper[4624]: I1008 17:31:35.419243 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a" containerName="gather" Oct 08 17:31:35 crc kubenswrapper[4624]: E1008 17:31:35.419276 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a" containerName="copy" Oct 08 17:31:35 crc kubenswrapper[4624]: I1008 17:31:35.419288 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a" containerName="copy" Oct 08 17:31:35 crc kubenswrapper[4624]: I1008 17:31:35.419534 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a" containerName="copy" Oct 08 17:31:35 crc kubenswrapper[4624]: I1008 17:31:35.419572 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0ffa1b8-c6b7-4ae3-abc6-ce64c3223e6a" containerName="gather" Oct 08 17:31:35 crc kubenswrapper[4624]: I1008 17:31:35.423098 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l49pj/must-gather-d5xhc" Oct 08 17:31:35 crc kubenswrapper[4624]: I1008 17:31:35.427690 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-l49pj"/"kube-root-ca.crt" Oct 08 17:31:35 crc kubenswrapper[4624]: I1008 17:31:35.430718 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-l49pj"/"openshift-service-ca.crt" Oct 08 17:31:35 crc kubenswrapper[4624]: I1008 17:31:35.447478 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-l49pj/must-gather-d5xhc"] Oct 08 17:31:35 crc kubenswrapper[4624]: I1008 17:31:35.555372 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf84b\" (UniqueName: \"kubernetes.io/projected/77627d4f-bfc2-4c1e-a2cf-f4e4dae418de-kube-api-access-pf84b\") pod \"must-gather-d5xhc\" (UID: \"77627d4f-bfc2-4c1e-a2cf-f4e4dae418de\") " pod="openshift-must-gather-l49pj/must-gather-d5xhc" Oct 08 17:31:35 crc kubenswrapper[4624]: I1008 17:31:35.555691 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/77627d4f-bfc2-4c1e-a2cf-f4e4dae418de-must-gather-output\") pod \"must-gather-d5xhc\" (UID: \"77627d4f-bfc2-4c1e-a2cf-f4e4dae418de\") " pod="openshift-must-gather-l49pj/must-gather-d5xhc" Oct 08 17:31:35 crc kubenswrapper[4624]: I1008 17:31:35.657189 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf84b\" (UniqueName: \"kubernetes.io/projected/77627d4f-bfc2-4c1e-a2cf-f4e4dae418de-kube-api-access-pf84b\") pod \"must-gather-d5xhc\" (UID: \"77627d4f-bfc2-4c1e-a2cf-f4e4dae418de\") " pod="openshift-must-gather-l49pj/must-gather-d5xhc" Oct 08 17:31:35 crc kubenswrapper[4624]: I1008 17:31:35.657247 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/77627d4f-bfc2-4c1e-a2cf-f4e4dae418de-must-gather-output\") pod \"must-gather-d5xhc\" (UID: \"77627d4f-bfc2-4c1e-a2cf-f4e4dae418de\") " pod="openshift-must-gather-l49pj/must-gather-d5xhc" Oct 08 17:31:35 crc kubenswrapper[4624]: I1008 17:31:35.658480 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/77627d4f-bfc2-4c1e-a2cf-f4e4dae418de-must-gather-output\") pod \"must-gather-d5xhc\" (UID: \"77627d4f-bfc2-4c1e-a2cf-f4e4dae418de\") " pod="openshift-must-gather-l49pj/must-gather-d5xhc" Oct 08 17:31:35 crc kubenswrapper[4624]: I1008 17:31:35.714880 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf84b\" (UniqueName: \"kubernetes.io/projected/77627d4f-bfc2-4c1e-a2cf-f4e4dae418de-kube-api-access-pf84b\") pod \"must-gather-d5xhc\" (UID: \"77627d4f-bfc2-4c1e-a2cf-f4e4dae418de\") " pod="openshift-must-gather-l49pj/must-gather-d5xhc" Oct 08 17:31:35 crc kubenswrapper[4624]: I1008 17:31:35.747464 4624 util.go:30] "No sandbox for pod can be found. 
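Annotation: the cpu_manager/memory_manager "RemoveStaleState" and "Deleted CPUSet assignment" entries above show the kubelet pruning resource-manager bookkeeping for containers of the deleted must-gather pod (e0ffa1b8-...) when a new pod is admitted. A purely schematic sketch of that pruning step (names and types invented for the example; truncated pod UIDs are placeholders, not the kubelet's internal API):

    package main

    import "fmt"

    // key identifies one container's assignment, keyed by pod UID and
    // container name, as in the log entries above.
    type key struct{ podUID, container string }

    // removeStaleState drops assignments whose pod is no longer active.
    func removeStaleState(assignments map[key][]int, activePods map[string]bool) {
    	for k := range assignments {
    		if !activePods[k.podUID] {
    			fmt.Printf("removing stale assignment for %s/%s\n", k.podUID, k.container)
    			delete(assignments, k)
    		}
    	}
    }

    func main() {
    	assignments := map[key][]int{
    		{podUID: "e0ffa1b8", container: "gather"}: {2, 3}, // CPUs once pinned for a deleted pod
    		{podUID: "e0ffa1b8", container: "copy"}:   {4},
    	}
    	removeStaleState(assignments, map[string]bool{"77627d4f": true})
    	fmt.Println("assignments left:", len(assignments))
    }

Pruning at admission time keeps the managers' state files consistent with the set of pods actually present on the node.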
Oct 08 17:31:36 crc kubenswrapper[4624]: I1008 17:31:36.623433 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-l49pj/must-gather-d5xhc"]
Oct 08 17:31:37 crc kubenswrapper[4624]: I1008 17:31:37.526598 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l49pj/must-gather-d5xhc" event={"ID":"77627d4f-bfc2-4c1e-a2cf-f4e4dae418de","Type":"ContainerStarted","Data":"9f03637936e0bc5c01c76d3886fd881662bfe7c67e4ed0a1c70425998548b74f"}
Oct 08 17:31:37 crc kubenswrapper[4624]: I1008 17:31:37.527165 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l49pj/must-gather-d5xhc" event={"ID":"77627d4f-bfc2-4c1e-a2cf-f4e4dae418de","Type":"ContainerStarted","Data":"21bb2f7afe03dfdd4fbf87396e474580de2767eccd3c624902c0d072dbcccddb"}
Oct 08 17:31:37 crc kubenswrapper[4624]: I1008 17:31:37.527195 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l49pj/must-gather-d5xhc" event={"ID":"77627d4f-bfc2-4c1e-a2cf-f4e4dae418de","Type":"ContainerStarted","Data":"73a140a07d8ccbe80d120116957d7689feb6951a0c226ac17b6eecdded7f26cf"}
Oct 08 17:31:37 crc kubenswrapper[4624]: I1008 17:31:37.550805 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-l49pj/must-gather-d5xhc" podStartSLOduration=2.550784365 podStartE2EDuration="2.550784365s" podCreationTimestamp="2025-10-08 17:31:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 17:31:37.54591285 +0000 UTC m=+11322.696847957" watchObservedRunningTime="2025-10-08 17:31:37.550784365 +0000 UTC m=+11322.701719442"
Oct 08 17:31:38 crc kubenswrapper[4624]: I1008 17:31:38.794380 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z64d9" podUID="e389ffea-86ab-409b-9382-927c4a6b6236" containerName="registry-server" probeResult="failure" output=<
Oct 08 17:31:38 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 17:31:38 crc kubenswrapper[4624]: >
Oct 08 17:31:39 crc kubenswrapper[4624]: I1008 17:31:39.466752 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109"
Oct 08 17:31:39 crc kubenswrapper[4624]: E1008 17:31:39.467388 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 17:31:43 crc kubenswrapper[4624]: E1008 17:31:43.277320 4624 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.154:38014->38.102.83.154:39627: write tcp 38.102.83.154:38014->38.102.83.154:39627: write: connection reset by peer
Oct 08 17:31:44 crc kubenswrapper[4624]: I1008 17:31:44.276584 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l49pj/crc-debug-4wlqq"]
Oct 08 17:31:44 crc kubenswrapper[4624]: I1008 17:31:44.280482 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l49pj/crc-debug-4wlqq"
Oct 08 17:31:44 crc kubenswrapper[4624]: I1008 17:31:44.289807 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-l49pj"/"default-dockercfg-vm864"
Oct 08 17:31:44 crc kubenswrapper[4624]: I1008 17:31:44.349262 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h476v\" (UniqueName: \"kubernetes.io/projected/d93b1a54-a6d2-435c-8cd2-dbc2125f7a89-kube-api-access-h476v\") pod \"crc-debug-4wlqq\" (UID: \"d93b1a54-a6d2-435c-8cd2-dbc2125f7a89\") " pod="openshift-must-gather-l49pj/crc-debug-4wlqq"
Oct 08 17:31:44 crc kubenswrapper[4624]: I1008 17:31:44.349579 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d93b1a54-a6d2-435c-8cd2-dbc2125f7a89-host\") pod \"crc-debug-4wlqq\" (UID: \"d93b1a54-a6d2-435c-8cd2-dbc2125f7a89\") " pod="openshift-must-gather-l49pj/crc-debug-4wlqq"
Oct 08 17:31:44 crc kubenswrapper[4624]: I1008 17:31:44.451389 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h476v\" (UniqueName: \"kubernetes.io/projected/d93b1a54-a6d2-435c-8cd2-dbc2125f7a89-kube-api-access-h476v\") pod \"crc-debug-4wlqq\" (UID: \"d93b1a54-a6d2-435c-8cd2-dbc2125f7a89\") " pod="openshift-must-gather-l49pj/crc-debug-4wlqq"
Oct 08 17:31:44 crc kubenswrapper[4624]: I1008 17:31:44.451513 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d93b1a54-a6d2-435c-8cd2-dbc2125f7a89-host\") pod \"crc-debug-4wlqq\" (UID: \"d93b1a54-a6d2-435c-8cd2-dbc2125f7a89\") " pod="openshift-must-gather-l49pj/crc-debug-4wlqq"
Oct 08 17:31:44 crc kubenswrapper[4624]: I1008 17:31:44.455235 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d93b1a54-a6d2-435c-8cd2-dbc2125f7a89-host\") pod \"crc-debug-4wlqq\" (UID: \"d93b1a54-a6d2-435c-8cd2-dbc2125f7a89\") " pod="openshift-must-gather-l49pj/crc-debug-4wlqq"
Oct 08 17:31:44 crc kubenswrapper[4624]: I1008 17:31:44.498581 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h476v\" (UniqueName: \"kubernetes.io/projected/d93b1a54-a6d2-435c-8cd2-dbc2125f7a89-kube-api-access-h476v\") pod \"crc-debug-4wlqq\" (UID: \"d93b1a54-a6d2-435c-8cd2-dbc2125f7a89\") " pod="openshift-must-gather-l49pj/crc-debug-4wlqq"
Oct 08 17:31:44 crc kubenswrapper[4624]: I1008 17:31:44.601445 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l49pj/crc-debug-4wlqq"
Oct 08 17:31:45 crc kubenswrapper[4624]: I1008 17:31:45.620295 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l49pj/crc-debug-4wlqq" event={"ID":"d93b1a54-a6d2-435c-8cd2-dbc2125f7a89","Type":"ContainerStarted","Data":"9ddb6a9fd38bf8bfc94f0094f00dec9d8011e0f86521f43b0556085d15d118c9"}
Oct 08 17:31:45 crc kubenswrapper[4624]: I1008 17:31:45.620823 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l49pj/crc-debug-4wlqq" event={"ID":"d93b1a54-a6d2-435c-8cd2-dbc2125f7a89","Type":"ContainerStarted","Data":"32c33bacd640fb939805845cd0dcf6cfffb40fbdfb10bc78234ddc2b2177ecfd"}
Oct 08 17:31:45 crc kubenswrapper[4624]: I1008 17:31:45.649216 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-l49pj/crc-debug-4wlqq" podStartSLOduration=1.6491940010000001 podStartE2EDuration="1.649194001s" podCreationTimestamp="2025-10-08 17:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 17:31:45.63427466 +0000 UTC m=+11330.785209747" watchObservedRunningTime="2025-10-08 17:31:45.649194001 +0000 UTC m=+11330.800129068"
Oct 08 17:31:47 crc kubenswrapper[4624]: I1008 17:31:47.798517 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z64d9"
Oct 08 17:31:47 crc kubenswrapper[4624]: I1008 17:31:47.860180 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z64d9"
Oct 08 17:31:48 crc kubenswrapper[4624]: I1008 17:31:48.623952 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z64d9"]
Oct 08 17:31:49 crc kubenswrapper[4624]: I1008 17:31:49.244688 4624 scope.go:117] "RemoveContainer" containerID="75e5d04359884ff89bd4674718726410b96cc7c69035787d1a4b88d35efe8eed"
Oct 08 17:31:49 crc kubenswrapper[4624]: I1008 17:31:49.670084 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z64d9" podUID="e389ffea-86ab-409b-9382-927c4a6b6236" containerName="registry-server" containerID="cri-o://1e70fb889b8a19fbab48fa07e0cb2415885295c043d135ace1f9465aedfd9c36" gracePeriod=2
Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.583191 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z64d9"
Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.686152 4624 generic.go:334] "Generic (PLEG): container finished" podID="e389ffea-86ab-409b-9382-927c4a6b6236" containerID="1e70fb889b8a19fbab48fa07e0cb2415885295c043d135ace1f9465aedfd9c36" exitCode=0
Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.686489 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z64d9" event={"ID":"e389ffea-86ab-409b-9382-927c4a6b6236","Type":"ContainerDied","Data":"1e70fb889b8a19fbab48fa07e0cb2415885295c043d135ace1f9465aedfd9c36"}
Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.686581 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z64d9" event={"ID":"e389ffea-86ab-409b-9382-927c4a6b6236","Type":"ContainerDied","Data":"12322eae7c674f2d6c3cb197907a8186e935c975871e76031e361527bf184e91"}
Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.686618 4624 scope.go:117] "RemoveContainer" containerID="1e70fb889b8a19fbab48fa07e0cb2415885295c043d135ace1f9465aedfd9c36"
Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.686626 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z64d9"
Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.687686 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxfm9\" (UniqueName: \"kubernetes.io/projected/e389ffea-86ab-409b-9382-927c4a6b6236-kube-api-access-cxfm9\") pod \"e389ffea-86ab-409b-9382-927c4a6b6236\" (UID: \"e389ffea-86ab-409b-9382-927c4a6b6236\") "
Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.687890 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e389ffea-86ab-409b-9382-927c4a6b6236-utilities\") pod \"e389ffea-86ab-409b-9382-927c4a6b6236\" (UID: \"e389ffea-86ab-409b-9382-927c4a6b6236\") "
Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.688033 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e389ffea-86ab-409b-9382-927c4a6b6236-catalog-content\") pod \"e389ffea-86ab-409b-9382-927c4a6b6236\" (UID: \"e389ffea-86ab-409b-9382-927c4a6b6236\") "
Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.690779 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e389ffea-86ab-409b-9382-927c4a6b6236-utilities" (OuterVolumeSpecName: "utilities") pod "e389ffea-86ab-409b-9382-927c4a6b6236" (UID: "e389ffea-86ab-409b-9382-927c4a6b6236"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.716178 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e389ffea-86ab-409b-9382-927c4a6b6236-kube-api-access-cxfm9" (OuterVolumeSpecName: "kube-api-access-cxfm9") pod "e389ffea-86ab-409b-9382-927c4a6b6236" (UID: "e389ffea-86ab-409b-9382-927c4a6b6236"). InnerVolumeSpecName "kube-api-access-cxfm9". PluginName "kubernetes.io/projected", VolumeGidValue ""
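Annotation: the paired "ContainerStatus from runtime service failed ... NotFound" and "DeleteContainer returned error" entries at 17:30:53 above, and again for the redhat-operators containers just below, record a benign race: by the time the kubelet re-checks, CRI-O has already removed the container. A client absorbing that race would treat gRPC NotFound as success; a hedged sketch (the remove callback is a stand-in for a real CRI RemoveContainer call, not the kubelet's code):

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeIgnoringNotFound treats "already gone" as success, the usual
    // way to absorb the delete/GC race visible in the log above.
    func removeIgnoringNotFound(remove func(id string) error, id string) error {
    	if err := remove(id); err != nil && status.Code(err) != codes.NotFound {
    		return err
    	}
    	return nil
    }

    func main() {
    	alreadyGone := func(id string) error {
    		return status.Error(codes.NotFound, "could not find container "+id)
    	}
    	if err := removeIgnoringNotFound(alreadyGone, "400e3a02"); err != nil {
    		fmt.Println("unexpected error:", err)
    		return
    	}
    	fmt.Println("NotFound treated as already-removed: success")
    }

The kubelet instead logs the NotFound result and moves on, which is why these entries appear as errors yet the pod teardown still completes normally.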
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.741467 4624 scope.go:117] "RemoveContainer" containerID="1bf6adbc2d457e5652aa81386021ebb012c40acec282d84889bb710434ee091f" Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.786821 4624 scope.go:117] "RemoveContainer" containerID="0a84fb3ce620ff92d008547aad853ae080d54ae2e0bd376394aa29b068405c92" Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.791747 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e389ffea-86ab-409b-9382-927c4a6b6236-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.791780 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxfm9\" (UniqueName: \"kubernetes.io/projected/e389ffea-86ab-409b-9382-927c4a6b6236-kube-api-access-cxfm9\") on node \"crc\" DevicePath \"\"" Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.820513 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e389ffea-86ab-409b-9382-927c4a6b6236-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e389ffea-86ab-409b-9382-927c4a6b6236" (UID: "e389ffea-86ab-409b-9382-927c4a6b6236"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.833867 4624 scope.go:117] "RemoveContainer" containerID="1e70fb889b8a19fbab48fa07e0cb2415885295c043d135ace1f9465aedfd9c36" Oct 08 17:31:50 crc kubenswrapper[4624]: E1008 17:31:50.835661 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e70fb889b8a19fbab48fa07e0cb2415885295c043d135ace1f9465aedfd9c36\": container with ID starting with 1e70fb889b8a19fbab48fa07e0cb2415885295c043d135ace1f9465aedfd9c36 not found: ID does not exist" containerID="1e70fb889b8a19fbab48fa07e0cb2415885295c043d135ace1f9465aedfd9c36" Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.835705 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e70fb889b8a19fbab48fa07e0cb2415885295c043d135ace1f9465aedfd9c36"} err="failed to get container status \"1e70fb889b8a19fbab48fa07e0cb2415885295c043d135ace1f9465aedfd9c36\": rpc error: code = NotFound desc = could not find container \"1e70fb889b8a19fbab48fa07e0cb2415885295c043d135ace1f9465aedfd9c36\": container with ID starting with 1e70fb889b8a19fbab48fa07e0cb2415885295c043d135ace1f9465aedfd9c36 not found: ID does not exist" Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.835727 4624 scope.go:117] "RemoveContainer" containerID="1bf6adbc2d457e5652aa81386021ebb012c40acec282d84889bb710434ee091f" Oct 08 17:31:50 crc kubenswrapper[4624]: E1008 17:31:50.835990 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bf6adbc2d457e5652aa81386021ebb012c40acec282d84889bb710434ee091f\": container with ID starting with 1bf6adbc2d457e5652aa81386021ebb012c40acec282d84889bb710434ee091f not found: ID does not exist" containerID="1bf6adbc2d457e5652aa81386021ebb012c40acec282d84889bb710434ee091f" Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.836008 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bf6adbc2d457e5652aa81386021ebb012c40acec282d84889bb710434ee091f"} err="failed to get container status 
\"1bf6adbc2d457e5652aa81386021ebb012c40acec282d84889bb710434ee091f\": rpc error: code = NotFound desc = could not find container \"1bf6adbc2d457e5652aa81386021ebb012c40acec282d84889bb710434ee091f\": container with ID starting with 1bf6adbc2d457e5652aa81386021ebb012c40acec282d84889bb710434ee091f not found: ID does not exist" Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.836020 4624 scope.go:117] "RemoveContainer" containerID="0a84fb3ce620ff92d008547aad853ae080d54ae2e0bd376394aa29b068405c92" Oct 08 17:31:50 crc kubenswrapper[4624]: E1008 17:31:50.836395 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a84fb3ce620ff92d008547aad853ae080d54ae2e0bd376394aa29b068405c92\": container with ID starting with 0a84fb3ce620ff92d008547aad853ae080d54ae2e0bd376394aa29b068405c92 not found: ID does not exist" containerID="0a84fb3ce620ff92d008547aad853ae080d54ae2e0bd376394aa29b068405c92" Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.836423 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a84fb3ce620ff92d008547aad853ae080d54ae2e0bd376394aa29b068405c92"} err="failed to get container status \"0a84fb3ce620ff92d008547aad853ae080d54ae2e0bd376394aa29b068405c92\": rpc error: code = NotFound desc = could not find container \"0a84fb3ce620ff92d008547aad853ae080d54ae2e0bd376394aa29b068405c92\": container with ID starting with 0a84fb3ce620ff92d008547aad853ae080d54ae2e0bd376394aa29b068405c92 not found: ID does not exist" Oct 08 17:31:50 crc kubenswrapper[4624]: I1008 17:31:50.893796 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e389ffea-86ab-409b-9382-927c4a6b6236-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 17:31:51 crc kubenswrapper[4624]: I1008 17:31:51.023797 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z64d9"] Oct 08 17:31:51 crc kubenswrapper[4624]: I1008 17:31:51.032375 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z64d9"] Oct 08 17:31:51 crc kubenswrapper[4624]: I1008 17:31:51.478151 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e389ffea-86ab-409b-9382-927c4a6b6236" path="/var/lib/kubelet/pods/e389ffea-86ab-409b-9382-927c4a6b6236/volumes" Oct 08 17:31:52 crc kubenswrapper[4624]: I1008 17:31:52.465991 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109" Oct 08 17:31:52 crc kubenswrapper[4624]: E1008 17:31:52.466313 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:32:00 crc kubenswrapper[4624]: I1008 17:32:00.508268 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q2nnn"] Oct 08 17:32:00 crc kubenswrapper[4624]: E1008 17:32:00.509404 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e389ffea-86ab-409b-9382-927c4a6b6236" containerName="extract-content" Oct 08 17:32:00 crc kubenswrapper[4624]: I1008 17:32:00.509422 4624 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e389ffea-86ab-409b-9382-927c4a6b6236" containerName="extract-content" Oct 08 17:32:00 crc kubenswrapper[4624]: E1008 17:32:00.509498 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e389ffea-86ab-409b-9382-927c4a6b6236" containerName="extract-utilities" Oct 08 17:32:00 crc kubenswrapper[4624]: I1008 17:32:00.509509 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e389ffea-86ab-409b-9382-927c4a6b6236" containerName="extract-utilities" Oct 08 17:32:00 crc kubenswrapper[4624]: E1008 17:32:00.509538 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e389ffea-86ab-409b-9382-927c4a6b6236" containerName="registry-server" Oct 08 17:32:00 crc kubenswrapper[4624]: I1008 17:32:00.509546 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e389ffea-86ab-409b-9382-927c4a6b6236" containerName="registry-server" Oct 08 17:32:00 crc kubenswrapper[4624]: I1008 17:32:00.509813 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e389ffea-86ab-409b-9382-927c4a6b6236" containerName="registry-server" Oct 08 17:32:00 crc kubenswrapper[4624]: I1008 17:32:00.511885 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q2nnn" Oct 08 17:32:00 crc kubenswrapper[4624]: I1008 17:32:00.538798 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q2nnn"] Oct 08 17:32:00 crc kubenswrapper[4624]: I1008 17:32:00.596536 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d60f5b4-6d9a-4d08-bd7b-b158af67edd2-utilities\") pod \"certified-operators-q2nnn\" (UID: \"1d60f5b4-6d9a-4d08-bd7b-b158af67edd2\") " pod="openshift-marketplace/certified-operators-q2nnn" Oct 08 17:32:00 crc kubenswrapper[4624]: I1008 17:32:00.596609 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttbbz\" (UniqueName: \"kubernetes.io/projected/1d60f5b4-6d9a-4d08-bd7b-b158af67edd2-kube-api-access-ttbbz\") pod \"certified-operators-q2nnn\" (UID: \"1d60f5b4-6d9a-4d08-bd7b-b158af67edd2\") " pod="openshift-marketplace/certified-operators-q2nnn" Oct 08 17:32:00 crc kubenswrapper[4624]: I1008 17:32:00.596700 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d60f5b4-6d9a-4d08-bd7b-b158af67edd2-catalog-content\") pod \"certified-operators-q2nnn\" (UID: \"1d60f5b4-6d9a-4d08-bd7b-b158af67edd2\") " pod="openshift-marketplace/certified-operators-q2nnn" Oct 08 17:32:00 crc kubenswrapper[4624]: I1008 17:32:00.699145 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d60f5b4-6d9a-4d08-bd7b-b158af67edd2-utilities\") pod \"certified-operators-q2nnn\" (UID: \"1d60f5b4-6d9a-4d08-bd7b-b158af67edd2\") " pod="openshift-marketplace/certified-operators-q2nnn" Oct 08 17:32:00 crc kubenswrapper[4624]: I1008 17:32:00.699214 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttbbz\" (UniqueName: \"kubernetes.io/projected/1d60f5b4-6d9a-4d08-bd7b-b158af67edd2-kube-api-access-ttbbz\") pod \"certified-operators-q2nnn\" (UID: \"1d60f5b4-6d9a-4d08-bd7b-b158af67edd2\") " pod="openshift-marketplace/certified-operators-q2nnn" Oct 08 17:32:00 crc 
kubenswrapper[4624]: I1008 17:32:00.699289 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d60f5b4-6d9a-4d08-bd7b-b158af67edd2-catalog-content\") pod \"certified-operators-q2nnn\" (UID: \"1d60f5b4-6d9a-4d08-bd7b-b158af67edd2\") " pod="openshift-marketplace/certified-operators-q2nnn" Oct 08 17:32:00 crc kubenswrapper[4624]: I1008 17:32:00.700132 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d60f5b4-6d9a-4d08-bd7b-b158af67edd2-catalog-content\") pod \"certified-operators-q2nnn\" (UID: \"1d60f5b4-6d9a-4d08-bd7b-b158af67edd2\") " pod="openshift-marketplace/certified-operators-q2nnn" Oct 08 17:32:00 crc kubenswrapper[4624]: I1008 17:32:00.700132 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d60f5b4-6d9a-4d08-bd7b-b158af67edd2-utilities\") pod \"certified-operators-q2nnn\" (UID: \"1d60f5b4-6d9a-4d08-bd7b-b158af67edd2\") " pod="openshift-marketplace/certified-operators-q2nnn" Oct 08 17:32:00 crc kubenswrapper[4624]: I1008 17:32:00.761517 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttbbz\" (UniqueName: \"kubernetes.io/projected/1d60f5b4-6d9a-4d08-bd7b-b158af67edd2-kube-api-access-ttbbz\") pod \"certified-operators-q2nnn\" (UID: \"1d60f5b4-6d9a-4d08-bd7b-b158af67edd2\") " pod="openshift-marketplace/certified-operators-q2nnn" Oct 08 17:32:00 crc kubenswrapper[4624]: I1008 17:32:00.842024 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q2nnn" Oct 08 17:32:01 crc kubenswrapper[4624]: I1008 17:32:01.497688 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q2nnn"] Oct 08 17:32:01 crc kubenswrapper[4624]: I1008 17:32:01.812027 4624 generic.go:334] "Generic (PLEG): container finished" podID="1d60f5b4-6d9a-4d08-bd7b-b158af67edd2" containerID="8a23f49a9e99bbccfff2b3d96e3ae06825e02425247cc85b55a3c33790327522" exitCode=0 Oct 08 17:32:01 crc kubenswrapper[4624]: I1008 17:32:01.812073 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2nnn" event={"ID":"1d60f5b4-6d9a-4d08-bd7b-b158af67edd2","Type":"ContainerDied","Data":"8a23f49a9e99bbccfff2b3d96e3ae06825e02425247cc85b55a3c33790327522"} Oct 08 17:32:01 crc kubenswrapper[4624]: I1008 17:32:01.812099 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2nnn" event={"ID":"1d60f5b4-6d9a-4d08-bd7b-b158af67edd2","Type":"ContainerStarted","Data":"b58032820813534a7847e1fbca501db52a89efb173d31822126f11cb8c29a20e"} Oct 08 17:32:03 crc kubenswrapper[4624]: I1008 17:32:03.838871 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2nnn" event={"ID":"1d60f5b4-6d9a-4d08-bd7b-b158af67edd2","Type":"ContainerStarted","Data":"f009125c3bbd0bbe46d4f891cfab14a52431bc67006f7081da0be9621dcb4b1b"} Oct 08 17:32:07 crc kubenswrapper[4624]: I1008 17:32:07.466064 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109" Oct 08 17:32:07 crc kubenswrapper[4624]: E1008 17:32:07.466901 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
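Annotation: the "Generic (PLEG): container finished" and "SyncLoop (PLEG): event for pod" entries above and below come from the kubelet's pod lifecycle event generator, which compares container states reported by the runtime between relists and turns each transition into a ContainerStarted or ContainerDied event for the sync loop. A schematic diff of that kind (types invented for the example, not the kubelet's internal API):

    package main

    import "fmt"

    type state string

    // diffStates emits one event per observed state transition, the same
    // shape as the PLEG events in the log above.
    func diffStates(prev, curr map[string]state) []string {
    	var events []string
    	for id, s := range curr {
    		if prev[id] == s {
    			continue
    		}
    		switch s {
    		case "running":
    			events = append(events, "ContainerStarted "+id)
    		case "exited":
    			events = append(events, "ContainerDied "+id)
    		}
    	}
    	return events
    }

    func main() {
    	prev := map[string]state{"8a23f49a": "running"}
    	curr := map[string]state{"8a23f49a": "exited", "b5803282": "running"}
    	for _, e := range diffStates(prev, curr) {
    		fmt.Println(e)
    	}
    }

This explains why an init container's exit (ContainerDied) and the next container's start (ContainerStarted) arrive as paired events within one relist, as with the extract-utilities/extract-content sequence above.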
Oct 08 17:32:07 crc kubenswrapper[4624]: I1008 17:32:07.883264 4624 generic.go:334] "Generic (PLEG): container finished" podID="1d60f5b4-6d9a-4d08-bd7b-b158af67edd2" containerID="f009125c3bbd0bbe46d4f891cfab14a52431bc67006f7081da0be9621dcb4b1b" exitCode=0
Oct 08 17:32:07 crc kubenswrapper[4624]: I1008 17:32:07.883367 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2nnn" event={"ID":"1d60f5b4-6d9a-4d08-bd7b-b158af67edd2","Type":"ContainerDied","Data":"f009125c3bbd0bbe46d4f891cfab14a52431bc67006f7081da0be9621dcb4b1b"}
Oct 08 17:32:09 crc kubenswrapper[4624]: I1008 17:32:08.999746 4624 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.002683426s: [/var/lib/containers/storage/overlay/f75b9b36958cd714e9fc0b4f2e9f7a4b4d9a7e5be24e862504a1f27029a9b9f3/diff /var/log/pods/openstack_horizon-67f45f8444-g8bbs_378be2ad-3335-409f-b2eb-60b3997ed4f8/horizon/2.log]; will not log again for this container unless duration exceeds 2s
Oct 08 17:32:09 crc kubenswrapper[4624]: I1008 17:32:09.907572 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2nnn" event={"ID":"1d60f5b4-6d9a-4d08-bd7b-b158af67edd2","Type":"ContainerStarted","Data":"e63a010d60776facadc98a617589917804b492d0b30c91e23203f51983c409e2"}
Oct 08 17:32:10 crc kubenswrapper[4624]: I1008 17:32:10.842721 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q2nnn"
Oct 08 17:32:10 crc kubenswrapper[4624]: I1008 17:32:10.843016 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q2nnn"
Oct 08 17:32:11 crc kubenswrapper[4624]: I1008 17:32:11.930462 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-q2nnn" podUID="1d60f5b4-6d9a-4d08-bd7b-b158af67edd2" containerName="registry-server" probeResult="failure" output=<
Oct 08 17:32:11 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 17:32:11 crc kubenswrapper[4624]: >
Oct 08 17:32:20 crc kubenswrapper[4624]: I1008 17:32:20.466208 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109"
Oct 08 17:32:20 crc kubenswrapper[4624]: E1008 17:32:20.467006 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 17:32:21 crc kubenswrapper[4624]: I1008 17:32:21.921182 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-q2nnn" podUID="1d60f5b4-6d9a-4d08-bd7b-b158af67edd2" containerName="registry-server" probeResult="failure" output=<
Oct 08 17:32:21 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Oct 08 17:32:21 crc kubenswrapper[4624]: >
Oct 08 17:32:30 crc kubenswrapper[4624]: I1008 17:32:30.908371 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q2nnn"
Oct 08 17:32:30 crc kubenswrapper[4624]: I1008 17:32:30.958474 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q2nnn" podStartSLOduration=24.072677273 podStartE2EDuration="30.953142461s" podCreationTimestamp="2025-10-08 17:32:00 +0000 UTC" firstStartedPulling="2025-10-08 17:32:01.814576603 +0000 UTC m=+11346.965511680" lastFinishedPulling="2025-10-08 17:32:08.695041791 +0000 UTC m=+11353.845976868" observedRunningTime="2025-10-08 17:32:09.935063773 +0000 UTC m=+11355.085998860" watchObservedRunningTime="2025-10-08 17:32:30.953142461 +0000 UTC m=+11376.104077538"
Oct 08 17:32:30 crc kubenswrapper[4624]: I1008 17:32:30.987364 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q2nnn"
Oct 08 17:32:31 crc kubenswrapper[4624]: I1008 17:32:31.707936 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q2nnn"]
Oct 08 17:32:32 crc kubenswrapper[4624]: I1008 17:32:32.128271 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q2nnn" podUID="1d60f5b4-6d9a-4d08-bd7b-b158af67edd2" containerName="registry-server" containerID="cri-o://e63a010d60776facadc98a617589917804b492d0b30c91e23203f51983c409e2" gracePeriod=2
Oct 08 17:32:33 crc kubenswrapper[4624]: I1008 17:32:33.141044 4624 generic.go:334] "Generic (PLEG): container finished" podID="1d60f5b4-6d9a-4d08-bd7b-b158af67edd2" containerID="e63a010d60776facadc98a617589917804b492d0b30c91e23203f51983c409e2" exitCode=0
Oct 08 17:32:33 crc kubenswrapper[4624]: I1008 17:32:33.141358 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2nnn" event={"ID":"1d60f5b4-6d9a-4d08-bd7b-b158af67edd2","Type":"ContainerDied","Data":"e63a010d60776facadc98a617589917804b492d0b30c91e23203f51983c409e2"}
Oct 08 17:32:33 crc kubenswrapper[4624]: I1008 17:32:33.443284 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q2nnn"
Oct 08 17:32:33 crc kubenswrapper[4624]: I1008 17:32:33.470711 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109"
Oct 08 17:32:33 crc kubenswrapper[4624]: E1008 17:32:33.471130 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 17:32:33 crc kubenswrapper[4624]: I1008 17:32:33.616360 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttbbz\" (UniqueName: \"kubernetes.io/projected/1d60f5b4-6d9a-4d08-bd7b-b158af67edd2-kube-api-access-ttbbz\") pod \"1d60f5b4-6d9a-4d08-bd7b-b158af67edd2\" (UID: \"1d60f5b4-6d9a-4d08-bd7b-b158af67edd2\") "
Oct 08 17:32:33 crc kubenswrapper[4624]: I1008 17:32:33.616423 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d60f5b4-6d9a-4d08-bd7b-b158af67edd2-catalog-content\") pod \"1d60f5b4-6d9a-4d08-bd7b-b158af67edd2\" (UID: \"1d60f5b4-6d9a-4d08-bd7b-b158af67edd2\") "
Oct 08 17:32:33 crc kubenswrapper[4624]: I1008 17:32:33.616561 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d60f5b4-6d9a-4d08-bd7b-b158af67edd2-utilities\") pod \"1d60f5b4-6d9a-4d08-bd7b-b158af67edd2\" (UID: \"1d60f5b4-6d9a-4d08-bd7b-b158af67edd2\") "
Oct 08 17:32:33 crc kubenswrapper[4624]: I1008 17:32:33.620315 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d60f5b4-6d9a-4d08-bd7b-b158af67edd2-utilities" (OuterVolumeSpecName: "utilities") pod "1d60f5b4-6d9a-4d08-bd7b-b158af67edd2" (UID: "1d60f5b4-6d9a-4d08-bd7b-b158af67edd2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 17:32:33 crc kubenswrapper[4624]: I1008 17:32:33.631152 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d60f5b4-6d9a-4d08-bd7b-b158af67edd2-kube-api-access-ttbbz" (OuterVolumeSpecName: "kube-api-access-ttbbz") pod "1d60f5b4-6d9a-4d08-bd7b-b158af67edd2" (UID: "1d60f5b4-6d9a-4d08-bd7b-b158af67edd2"). InnerVolumeSpecName "kube-api-access-ttbbz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 17:32:33 crc kubenswrapper[4624]: I1008 17:32:33.686063 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d60f5b4-6d9a-4d08-bd7b-b158af67edd2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d60f5b4-6d9a-4d08-bd7b-b158af67edd2" (UID: "1d60f5b4-6d9a-4d08-bd7b-b158af67edd2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 17:32:33 crc kubenswrapper[4624]: I1008 17:32:33.720501 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttbbz\" (UniqueName: \"kubernetes.io/projected/1d60f5b4-6d9a-4d08-bd7b-b158af67edd2-kube-api-access-ttbbz\") on node \"crc\" DevicePath \"\""
Oct 08 17:32:33 crc kubenswrapper[4624]: I1008 17:32:33.720545 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d60f5b4-6d9a-4d08-bd7b-b158af67edd2-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 17:32:33 crc kubenswrapper[4624]: I1008 17:32:33.720558 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d60f5b4-6d9a-4d08-bd7b-b158af67edd2-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 17:32:34 crc kubenswrapper[4624]: I1008 17:32:34.194429 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2nnn" event={"ID":"1d60f5b4-6d9a-4d08-bd7b-b158af67edd2","Type":"ContainerDied","Data":"b58032820813534a7847e1fbca501db52a89efb173d31822126f11cb8c29a20e"}
Oct 08 17:32:34 crc kubenswrapper[4624]: I1008 17:32:34.194485 4624 scope.go:117] "RemoveContainer" containerID="e63a010d60776facadc98a617589917804b492d0b30c91e23203f51983c409e2"
Oct 08 17:32:34 crc kubenswrapper[4624]: I1008 17:32:34.194695 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q2nnn"
Oct 08 17:32:34 crc kubenswrapper[4624]: I1008 17:32:34.257078 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q2nnn"]
Oct 08 17:32:34 crc kubenswrapper[4624]: I1008 17:32:34.261908 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q2nnn"]
Oct 08 17:32:34 crc kubenswrapper[4624]: I1008 17:32:34.289027 4624 scope.go:117] "RemoveContainer" containerID="f009125c3bbd0bbe46d4f891cfab14a52431bc67006f7081da0be9621dcb4b1b"
Oct 08 17:32:34 crc kubenswrapper[4624]: I1008 17:32:34.321346 4624 scope.go:117] "RemoveContainer" containerID="8a23f49a9e99bbccfff2b3d96e3ae06825e02425247cc85b55a3c33790327522"
Oct 08 17:32:35 crc kubenswrapper[4624]: I1008 17:32:35.482743 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d60f5b4-6d9a-4d08-bd7b-b158af67edd2" path="/var/lib/kubelet/pods/1d60f5b4-6d9a-4d08-bd7b-b158af67edd2/volumes"
Oct 08 17:32:45 crc kubenswrapper[4624]: I1008 17:32:45.485726 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109"
Oct 08 17:32:45 crc kubenswrapper[4624]: E1008 17:32:45.486342 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 17:33:00 crc kubenswrapper[4624]: I1008 17:33:00.465838 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109"
Oct 08 17:33:00 crc kubenswrapper[4624]: E1008 17:33:00.466654 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 17:33:15 crc kubenswrapper[4624]: I1008 17:33:15.268003 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-66c67d8-rjnw7_12adc423-1b55-4b56-85a3-32e2aabbc82d/barbican-api/0.log"
Oct 08 17:33:15 crc kubenswrapper[4624]: I1008 17:33:15.269938 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-66c67d8-rjnw7_12adc423-1b55-4b56-85a3-32e2aabbc82d/barbican-api-log/0.log"
Oct 08 17:33:15 crc kubenswrapper[4624]: I1008 17:33:15.477460 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109"
Oct 08 17:33:15 crc kubenswrapper[4624]: E1008 17:33:15.477732 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 17:33:15 crc kubenswrapper[4624]: I1008 17:33:15.599551 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-db88bb74-6vl6r_255b203d-1921-40ab-8c4f-7f582a647651/barbican-keystone-listener/0.log"
Oct 08 17:33:15 crc kubenswrapper[4624]: I1008 17:33:15.674009 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-db88bb74-6vl6r_255b203d-1921-40ab-8c4f-7f582a647651/barbican-keystone-listener-log/0.log"
Oct 08 17:33:15 crc kubenswrapper[4624]: I1008 17:33:15.907431 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-769b9cd88f-r425v_2051dd96-3f4a-42b1-9802-602bd9693aec/barbican-worker/0.log"
Oct 08 17:33:16 crc kubenswrapper[4624]: I1008 17:33:16.062699 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-769b9cd88f-r425v_2051dd96-3f4a-42b1-9802-602bd9693aec/barbican-worker-log/0.log"
Oct 08 17:33:16 crc kubenswrapper[4624]: I1008 17:33:16.330743 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-x4jlf_f0b620ad-209a-49a8-90cd-f4780a2565a3/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 08 17:33:16 crc kubenswrapper[4624]: I1008 17:33:16.642145 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5bfc80be-dbc0-42c4-b493-3b5747d4ccb8/ceilometer-central-agent/0.log"
Oct 08 17:33:16 crc kubenswrapper[4624]: I1008 17:33:16.694367 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5bfc80be-dbc0-42c4-b493-3b5747d4ccb8/ceilometer-notification-agent/0.log"
Oct 08 17:33:16 crc kubenswrapper[4624]: I1008 17:33:16.748460 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5bfc80be-dbc0-42c4-b493-3b5747d4ccb8/proxy-httpd/0.log"
Oct 08 17:33:16 crc kubenswrapper[4624]: I1008 17:33:16.935053 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5bfc80be-dbc0-42c4-b493-3b5747d4ccb8/sg-core/0.log"
Oct 08 17:33:17 crc kubenswrapper[4624]: I1008 17:33:17.185721 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5d6f1635-4c52-4761-a2c7-38951659c26e/cinder-api/0.log"
Oct 08 17:33:17 crc kubenswrapper[4624]: I1008 17:33:17.256972 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5d6f1635-4c52-4761-a2c7-38951659c26e/cinder-api-log/0.log"
Oct 08 17:33:17 crc kubenswrapper[4624]: I1008 17:33:17.528465 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5a500ae8-578e-4045-8bfb-0a658340dc09/cinder-scheduler/0.log"
Oct 08 17:33:17 crc kubenswrapper[4624]: I1008 17:33:17.703816 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5a500ae8-578e-4045-8bfb-0a658340dc09/probe/0.log"
Oct 08 17:33:17 crc kubenswrapper[4624]: I1008 17:33:17.908710 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-dx8jl_968537d8-1190-479e-a4cc-92054923d08a/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 08 17:33:18 crc kubenswrapper[4624]: I1008 17:33:18.396649 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-2mkgp_b0fe7d62-81e4-4163-bc0f-c3cf6c2bcd64/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 08 17:33:18 crc kubenswrapper[4624]: I1008 17:33:18.729422 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-f47x7_5f5a0d83-3c47-439d-9d82-12e0c8afdf45/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 08 17:33:18 crc kubenswrapper[4624]: I1008 17:33:18.813623 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-54695ff68c-h4xsn_ca4fad93-41d0-4e9a-8033-fa3ff1a14769/init/0.log"
Oct 08 17:33:19 crc kubenswrapper[4624]: I1008 17:33:19.134694 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-54695ff68c-h4xsn_ca4fad93-41d0-4e9a-8033-fa3ff1a14769/init/0.log"
Oct 08 17:33:19 crc kubenswrapper[4624]: I1008 17:33:19.424000 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-z8msh_b35b91af-b986-47f5-a444-bc20763e34ed/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 08 17:33:19 crc kubenswrapper[4624]: I1008 17:33:19.444876 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-54695ff68c-h4xsn_ca4fad93-41d0-4e9a-8033-fa3ff1a14769/dnsmasq-dns/0.log"
Oct 08 17:33:19 crc kubenswrapper[4624]: I1008 17:33:19.615601 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5719b9ea-1496-4097-b86f-39e516f37a0d/glance-httpd/0.log"
Oct 08 17:33:19 crc kubenswrapper[4624]: I1008 17:33:19.717095 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5719b9ea-1496-4097-b86f-39e516f37a0d/glance-log/0.log"
Oct 08 17:33:19 crc kubenswrapper[4624]: I1008 17:33:19.813294 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1/glance-httpd/0.log"
Oct 08 17:33:19 crc kubenswrapper[4624]: I1008 17:33:19.863331 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_36d6b27d-ff4a-43cf-b4a9-4520f2aeb0a1/glance-log/0.log"
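Annotation: the long run of log.go:25 "Finished parsing log file" entries here and below shows the kubelet walking container log files under /var/log/pods, consistent with the must-gather collection in progress at this time. CRI runtimes such as CRI-O write those files one entry per line as timestamp, stream, tag, message (tag F for a full line, P for a partial one); a minimal parser for that format (a sketch with field handling simplified):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // parseCRILine splits one /var/log/pods line of the CRI logging format,
    // e.g. "2025-10-08T17:33:15.268003Z stdout F some message",
    // into its four fields.
    func parseCRILine(line string) (timestamp, stream, tag, msg string, err error) {
    	parts := strings.SplitN(line, " ", 4)
    	if len(parts) != 4 {
    		return "", "", "", "", fmt.Errorf("malformed CRI log line: %q", line)
    	}
    	return parts[0], parts[1], parts[2], parts[3], nil
    }

    func main() {
    	ts, stream, tag, msg, err := parseCRILine(
    		"2025-10-08T17:33:15.268003Z stdout F example barbican-api line")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(ts, stream, tag, msg)
    }

Parsing per file explains why the kubelet logs one "Finished parsing log file" entry per container log path as each is read.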
Oct 08 17:33:20 crc kubenswrapper[4624]: I1008 17:33:20.531793 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7674ff4df6-6crwz_93cce62f-6d52-4afd-aa59-e2adac63a30f/heat-engine/0.log"
Oct 08 17:33:21 crc kubenswrapper[4624]: I1008 17:33:21.090913 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-67f45f8444-g8bbs_378be2ad-3335-409f-b2eb-60b3997ed4f8/horizon/2.log"
Oct 08 17:33:21 crc kubenswrapper[4624]: I1008 17:33:21.265556 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-67f45f8444-g8bbs_378be2ad-3335-409f-b2eb-60b3997ed4f8/horizon/1.log"
Oct 08 17:33:21 crc kubenswrapper[4624]: I1008 17:33:21.826547 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-jvs2p_35e4f3c9-b784-4780-bf62-c44be287ffef/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 08 17:33:22 crc kubenswrapper[4624]: I1008 17:33:22.314766 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-79db6c47d5-q6dxb_a8494534-9935-4f78-9571-b03ff870b8ac/heat-api/0.log"
Oct 08 17:33:22 crc kubenswrapper[4624]: I1008 17:33:22.487022 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-c8c76b4d4-k9vfh_bcf01908-e783-4491-8047-ef1053a2b87b/heat-cfnapi/0.log"
Oct 08 17:33:22 crc kubenswrapper[4624]: I1008 17:33:22.683068 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-fmk57_43155da2-e389-481e-8e9d-8c219482ba50/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 08 17:33:23 crc kubenswrapper[4624]: I1008 17:33:23.000336 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29332261-glg7h_f247107e-e524-46ed-891c-4ceae5377acd/keystone-cron/0.log"
Oct 08 17:33:23 crc kubenswrapper[4624]: I1008 17:33:23.425923 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29332321-7k55z_826b0aa9-5c21-4e34-ac07-66eb07b77464/keystone-cron/0.log"
Oct 08 17:33:23 crc kubenswrapper[4624]: I1008 17:33:23.522863 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-67f45f8444-g8bbs_378be2ad-3335-409f-b2eb-60b3997ed4f8/horizon-log/0.log"
Oct 08 17:33:23 crc kubenswrapper[4624]: I1008 17:33:23.743367 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29332381-bs9mx_b977380c-013b-4784-81b2-1387f688506c/keystone-cron/0.log"
Oct 08 17:33:24 crc kubenswrapper[4624]: I1008 17:33:24.084936 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_ee24950a-af9c-4e5f-ab36-66c3c5a9cf66/kube-state-metrics/0.log"
Oct 08 17:33:24 crc kubenswrapper[4624]: I1008 17:33:24.254992 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-mtv2v_43842ce6-3b52-41bc-ab12-56e722de00d1/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 08 17:33:24 crc kubenswrapper[4624]: I1008 17:33:24.279750 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-65f89d8d74-ng4cv_8c703134-b38a-414e-8c09-5702aa32a638/keystone-api/0.log"
Oct 08 17:33:24 crc kubenswrapper[4624]: I1008 17:33:24.778260 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-857d49cc6c-6fh82_ba09f7ef-9520-4917-9fb7-642e8fb51be1/neutron-httpd/0.log"
Oct 08 17:33:25 crc kubenswrapper[4624]: I1008 17:33:25.061396 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-d4m5x_95840382-8211-4637-92f6-8316e3e751c6/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 08 17:33:26 crc kubenswrapper[4624]: I1008 17:33:26.039695 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-857d49cc6c-6fh82_ba09f7ef-9520-4917-9fb7-642e8fb51be1/neutron-api/0.log"
Oct 08 17:33:27 crc kubenswrapper[4624]: I1008 17:33:27.312619 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_cd75368d-b8a0-41e9-8e92-f5bcc7e91fb6/nova-cell0-conductor-conductor/0.log"
Oct 08 17:33:28 crc kubenswrapper[4624]: I1008 17:33:28.250251 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_5b3bb7e0-38ab-4767-98ed-c1a79a46851f/nova-cell1-conductor-conductor/0.log"
Oct 08 17:33:28 crc kubenswrapper[4624]: I1008 17:33:28.704748 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7cee7d40-61a2-4b4b-87d9-531196e95a8d/nova-api-log/0.log"
Oct 08 17:33:29 crc kubenswrapper[4624]: I1008 17:33:29.119957 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_166fd0ae-7c08-4abf-aad9-ec8c11629078/nova-cell1-novncproxy-novncproxy/0.log"
Oct 08 17:33:29 crc kubenswrapper[4624]: I1008 17:33:29.461888 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-58j7v_ef49709c-d8c5-4ce4-b83e-eb7b84edbd4b/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 08 17:33:29 crc kubenswrapper[4624]: I1008 17:33:29.866006 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cf362794-4a7e-483b-814b-d73b53e9f28f/nova-metadata-log/0.log"
Oct 08 17:33:30 crc kubenswrapper[4624]: I1008 17:33:30.288713 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7cee7d40-61a2-4b4b-87d9-531196e95a8d/nova-api-api/0.log"
Oct 08 17:33:30 crc kubenswrapper[4624]: I1008 17:33:30.465781 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109"
Oct 08 17:33:30 crc kubenswrapper[4624]: E1008 17:33:30.466090 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d"
Oct 08 17:33:30 crc kubenswrapper[4624]: I1008 17:33:30.867983 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_91c69013-9ea8-41d8-a439-c85e7ab45e06/mysql-bootstrap/0.log"
Oct 08 17:33:31 crc kubenswrapper[4624]: I1008 17:33:31.193385 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_91c69013-9ea8-41d8-a439-c85e7ab45e06/mysql-bootstrap/0.log"
Oct 08 17:33:31 crc kubenswrapper[4624]: I1008 17:33:31.424238 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_91c69013-9ea8-41d8-a439-c85e7ab45e06/galera/0.log"
Oct 08 17:33:31 crc kubenswrapper[4624]: I1008 17:33:31.490386 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e7163bb9-301b-4539-ae0d-099caa9bd36b/nova-scheduler-scheduler/0.log"
path="/var/log/pods/openstack_nova-scheduler-0_e7163bb9-301b-4539-ae0d-099caa9bd36b/nova-scheduler-scheduler/0.log" Oct 08 17:33:31 crc kubenswrapper[4624]: I1008 17:33:31.798935 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_aeedef5f-f3c5-41a3-9a36-bc3830eb12c7/mysql-bootstrap/0.log" Oct 08 17:33:32 crc kubenswrapper[4624]: I1008 17:33:32.122010 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_aeedef5f-f3c5-41a3-9a36-bc3830eb12c7/mysql-bootstrap/0.log" Oct 08 17:33:32 crc kubenswrapper[4624]: I1008 17:33:32.266619 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_aeedef5f-f3c5-41a3-9a36-bc3830eb12c7/galera/0.log" Oct 08 17:33:32 crc kubenswrapper[4624]: I1008 17:33:32.588662 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c515985c-9b57-4136-bf01-b872e9caaec9/openstackclient/0.log" Oct 08 17:33:32 crc kubenswrapper[4624]: I1008 17:33:32.945746 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-c4zfm_c5312bac-042b-48c5-bf82-1f565e25f11e/ovn-controller/0.log" Oct 08 17:33:33 crc kubenswrapper[4624]: I1008 17:33:33.277909 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-bm82v_0eec343f-e477-47d8-b651-2f5a2a944895/openstack-network-exporter/0.log" Oct 08 17:33:33 crc kubenswrapper[4624]: I1008 17:33:33.628260 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jhpjx_72a8ef8e-f40d-4206-af95-2f636093ed51/ovsdb-server-init/0.log" Oct 08 17:33:33 crc kubenswrapper[4624]: I1008 17:33:33.939150 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jhpjx_72a8ef8e-f40d-4206-af95-2f636093ed51/ovs-vswitchd/0.log" Oct 08 17:33:33 crc kubenswrapper[4624]: I1008 17:33:33.977781 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jhpjx_72a8ef8e-f40d-4206-af95-2f636093ed51/ovsdb-server-init/0.log" Oct 08 17:33:34 crc kubenswrapper[4624]: I1008 17:33:34.189474 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jhpjx_72a8ef8e-f40d-4206-af95-2f636093ed51/ovsdb-server/0.log" Oct 08 17:33:34 crc kubenswrapper[4624]: I1008 17:33:34.504972 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-t84hz_d6b31442-c459-4d7e-b828-90ffe6a2eda5/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:33:34 crc kubenswrapper[4624]: I1008 17:33:34.785116 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_226458e6-33a0-4123-8aaf-b3950a30d1c9/openstack-network-exporter/0.log" Oct 08 17:33:34 crc kubenswrapper[4624]: I1008 17:33:34.851601 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_226458e6-33a0-4123-8aaf-b3950a30d1c9/ovn-northd/0.log" Oct 08 17:33:35 crc kubenswrapper[4624]: I1008 17:33:35.202898 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_85e0be3f-32a4-42c9-9fe5-f3bfca740477/openstack-network-exporter/0.log" Oct 08 17:33:35 crc kubenswrapper[4624]: I1008 17:33:35.443956 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_85e0be3f-32a4-42c9-9fe5-f3bfca740477/ovsdbserver-nb/0.log" Oct 08 17:33:35 crc kubenswrapper[4624]: I1008 17:33:35.654072 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_2799503e-5a0b-4631-9946-f335b8446b53/openstack-network-exporter/0.log" Oct 08 17:33:35 crc kubenswrapper[4624]: I1008 17:33:35.795287 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2799503e-5a0b-4631-9946-f335b8446b53/ovsdbserver-sb/0.log" Oct 08 17:33:36 crc kubenswrapper[4624]: I1008 17:33:36.671185 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7f4595c6f6-t4nns_cfd45b9a-e3a8-4f53-92bb-4e4dc0580365/placement-api/0.log" Oct 08 17:33:36 crc kubenswrapper[4624]: I1008 17:33:36.675424 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cf362794-4a7e-483b-814b-d73b53e9f28f/nova-metadata-metadata/0.log" Oct 08 17:33:36 crc kubenswrapper[4624]: I1008 17:33:36.922509 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7f4595c6f6-t4nns_cfd45b9a-e3a8-4f53-92bb-4e4dc0580365/placement-log/0.log" Oct 08 17:33:37 crc kubenswrapper[4624]: I1008 17:33:37.131238 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0795aa07-68f2-4a23-b388-1237f212f537/setup-container/0.log" Oct 08 17:33:37 crc kubenswrapper[4624]: I1008 17:33:37.331433 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0795aa07-68f2-4a23-b388-1237f212f537/setup-container/0.log" Oct 08 17:33:37 crc kubenswrapper[4624]: I1008 17:33:37.350106 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0795aa07-68f2-4a23-b388-1237f212f537/rabbitmq/0.log" Oct 08 17:33:37 crc kubenswrapper[4624]: I1008 17:33:37.609433 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9da75813-8748-41a3-8bea-bc7987ccc7a5/setup-container/0.log" Oct 08 17:33:38 crc kubenswrapper[4624]: I1008 17:33:38.120894 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9da75813-8748-41a3-8bea-bc7987ccc7a5/rabbitmq/0.log" Oct 08 17:33:38 crc kubenswrapper[4624]: I1008 17:33:38.123802 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9da75813-8748-41a3-8bea-bc7987ccc7a5/setup-container/0.log" Oct 08 17:33:38 crc kubenswrapper[4624]: I1008 17:33:38.425233 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_5c8ff67a-4da2-47d4-9f73-d7842cdf2712/memcached/0.log" Oct 08 17:33:38 crc kubenswrapper[4624]: I1008 17:33:38.462491 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-wl4hz_9a215cb1-d735-42d4-9cc9-698fa1a61508/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:33:38 crc kubenswrapper[4624]: I1008 17:33:38.473742 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-9rfls_eda4ead7-208d-4aed-9f74-ef58b401d591/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:33:38 crc kubenswrapper[4624]: I1008 17:33:38.673811 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-crdv8_89f05ed9-3980-46e8-96b7-ef08d01f09ee/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:33:38 crc kubenswrapper[4624]: I1008 17:33:38.721098 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-hcl7k_e856333d-f205-4d9a-881c-5b3364b5ddb5/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:33:39 crc kubenswrapper[4624]: I1008 17:33:39.228064 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-n95kc_054c88fe-e5ae-4274-b497-a3e583b40594/ssh-known-hosts-edpm-deployment/0.log" Oct 08 17:33:39 crc kubenswrapper[4624]: I1008 17:33:39.491534 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-59d86bf959-vq2ld_d7c08e42-5aca-4394-952c-5649ba096a8f/proxy-server/0.log" Oct 08 17:33:39 crc kubenswrapper[4624]: I1008 17:33:39.641803 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-59d86bf959-vq2ld_d7c08e42-5aca-4394-952c-5649ba096a8f/proxy-httpd/0.log" Oct 08 17:33:39 crc kubenswrapper[4624]: I1008 17:33:39.647260 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-v4bjq_d112c8ce-f2c4-43a1-9ae8-e155473d5831/swift-ring-rebalance/0.log" Oct 08 17:33:39 crc kubenswrapper[4624]: I1008 17:33:39.801797 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/account-auditor/0.log" Oct 08 17:33:39 crc kubenswrapper[4624]: I1008 17:33:39.893923 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/account-reaper/0.log" Oct 08 17:33:39 crc kubenswrapper[4624]: I1008 17:33:39.930736 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/account-replicator/0.log" Oct 08 17:33:40 crc kubenswrapper[4624]: I1008 17:33:40.036892 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/container-auditor/0.log" Oct 08 17:33:40 crc kubenswrapper[4624]: I1008 17:33:40.075018 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/account-server/0.log" Oct 08 17:33:40 crc kubenswrapper[4624]: I1008 17:33:40.162321 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/container-replicator/0.log" Oct 08 17:33:40 crc kubenswrapper[4624]: I1008 17:33:40.179704 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/container-server/0.log" Oct 08 17:33:40 crc kubenswrapper[4624]: I1008 17:33:40.272396 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/container-updater/0.log" Oct 08 17:33:40 crc kubenswrapper[4624]: I1008 17:33:40.374552 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/object-auditor/0.log" Oct 08 17:33:40 crc kubenswrapper[4624]: I1008 17:33:40.432218 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/object-expirer/0.log" Oct 08 17:33:40 crc kubenswrapper[4624]: I1008 17:33:40.492384 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/object-replicator/0.log" Oct 08 17:33:40 crc kubenswrapper[4624]: I1008 17:33:40.540926 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/object-server/0.log" Oct 08 17:33:40 crc kubenswrapper[4624]: I1008 17:33:40.660050 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/object-updater/0.log" Oct 08 17:33:40 crc kubenswrapper[4624]: I1008 17:33:40.690484 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/rsync/0.log" Oct 08 17:33:40 crc kubenswrapper[4624]: I1008 17:33:40.749001 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a6d0a5f4-de63-4141-addf-72f5d787cb24/swift-recon-cron/0.log" Oct 08 17:33:40 crc kubenswrapper[4624]: I1008 17:33:40.995197 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-lccvq_9974fb02-7840-402c-af16-db4392849c73/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:33:41 crc kubenswrapper[4624]: I1008 17:33:41.112446 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s00-multi-thread-testing_391ff9a0-631c-4520-a9f9-80fda37e32a1/tempest-tests-tempest-tests-runner/0.log" Oct 08 17:33:41 crc kubenswrapper[4624]: I1008 17:33:41.319492 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_7f1dbdcf-5dcf-41dd-ae17-1683a095921c/test-operator-logs-container/0.log" Oct 08 17:33:41 crc kubenswrapper[4624]: I1008 17:33:41.325706 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s01-single-thread-testing_fb994804-3cd4-4414-912f-a01613418132/tempest-tests-tempest-tests-runner/0.log" Oct 08 17:33:41 crc kubenswrapper[4624]: I1008 17:33:41.536417 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-cbght_ceac8d98-9a63-4c7d-876b-8d7e4acf59c4/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 08 17:33:45 crc kubenswrapper[4624]: I1008 17:33:45.474289 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109" Oct 08 17:33:45 crc kubenswrapper[4624]: E1008 17:33:45.475618 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:33:58 crc kubenswrapper[4624]: I1008 17:33:58.466023 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109" Oct 08 17:33:58 crc kubenswrapper[4624]: E1008 17:33:58.466886 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:34:09 crc kubenswrapper[4624]: I1008 17:34:09.465853 4624 scope.go:117] "RemoveContainer" 
containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109" Oct 08 17:34:09 crc kubenswrapper[4624]: E1008 17:34:09.467515 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:34:19 crc kubenswrapper[4624]: I1008 17:34:19.116885 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ldnf6"] Oct 08 17:34:19 crc kubenswrapper[4624]: E1008 17:34:19.128163 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d60f5b4-6d9a-4d08-bd7b-b158af67edd2" containerName="extract-utilities" Oct 08 17:34:19 crc kubenswrapper[4624]: I1008 17:34:19.128207 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d60f5b4-6d9a-4d08-bd7b-b158af67edd2" containerName="extract-utilities" Oct 08 17:34:19 crc kubenswrapper[4624]: E1008 17:34:19.128243 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d60f5b4-6d9a-4d08-bd7b-b158af67edd2" containerName="registry-server" Oct 08 17:34:19 crc kubenswrapper[4624]: I1008 17:34:19.128250 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d60f5b4-6d9a-4d08-bd7b-b158af67edd2" containerName="registry-server" Oct 08 17:34:19 crc kubenswrapper[4624]: E1008 17:34:19.128276 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d60f5b4-6d9a-4d08-bd7b-b158af67edd2" containerName="extract-content" Oct 08 17:34:19 crc kubenswrapper[4624]: I1008 17:34:19.128282 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d60f5b4-6d9a-4d08-bd7b-b158af67edd2" containerName="extract-content" Oct 08 17:34:19 crc kubenswrapper[4624]: I1008 17:34:19.132439 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d60f5b4-6d9a-4d08-bd7b-b158af67edd2" containerName="registry-server" Oct 08 17:34:19 crc kubenswrapper[4624]: I1008 17:34:19.143090 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ldnf6" Oct 08 17:34:19 crc kubenswrapper[4624]: I1008 17:34:19.196320 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ldnf6"] Oct 08 17:34:19 crc kubenswrapper[4624]: I1008 17:34:19.308753 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdtvv\" (UniqueName: \"kubernetes.io/projected/6d5b3b7a-c372-4247-b8fe-7d8149b04c96-kube-api-access-tdtvv\") pod \"redhat-marketplace-ldnf6\" (UID: \"6d5b3b7a-c372-4247-b8fe-7d8149b04c96\") " pod="openshift-marketplace/redhat-marketplace-ldnf6" Oct 08 17:34:19 crc kubenswrapper[4624]: I1008 17:34:19.309264 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5b3b7a-c372-4247-b8fe-7d8149b04c96-utilities\") pod \"redhat-marketplace-ldnf6\" (UID: \"6d5b3b7a-c372-4247-b8fe-7d8149b04c96\") " pod="openshift-marketplace/redhat-marketplace-ldnf6" Oct 08 17:34:19 crc kubenswrapper[4624]: I1008 17:34:19.309591 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5b3b7a-c372-4247-b8fe-7d8149b04c96-catalog-content\") pod \"redhat-marketplace-ldnf6\" (UID: \"6d5b3b7a-c372-4247-b8fe-7d8149b04c96\") " pod="openshift-marketplace/redhat-marketplace-ldnf6" Oct 08 17:34:19 crc kubenswrapper[4624]: I1008 17:34:19.411725 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdtvv\" (UniqueName: \"kubernetes.io/projected/6d5b3b7a-c372-4247-b8fe-7d8149b04c96-kube-api-access-tdtvv\") pod \"redhat-marketplace-ldnf6\" (UID: \"6d5b3b7a-c372-4247-b8fe-7d8149b04c96\") " pod="openshift-marketplace/redhat-marketplace-ldnf6" Oct 08 17:34:19 crc kubenswrapper[4624]: I1008 17:34:19.411873 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5b3b7a-c372-4247-b8fe-7d8149b04c96-utilities\") pod \"redhat-marketplace-ldnf6\" (UID: \"6d5b3b7a-c372-4247-b8fe-7d8149b04c96\") " pod="openshift-marketplace/redhat-marketplace-ldnf6" Oct 08 17:34:19 crc kubenswrapper[4624]: I1008 17:34:19.411904 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5b3b7a-c372-4247-b8fe-7d8149b04c96-catalog-content\") pod \"redhat-marketplace-ldnf6\" (UID: \"6d5b3b7a-c372-4247-b8fe-7d8149b04c96\") " pod="openshift-marketplace/redhat-marketplace-ldnf6" Oct 08 17:34:19 crc kubenswrapper[4624]: I1008 17:34:19.412933 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5b3b7a-c372-4247-b8fe-7d8149b04c96-catalog-content\") pod \"redhat-marketplace-ldnf6\" (UID: \"6d5b3b7a-c372-4247-b8fe-7d8149b04c96\") " pod="openshift-marketplace/redhat-marketplace-ldnf6" Oct 08 17:34:19 crc kubenswrapper[4624]: I1008 17:34:19.413011 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5b3b7a-c372-4247-b8fe-7d8149b04c96-utilities\") pod \"redhat-marketplace-ldnf6\" (UID: \"6d5b3b7a-c372-4247-b8fe-7d8149b04c96\") " pod="openshift-marketplace/redhat-marketplace-ldnf6" Oct 08 17:34:19 crc kubenswrapper[4624]: I1008 17:34:19.476131 4624 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tdtvv\" (UniqueName: \"kubernetes.io/projected/6d5b3b7a-c372-4247-b8fe-7d8149b04c96-kube-api-access-tdtvv\") pod \"redhat-marketplace-ldnf6\" (UID: \"6d5b3b7a-c372-4247-b8fe-7d8149b04c96\") " pod="openshift-marketplace/redhat-marketplace-ldnf6" Oct 08 17:34:19 crc kubenswrapper[4624]: I1008 17:34:19.484512 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ldnf6" Oct 08 17:34:20 crc kubenswrapper[4624]: I1008 17:34:20.466941 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109" Oct 08 17:34:20 crc kubenswrapper[4624]: E1008 17:34:20.467920 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:34:20 crc kubenswrapper[4624]: I1008 17:34:20.865769 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ldnf6"] Oct 08 17:34:21 crc kubenswrapper[4624]: I1008 17:34:21.313505 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ldnf6" event={"ID":"6d5b3b7a-c372-4247-b8fe-7d8149b04c96","Type":"ContainerDied","Data":"9772ead122930dff0e70d03a23088d18872076e084af5603b13f1a1ec468d465"} Oct 08 17:34:21 crc kubenswrapper[4624]: I1008 17:34:21.313679 4624 generic.go:334] "Generic (PLEG): container finished" podID="6d5b3b7a-c372-4247-b8fe-7d8149b04c96" containerID="9772ead122930dff0e70d03a23088d18872076e084af5603b13f1a1ec468d465" exitCode=0 Oct 08 17:34:21 crc kubenswrapper[4624]: I1008 17:34:21.313756 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ldnf6" event={"ID":"6d5b3b7a-c372-4247-b8fe-7d8149b04c96","Type":"ContainerStarted","Data":"958bc0b0c720bfde9d04206501df09e20d93efd44993155c65f88a820b35da95"} Oct 08 17:34:24 crc kubenswrapper[4624]: I1008 17:34:24.358036 4624 generic.go:334] "Generic (PLEG): container finished" podID="6d5b3b7a-c372-4247-b8fe-7d8149b04c96" containerID="949f65ea08f170dfb9b5f8210b5f9d4fc9af06d9b5d263483872f2d820524199" exitCode=0 Oct 08 17:34:24 crc kubenswrapper[4624]: I1008 17:34:24.358119 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ldnf6" event={"ID":"6d5b3b7a-c372-4247-b8fe-7d8149b04c96","Type":"ContainerDied","Data":"949f65ea08f170dfb9b5f8210b5f9d4fc9af06d9b5d263483872f2d820524199"} Oct 08 17:34:25 crc kubenswrapper[4624]: I1008 17:34:25.371991 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ldnf6" event={"ID":"6d5b3b7a-c372-4247-b8fe-7d8149b04c96","Type":"ContainerStarted","Data":"d4dd80c4e9a83ec1ca19be68151ad2820afc4c539c743322aa09cbee5ea79bb7"} Oct 08 17:34:25 crc kubenswrapper[4624]: I1008 17:34:25.393504 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ldnf6" podStartSLOduration=2.896000744 podStartE2EDuration="6.393466129s" podCreationTimestamp="2025-10-08 17:34:19 +0000 UTC" firstStartedPulling="2025-10-08 17:34:21.318125509 +0000 UTC m=+11486.469060626" lastFinishedPulling="2025-10-08 
17:34:24.815590944 +0000 UTC m=+11489.966526011" observedRunningTime="2025-10-08 17:34:25.391425967 +0000 UTC m=+11490.542361064" watchObservedRunningTime="2025-10-08 17:34:25.393466129 +0000 UTC m=+11490.544401206" Oct 08 17:34:27 crc kubenswrapper[4624]: I1008 17:34:27.396034 4624 generic.go:334] "Generic (PLEG): container finished" podID="d93b1a54-a6d2-435c-8cd2-dbc2125f7a89" containerID="9ddb6a9fd38bf8bfc94f0094f00dec9d8011e0f86521f43b0556085d15d118c9" exitCode=0 Oct 08 17:34:27 crc kubenswrapper[4624]: I1008 17:34:27.396089 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l49pj/crc-debug-4wlqq" event={"ID":"d93b1a54-a6d2-435c-8cd2-dbc2125f7a89","Type":"ContainerDied","Data":"9ddb6a9fd38bf8bfc94f0094f00dec9d8011e0f86521f43b0556085d15d118c9"} Oct 08 17:34:28 crc kubenswrapper[4624]: I1008 17:34:28.548300 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l49pj/crc-debug-4wlqq" Oct 08 17:34:28 crc kubenswrapper[4624]: I1008 17:34:28.585138 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l49pj/crc-debug-4wlqq"] Oct 08 17:34:28 crc kubenswrapper[4624]: I1008 17:34:28.593710 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l49pj/crc-debug-4wlqq"] Oct 08 17:34:28 crc kubenswrapper[4624]: I1008 17:34:28.621184 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d93b1a54-a6d2-435c-8cd2-dbc2125f7a89-host\") pod \"d93b1a54-a6d2-435c-8cd2-dbc2125f7a89\" (UID: \"d93b1a54-a6d2-435c-8cd2-dbc2125f7a89\") " Oct 08 17:34:28 crc kubenswrapper[4624]: I1008 17:34:28.621286 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h476v\" (UniqueName: \"kubernetes.io/projected/d93b1a54-a6d2-435c-8cd2-dbc2125f7a89-kube-api-access-h476v\") pod \"d93b1a54-a6d2-435c-8cd2-dbc2125f7a89\" (UID: \"d93b1a54-a6d2-435c-8cd2-dbc2125f7a89\") " Oct 08 17:34:28 crc kubenswrapper[4624]: I1008 17:34:28.621987 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d93b1a54-a6d2-435c-8cd2-dbc2125f7a89-host" (OuterVolumeSpecName: "host") pod "d93b1a54-a6d2-435c-8cd2-dbc2125f7a89" (UID: "d93b1a54-a6d2-435c-8cd2-dbc2125f7a89"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 17:34:28 crc kubenswrapper[4624]: I1008 17:34:28.632839 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d93b1a54-a6d2-435c-8cd2-dbc2125f7a89-kube-api-access-h476v" (OuterVolumeSpecName: "kube-api-access-h476v") pod "d93b1a54-a6d2-435c-8cd2-dbc2125f7a89" (UID: "d93b1a54-a6d2-435c-8cd2-dbc2125f7a89"). InnerVolumeSpecName "kube-api-access-h476v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 17:34:28 crc kubenswrapper[4624]: I1008 17:34:28.723719 4624 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d93b1a54-a6d2-435c-8cd2-dbc2125f7a89-host\") on node \"crc\" DevicePath \"\"" Oct 08 17:34:28 crc kubenswrapper[4624]: I1008 17:34:28.723761 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h476v\" (UniqueName: \"kubernetes.io/projected/d93b1a54-a6d2-435c-8cd2-dbc2125f7a89-kube-api-access-h476v\") on node \"crc\" DevicePath \"\"" Oct 08 17:34:29 crc kubenswrapper[4624]: I1008 17:34:29.426876 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l49pj/crc-debug-4wlqq" Oct 08 17:34:29 crc kubenswrapper[4624]: I1008 17:34:29.428812 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32c33bacd640fb939805845cd0dcf6cfffb40fbdfb10bc78234ddc2b2177ecfd" Oct 08 17:34:29 crc kubenswrapper[4624]: I1008 17:34:29.481419 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d93b1a54-a6d2-435c-8cd2-dbc2125f7a89" path="/var/lib/kubelet/pods/d93b1a54-a6d2-435c-8cd2-dbc2125f7a89/volumes" Oct 08 17:34:29 crc kubenswrapper[4624]: I1008 17:34:29.486751 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ldnf6" Oct 08 17:34:29 crc kubenswrapper[4624]: I1008 17:34:29.486784 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ldnf6" Oct 08 17:34:29 crc kubenswrapper[4624]: I1008 17:34:29.558175 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ldnf6" Oct 08 17:34:29 crc kubenswrapper[4624]: I1008 17:34:29.796096 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l49pj/crc-debug-zw9b7"] Oct 08 17:34:29 crc kubenswrapper[4624]: E1008 17:34:29.796717 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93b1a54-a6d2-435c-8cd2-dbc2125f7a89" containerName="container-00" Oct 08 17:34:29 crc kubenswrapper[4624]: I1008 17:34:29.796750 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93b1a54-a6d2-435c-8cd2-dbc2125f7a89" containerName="container-00" Oct 08 17:34:29 crc kubenswrapper[4624]: I1008 17:34:29.797041 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="d93b1a54-a6d2-435c-8cd2-dbc2125f7a89" containerName="container-00" Oct 08 17:34:29 crc kubenswrapper[4624]: I1008 17:34:29.798033 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l49pj/crc-debug-zw9b7" Oct 08 17:34:29 crc kubenswrapper[4624]: I1008 17:34:29.818150 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-l49pj"/"default-dockercfg-vm864" Oct 08 17:34:29 crc kubenswrapper[4624]: I1008 17:34:29.948842 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/076e4ccc-ffeb-4578-accb-47c3efee8b71-host\") pod \"crc-debug-zw9b7\" (UID: \"076e4ccc-ffeb-4578-accb-47c3efee8b71\") " pod="openshift-must-gather-l49pj/crc-debug-zw9b7" Oct 08 17:34:29 crc kubenswrapper[4624]: I1008 17:34:29.949202 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2qq4\" (UniqueName: \"kubernetes.io/projected/076e4ccc-ffeb-4578-accb-47c3efee8b71-kube-api-access-x2qq4\") pod \"crc-debug-zw9b7\" (UID: \"076e4ccc-ffeb-4578-accb-47c3efee8b71\") " pod="openshift-must-gather-l49pj/crc-debug-zw9b7" Oct 08 17:34:30 crc kubenswrapper[4624]: I1008 17:34:30.051551 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2qq4\" (UniqueName: \"kubernetes.io/projected/076e4ccc-ffeb-4578-accb-47c3efee8b71-kube-api-access-x2qq4\") pod \"crc-debug-zw9b7\" (UID: \"076e4ccc-ffeb-4578-accb-47c3efee8b71\") " pod="openshift-must-gather-l49pj/crc-debug-zw9b7" Oct 08 17:34:30 crc kubenswrapper[4624]: I1008 17:34:30.051881 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/076e4ccc-ffeb-4578-accb-47c3efee8b71-host\") pod \"crc-debug-zw9b7\" (UID: \"076e4ccc-ffeb-4578-accb-47c3efee8b71\") " pod="openshift-must-gather-l49pj/crc-debug-zw9b7" Oct 08 17:34:30 crc kubenswrapper[4624]: I1008 17:34:30.052104 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/076e4ccc-ffeb-4578-accb-47c3efee8b71-host\") pod \"crc-debug-zw9b7\" (UID: \"076e4ccc-ffeb-4578-accb-47c3efee8b71\") " pod="openshift-must-gather-l49pj/crc-debug-zw9b7" Oct 08 17:34:30 crc kubenswrapper[4624]: I1008 17:34:30.084118 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2qq4\" (UniqueName: \"kubernetes.io/projected/076e4ccc-ffeb-4578-accb-47c3efee8b71-kube-api-access-x2qq4\") pod \"crc-debug-zw9b7\" (UID: \"076e4ccc-ffeb-4578-accb-47c3efee8b71\") " pod="openshift-must-gather-l49pj/crc-debug-zw9b7" Oct 08 17:34:30 crc kubenswrapper[4624]: I1008 17:34:30.124233 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l49pj/crc-debug-zw9b7" Oct 08 17:34:30 crc kubenswrapper[4624]: I1008 17:34:30.436354 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l49pj/crc-debug-zw9b7" event={"ID":"076e4ccc-ffeb-4578-accb-47c3efee8b71","Type":"ContainerStarted","Data":"c23dd2105fdc31e70095f697493060496b150aba34e4de6887a553563f4b8a0b"} Oct 08 17:34:30 crc kubenswrapper[4624]: I1008 17:34:30.436418 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l49pj/crc-debug-zw9b7" event={"ID":"076e4ccc-ffeb-4578-accb-47c3efee8b71","Type":"ContainerStarted","Data":"0788aafbce30eaf0207abb77f2646d915f812a708fd314223fffa329dc8b0565"} Oct 08 17:34:30 crc kubenswrapper[4624]: I1008 17:34:30.462773 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-l49pj/crc-debug-zw9b7" podStartSLOduration=1.462750191 podStartE2EDuration="1.462750191s" podCreationTimestamp="2025-10-08 17:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 17:34:30.456965544 +0000 UTC m=+11495.607900631" watchObservedRunningTime="2025-10-08 17:34:30.462750191 +0000 UTC m=+11495.613685278" Oct 08 17:34:30 crc kubenswrapper[4624]: I1008 17:34:30.533974 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ldnf6" Oct 08 17:34:30 crc kubenswrapper[4624]: I1008 17:34:30.590282 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ldnf6"] Oct 08 17:34:32 crc kubenswrapper[4624]: I1008 17:34:32.453752 4624 generic.go:334] "Generic (PLEG): container finished" podID="076e4ccc-ffeb-4578-accb-47c3efee8b71" containerID="c23dd2105fdc31e70095f697493060496b150aba34e4de6887a553563f4b8a0b" exitCode=0 Oct 08 17:34:32 crc kubenswrapper[4624]: I1008 17:34:32.455700 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ldnf6" podUID="6d5b3b7a-c372-4247-b8fe-7d8149b04c96" containerName="registry-server" containerID="cri-o://d4dd80c4e9a83ec1ca19be68151ad2820afc4c539c743322aa09cbee5ea79bb7" gracePeriod=2 Oct 08 17:34:32 crc kubenswrapper[4624]: I1008 17:34:32.455920 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l49pj/crc-debug-zw9b7" event={"ID":"076e4ccc-ffeb-4578-accb-47c3efee8b71","Type":"ContainerDied","Data":"c23dd2105fdc31e70095f697493060496b150aba34e4de6887a553563f4b8a0b"} Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.061825 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ldnf6" Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.129411 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5b3b7a-c372-4247-b8fe-7d8149b04c96-utilities\") pod \"6d5b3b7a-c372-4247-b8fe-7d8149b04c96\" (UID: \"6d5b3b7a-c372-4247-b8fe-7d8149b04c96\") " Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.129591 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5b3b7a-c372-4247-b8fe-7d8149b04c96-catalog-content\") pod \"6d5b3b7a-c372-4247-b8fe-7d8149b04c96\" (UID: \"6d5b3b7a-c372-4247-b8fe-7d8149b04c96\") " Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.129668 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdtvv\" (UniqueName: \"kubernetes.io/projected/6d5b3b7a-c372-4247-b8fe-7d8149b04c96-kube-api-access-tdtvv\") pod \"6d5b3b7a-c372-4247-b8fe-7d8149b04c96\" (UID: \"6d5b3b7a-c372-4247-b8fe-7d8149b04c96\") " Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.130782 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d5b3b7a-c372-4247-b8fe-7d8149b04c96-utilities" (OuterVolumeSpecName: "utilities") pod "6d5b3b7a-c372-4247-b8fe-7d8149b04c96" (UID: "6d5b3b7a-c372-4247-b8fe-7d8149b04c96"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.145192 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d5b3b7a-c372-4247-b8fe-7d8149b04c96-kube-api-access-tdtvv" (OuterVolumeSpecName: "kube-api-access-tdtvv") pod "6d5b3b7a-c372-4247-b8fe-7d8149b04c96" (UID: "6d5b3b7a-c372-4247-b8fe-7d8149b04c96"). InnerVolumeSpecName "kube-api-access-tdtvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.152493 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d5b3b7a-c372-4247-b8fe-7d8149b04c96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d5b3b7a-c372-4247-b8fe-7d8149b04c96" (UID: "6d5b3b7a-c372-4247-b8fe-7d8149b04c96"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.233711 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5b3b7a-c372-4247-b8fe-7d8149b04c96-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.233745 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5b3b7a-c372-4247-b8fe-7d8149b04c96-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.233760 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdtvv\" (UniqueName: \"kubernetes.io/projected/6d5b3b7a-c372-4247-b8fe-7d8149b04c96-kube-api-access-tdtvv\") on node \"crc\" DevicePath \"\"" Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.466267 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109" Oct 08 17:34:33 crc kubenswrapper[4624]: E1008 17:34:33.466552 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.469280 4624 generic.go:334] "Generic (PLEG): container finished" podID="6d5b3b7a-c372-4247-b8fe-7d8149b04c96" containerID="d4dd80c4e9a83ec1ca19be68151ad2820afc4c539c743322aa09cbee5ea79bb7" exitCode=0 Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.469381 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ldnf6" Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.478536 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ldnf6" event={"ID":"6d5b3b7a-c372-4247-b8fe-7d8149b04c96","Type":"ContainerDied","Data":"d4dd80c4e9a83ec1ca19be68151ad2820afc4c539c743322aa09cbee5ea79bb7"} Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.478583 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ldnf6" event={"ID":"6d5b3b7a-c372-4247-b8fe-7d8149b04c96","Type":"ContainerDied","Data":"958bc0b0c720bfde9d04206501df09e20d93efd44993155c65f88a820b35da95"} Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.478604 4624 scope.go:117] "RemoveContainer" containerID="d4dd80c4e9a83ec1ca19be68151ad2820afc4c539c743322aa09cbee5ea79bb7" Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.531358 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l49pj/crc-debug-zw9b7" Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.554462 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ldnf6"] Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.563797 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ldnf6"] Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.569012 4624 scope.go:117] "RemoveContainer" containerID="949f65ea08f170dfb9b5f8210b5f9d4fc9af06d9b5d263483872f2d820524199" Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.630868 4624 scope.go:117] "RemoveContainer" containerID="9772ead122930dff0e70d03a23088d18872076e084af5603b13f1a1ec468d465" Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.642419 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2qq4\" (UniqueName: \"kubernetes.io/projected/076e4ccc-ffeb-4578-accb-47c3efee8b71-kube-api-access-x2qq4\") pod \"076e4ccc-ffeb-4578-accb-47c3efee8b71\" (UID: \"076e4ccc-ffeb-4578-accb-47c3efee8b71\") " Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.642739 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/076e4ccc-ffeb-4578-accb-47c3efee8b71-host\") pod \"076e4ccc-ffeb-4578-accb-47c3efee8b71\" (UID: \"076e4ccc-ffeb-4578-accb-47c3efee8b71\") " Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.643217 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/076e4ccc-ffeb-4578-accb-47c3efee8b71-host" (OuterVolumeSpecName: "host") pod "076e4ccc-ffeb-4578-accb-47c3efee8b71" (UID: "076e4ccc-ffeb-4578-accb-47c3efee8b71"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.647597 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/076e4ccc-ffeb-4578-accb-47c3efee8b71-kube-api-access-x2qq4" (OuterVolumeSpecName: "kube-api-access-x2qq4") pod "076e4ccc-ffeb-4578-accb-47c3efee8b71" (UID: "076e4ccc-ffeb-4578-accb-47c3efee8b71"). InnerVolumeSpecName "kube-api-access-x2qq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.745381 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2qq4\" (UniqueName: \"kubernetes.io/projected/076e4ccc-ffeb-4578-accb-47c3efee8b71-kube-api-access-x2qq4\") on node \"crc\" DevicePath \"\"" Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.745420 4624 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/076e4ccc-ffeb-4578-accb-47c3efee8b71-host\") on node \"crc\" DevicePath \"\"" Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.792889 4624 scope.go:117] "RemoveContainer" containerID="d4dd80c4e9a83ec1ca19be68151ad2820afc4c539c743322aa09cbee5ea79bb7" Oct 08 17:34:33 crc kubenswrapper[4624]: E1008 17:34:33.802419 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4dd80c4e9a83ec1ca19be68151ad2820afc4c539c743322aa09cbee5ea79bb7\": container with ID starting with d4dd80c4e9a83ec1ca19be68151ad2820afc4c539c743322aa09cbee5ea79bb7 not found: ID does not exist" containerID="d4dd80c4e9a83ec1ca19be68151ad2820afc4c539c743322aa09cbee5ea79bb7" Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.802470 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4dd80c4e9a83ec1ca19be68151ad2820afc4c539c743322aa09cbee5ea79bb7"} err="failed to get container status \"d4dd80c4e9a83ec1ca19be68151ad2820afc4c539c743322aa09cbee5ea79bb7\": rpc error: code = NotFound desc = could not find container \"d4dd80c4e9a83ec1ca19be68151ad2820afc4c539c743322aa09cbee5ea79bb7\": container with ID starting with d4dd80c4e9a83ec1ca19be68151ad2820afc4c539c743322aa09cbee5ea79bb7 not found: ID does not exist" Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.802507 4624 scope.go:117] "RemoveContainer" containerID="949f65ea08f170dfb9b5f8210b5f9d4fc9af06d9b5d263483872f2d820524199" Oct 08 17:34:33 crc kubenswrapper[4624]: E1008 17:34:33.804337 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"949f65ea08f170dfb9b5f8210b5f9d4fc9af06d9b5d263483872f2d820524199\": container with ID starting with 949f65ea08f170dfb9b5f8210b5f9d4fc9af06d9b5d263483872f2d820524199 not found: ID does not exist" containerID="949f65ea08f170dfb9b5f8210b5f9d4fc9af06d9b5d263483872f2d820524199" Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.804405 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"949f65ea08f170dfb9b5f8210b5f9d4fc9af06d9b5d263483872f2d820524199"} err="failed to get container status \"949f65ea08f170dfb9b5f8210b5f9d4fc9af06d9b5d263483872f2d820524199\": rpc error: code = NotFound desc = could not find container \"949f65ea08f170dfb9b5f8210b5f9d4fc9af06d9b5d263483872f2d820524199\": container with ID starting with 949f65ea08f170dfb9b5f8210b5f9d4fc9af06d9b5d263483872f2d820524199 not found: ID does not exist" Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.804442 4624 scope.go:117] "RemoveContainer" containerID="9772ead122930dff0e70d03a23088d18872076e084af5603b13f1a1ec468d465" Oct 08 17:34:33 crc kubenswrapper[4624]: E1008 17:34:33.805305 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9772ead122930dff0e70d03a23088d18872076e084af5603b13f1a1ec468d465\": container with ID starting with 
9772ead122930dff0e70d03a23088d18872076e084af5603b13f1a1ec468d465 not found: ID does not exist" containerID="9772ead122930dff0e70d03a23088d18872076e084af5603b13f1a1ec468d465" Oct 08 17:34:33 crc kubenswrapper[4624]: I1008 17:34:33.805334 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9772ead122930dff0e70d03a23088d18872076e084af5603b13f1a1ec468d465"} err="failed to get container status \"9772ead122930dff0e70d03a23088d18872076e084af5603b13f1a1ec468d465\": rpc error: code = NotFound desc = could not find container \"9772ead122930dff0e70d03a23088d18872076e084af5603b13f1a1ec468d465\": container with ID starting with 9772ead122930dff0e70d03a23088d18872076e084af5603b13f1a1ec468d465 not found: ID does not exist" Oct 08 17:34:34 crc kubenswrapper[4624]: I1008 17:34:34.480725 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l49pj/crc-debug-zw9b7" Oct 08 17:34:34 crc kubenswrapper[4624]: I1008 17:34:34.480947 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l49pj/crc-debug-zw9b7" event={"ID":"076e4ccc-ffeb-4578-accb-47c3efee8b71","Type":"ContainerDied","Data":"0788aafbce30eaf0207abb77f2646d915f812a708fd314223fffa329dc8b0565"} Oct 08 17:34:34 crc kubenswrapper[4624]: I1008 17:34:34.480990 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0788aafbce30eaf0207abb77f2646d915f812a708fd314223fffa329dc8b0565" Oct 08 17:34:35 crc kubenswrapper[4624]: I1008 17:34:35.478819 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d5b3b7a-c372-4247-b8fe-7d8149b04c96" path="/var/lib/kubelet/pods/6d5b3b7a-c372-4247-b8fe-7d8149b04c96/volumes" Oct 08 17:34:41 crc kubenswrapper[4624]: I1008 17:34:41.495786 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l49pj/crc-debug-zw9b7"] Oct 08 17:34:41 crc kubenswrapper[4624]: I1008 17:34:41.510242 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l49pj/crc-debug-zw9b7"] Oct 08 17:34:42 crc kubenswrapper[4624]: I1008 17:34:42.961680 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l49pj/crc-debug-cj9vf"] Oct 08 17:34:42 crc kubenswrapper[4624]: E1008 17:34:42.962257 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5b3b7a-c372-4247-b8fe-7d8149b04c96" containerName="registry-server" Oct 08 17:34:42 crc kubenswrapper[4624]: I1008 17:34:42.962272 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5b3b7a-c372-4247-b8fe-7d8149b04c96" containerName="registry-server" Oct 08 17:34:42 crc kubenswrapper[4624]: E1008 17:34:42.962327 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="076e4ccc-ffeb-4578-accb-47c3efee8b71" containerName="container-00" Oct 08 17:34:42 crc kubenswrapper[4624]: I1008 17:34:42.962335 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="076e4ccc-ffeb-4578-accb-47c3efee8b71" containerName="container-00" Oct 08 17:34:42 crc kubenswrapper[4624]: E1008 17:34:42.962354 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5b3b7a-c372-4247-b8fe-7d8149b04c96" containerName="extract-content" Oct 08 17:34:42 crc kubenswrapper[4624]: I1008 17:34:42.962360 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5b3b7a-c372-4247-b8fe-7d8149b04c96" containerName="extract-content" Oct 08 17:34:42 crc kubenswrapper[4624]: E1008 17:34:42.962378 4624 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6d5b3b7a-c372-4247-b8fe-7d8149b04c96" containerName="extract-utilities" Oct 08 17:34:42 crc kubenswrapper[4624]: I1008 17:34:42.962385 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5b3b7a-c372-4247-b8fe-7d8149b04c96" containerName="extract-utilities" Oct 08 17:34:42 crc kubenswrapper[4624]: I1008 17:34:42.962603 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="076e4ccc-ffeb-4578-accb-47c3efee8b71" containerName="container-00" Oct 08 17:34:42 crc kubenswrapper[4624]: I1008 17:34:42.962663 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d5b3b7a-c372-4247-b8fe-7d8149b04c96" containerName="registry-server" Oct 08 17:34:42 crc kubenswrapper[4624]: I1008 17:34:42.963561 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l49pj/crc-debug-cj9vf" Oct 08 17:34:42 crc kubenswrapper[4624]: I1008 17:34:42.967204 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-l49pj"/"default-dockercfg-vm864" Oct 08 17:34:43 crc kubenswrapper[4624]: I1008 17:34:43.156784 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qblg\" (UniqueName: \"kubernetes.io/projected/256c3921-d28a-45ea-9cd4-48440a58bfa8-kube-api-access-5qblg\") pod \"crc-debug-cj9vf\" (UID: \"256c3921-d28a-45ea-9cd4-48440a58bfa8\") " pod="openshift-must-gather-l49pj/crc-debug-cj9vf" Oct 08 17:34:43 crc kubenswrapper[4624]: I1008 17:34:43.157545 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/256c3921-d28a-45ea-9cd4-48440a58bfa8-host\") pod \"crc-debug-cj9vf\" (UID: \"256c3921-d28a-45ea-9cd4-48440a58bfa8\") " pod="openshift-must-gather-l49pj/crc-debug-cj9vf" Oct 08 17:34:43 crc kubenswrapper[4624]: I1008 17:34:43.259405 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qblg\" (UniqueName: \"kubernetes.io/projected/256c3921-d28a-45ea-9cd4-48440a58bfa8-kube-api-access-5qblg\") pod \"crc-debug-cj9vf\" (UID: \"256c3921-d28a-45ea-9cd4-48440a58bfa8\") " pod="openshift-must-gather-l49pj/crc-debug-cj9vf" Oct 08 17:34:43 crc kubenswrapper[4624]: I1008 17:34:43.259669 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/256c3921-d28a-45ea-9cd4-48440a58bfa8-host\") pod \"crc-debug-cj9vf\" (UID: \"256c3921-d28a-45ea-9cd4-48440a58bfa8\") " pod="openshift-must-gather-l49pj/crc-debug-cj9vf" Oct 08 17:34:43 crc kubenswrapper[4624]: I1008 17:34:43.259851 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/256c3921-d28a-45ea-9cd4-48440a58bfa8-host\") pod \"crc-debug-cj9vf\" (UID: \"256c3921-d28a-45ea-9cd4-48440a58bfa8\") " pod="openshift-must-gather-l49pj/crc-debug-cj9vf" Oct 08 17:34:43 crc kubenswrapper[4624]: I1008 17:34:43.282574 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qblg\" (UniqueName: \"kubernetes.io/projected/256c3921-d28a-45ea-9cd4-48440a58bfa8-kube-api-access-5qblg\") pod \"crc-debug-cj9vf\" (UID: \"256c3921-d28a-45ea-9cd4-48440a58bfa8\") " pod="openshift-must-gather-l49pj/crc-debug-cj9vf" Oct 08 17:34:43 crc kubenswrapper[4624]: I1008 17:34:43.299820 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l49pj/crc-debug-cj9vf" Oct 08 17:34:43 crc kubenswrapper[4624]: W1008 17:34:43.343830 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod256c3921_d28a_45ea_9cd4_48440a58bfa8.slice/crio-60403eef0d783f241e03df76a2dc07fa55aa0802621903367fafcb2da2768340 WatchSource:0}: Error finding container 60403eef0d783f241e03df76a2dc07fa55aa0802621903367fafcb2da2768340: Status 404 returned error can't find the container with id 60403eef0d783f241e03df76a2dc07fa55aa0802621903367fafcb2da2768340 Oct 08 17:34:43 crc kubenswrapper[4624]: I1008 17:34:43.521728 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="076e4ccc-ffeb-4578-accb-47c3efee8b71" path="/var/lib/kubelet/pods/076e4ccc-ffeb-4578-accb-47c3efee8b71/volumes" Oct 08 17:34:43 crc kubenswrapper[4624]: I1008 17:34:43.578337 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l49pj/crc-debug-cj9vf" event={"ID":"256c3921-d28a-45ea-9cd4-48440a58bfa8","Type":"ContainerStarted","Data":"60403eef0d783f241e03df76a2dc07fa55aa0802621903367fafcb2da2768340"} Oct 08 17:34:44 crc kubenswrapper[4624]: I1008 17:34:44.594423 4624 generic.go:334] "Generic (PLEG): container finished" podID="256c3921-d28a-45ea-9cd4-48440a58bfa8" containerID="d20ddb0eebeab6ec4bfae81a950e97234aa53d2823549f6152309ec89b10cd4e" exitCode=0 Oct 08 17:34:44 crc kubenswrapper[4624]: I1008 17:34:44.594542 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l49pj/crc-debug-cj9vf" event={"ID":"256c3921-d28a-45ea-9cd4-48440a58bfa8","Type":"ContainerDied","Data":"d20ddb0eebeab6ec4bfae81a950e97234aa53d2823549f6152309ec89b10cd4e"} Oct 08 17:34:44 crc kubenswrapper[4624]: I1008 17:34:44.648084 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l49pj/crc-debug-cj9vf"] Oct 08 17:34:44 crc kubenswrapper[4624]: I1008 17:34:44.665287 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l49pj/crc-debug-cj9vf"] Oct 08 17:34:45 crc kubenswrapper[4624]: I1008 17:34:45.748685 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l49pj/crc-debug-cj9vf" Oct 08 17:34:45 crc kubenswrapper[4624]: I1008 17:34:45.920839 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qblg\" (UniqueName: \"kubernetes.io/projected/256c3921-d28a-45ea-9cd4-48440a58bfa8-kube-api-access-5qblg\") pod \"256c3921-d28a-45ea-9cd4-48440a58bfa8\" (UID: \"256c3921-d28a-45ea-9cd4-48440a58bfa8\") " Oct 08 17:34:45 crc kubenswrapper[4624]: I1008 17:34:45.921097 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/256c3921-d28a-45ea-9cd4-48440a58bfa8-host\") pod \"256c3921-d28a-45ea-9cd4-48440a58bfa8\" (UID: \"256c3921-d28a-45ea-9cd4-48440a58bfa8\") " Oct 08 17:34:45 crc kubenswrapper[4624]: I1008 17:34:45.921240 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/256c3921-d28a-45ea-9cd4-48440a58bfa8-host" (OuterVolumeSpecName: "host") pod "256c3921-d28a-45ea-9cd4-48440a58bfa8" (UID: "256c3921-d28a-45ea-9cd4-48440a58bfa8"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 17:34:45 crc kubenswrapper[4624]: I1008 17:34:45.921714 4624 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/256c3921-d28a-45ea-9cd4-48440a58bfa8-host\") on node \"crc\" DevicePath \"\"" Oct 08 17:34:45 crc kubenswrapper[4624]: I1008 17:34:45.934012 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/256c3921-d28a-45ea-9cd4-48440a58bfa8-kube-api-access-5qblg" (OuterVolumeSpecName: "kube-api-access-5qblg") pod "256c3921-d28a-45ea-9cd4-48440a58bfa8" (UID: "256c3921-d28a-45ea-9cd4-48440a58bfa8"). InnerVolumeSpecName "kube-api-access-5qblg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 17:34:46 crc kubenswrapper[4624]: I1008 17:34:46.023340 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qblg\" (UniqueName: \"kubernetes.io/projected/256c3921-d28a-45ea-9cd4-48440a58bfa8-kube-api-access-5qblg\") on node \"crc\" DevicePath \"\"" Oct 08 17:34:46 crc kubenswrapper[4624]: I1008 17:34:46.625656 4624 scope.go:117] "RemoveContainer" containerID="d20ddb0eebeab6ec4bfae81a950e97234aa53d2823549f6152309ec89b10cd4e" Oct 08 17:34:46 crc kubenswrapper[4624]: I1008 17:34:46.625741 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l49pj/crc-debug-cj9vf" Oct 08 17:34:46 crc kubenswrapper[4624]: I1008 17:34:46.934193 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj_6a24c625-ff51-434c-9db2-3677640cbdef/util/0.log" Oct 08 17:34:47 crc kubenswrapper[4624]: I1008 17:34:47.136184 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj_6a24c625-ff51-434c-9db2-3677640cbdef/util/0.log" Oct 08 17:34:47 crc kubenswrapper[4624]: I1008 17:34:47.170626 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj_6a24c625-ff51-434c-9db2-3677640cbdef/pull/0.log" Oct 08 17:34:47 crc kubenswrapper[4624]: I1008 17:34:47.210678 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj_6a24c625-ff51-434c-9db2-3677640cbdef/pull/0.log" Oct 08 17:34:47 crc kubenswrapper[4624]: I1008 17:34:47.415346 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj_6a24c625-ff51-434c-9db2-3677640cbdef/extract/0.log" Oct 08 17:34:47 crc kubenswrapper[4624]: I1008 17:34:47.457243 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj_6a24c625-ff51-434c-9db2-3677640cbdef/pull/0.log" Oct 08 17:34:47 crc kubenswrapper[4624]: I1008 17:34:47.466344 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109" Oct 08 17:34:47 crc kubenswrapper[4624]: E1008 17:34:47.466685 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:34:47 crc kubenswrapper[4624]: I1008 17:34:47.481406 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="256c3921-d28a-45ea-9cd4-48440a58bfa8" path="/var/lib/kubelet/pods/256c3921-d28a-45ea-9cd4-48440a58bfa8/volumes" Oct 08 17:34:47 crc kubenswrapper[4624]: I1008 17:34:47.516814 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_515c63aacd68ba9c26723f7b91d07b0387b4feae6b66720d5334dd0aa95bmxj_6a24c625-ff51-434c-9db2-3677640cbdef/util/0.log" Oct 08 17:34:47 crc kubenswrapper[4624]: I1008 17:34:47.697159 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64f84fcdbb-qncx7_afff9e1e-6c7c-42b8-8099-6817f813ddb5/kube-rbac-proxy/0.log" Oct 08 17:34:47 crc kubenswrapper[4624]: I1008 17:34:47.770245 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64f84fcdbb-qncx7_afff9e1e-6c7c-42b8-8099-6817f813ddb5/manager/0.log" Oct 08 17:34:47 crc kubenswrapper[4624]: I1008 17:34:47.811751 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-59cdc64769-rdwmx_7e4bdb15-7f2c-4a03-8882-00312974ef50/kube-rbac-proxy/0.log" Oct 08 17:34:48 crc kubenswrapper[4624]: I1008 17:34:48.050193 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-59cdc64769-rdwmx_7e4bdb15-7f2c-4a03-8882-00312974ef50/manager/0.log" Oct 08 17:34:48 crc kubenswrapper[4624]: I1008 17:34:48.094520 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-687df44cdb-ntt7z_c1f040ec-db8e-42de-8d6c-7758f2f45ecc/kube-rbac-proxy/0.log" Oct 08 17:34:48 crc kubenswrapper[4624]: I1008 17:34:48.193344 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-687df44cdb-ntt7z_c1f040ec-db8e-42de-8d6c-7758f2f45ecc/manager/0.log" Oct 08 17:34:48 crc kubenswrapper[4624]: I1008 17:34:48.345143 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7bb46cd7d-bxlpf_d14acd1f-d497-4c71-8e2a-24c991118c01/kube-rbac-proxy/0.log" Oct 08 17:34:48 crc kubenswrapper[4624]: I1008 17:34:48.461624 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7bb46cd7d-bxlpf_d14acd1f-d497-4c71-8e2a-24c991118c01/manager/0.log" Oct 08 17:34:48 crc kubenswrapper[4624]: I1008 17:34:48.564948 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6d9967f8dd-zkxcm_0a8ab8f3-13b9-4f95-b540-ea49d2c5a261/kube-rbac-proxy/0.log" Oct 08 17:34:48 crc kubenswrapper[4624]: I1008 17:34:48.694096 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6d9967f8dd-zkxcm_0a8ab8f3-13b9-4f95-b540-ea49d2c5a261/manager/0.log" Oct 08 17:34:48 crc kubenswrapper[4624]: I1008 17:34:48.756521 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d74794d9b-5nntt_9dbf7483-c352-4fd8-b0e0-96acf41616b0/kube-rbac-proxy/0.log" Oct 08 17:34:48 crc kubenswrapper[4624]: I1008 17:34:48.892887 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d74794d9b-5nntt_9dbf7483-c352-4fd8-b0e0-96acf41616b0/manager/0.log" Oct 08 17:34:48 crc kubenswrapper[4624]: I1008 17:34:48.990048 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-585fc5b659-bhvnd_6b00e056-2cc0-4eb2-85f9-8fc7197dc67a/kube-rbac-proxy/0.log" Oct 08 17:34:49 crc kubenswrapper[4624]: I1008 17:34:49.207848 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-585fc5b659-bhvnd_6b00e056-2cc0-4eb2-85f9-8fc7197dc67a/manager/0.log" Oct 08 17:34:49 crc kubenswrapper[4624]: I1008 17:34:49.232247 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-74cb5cbc49-fnwtr_d957903e-a551-41a6-8360-9af30306414f/manager/0.log" Oct 08 17:34:49 crc kubenswrapper[4624]: I1008 17:34:49.321812 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-74cb5cbc49-fnwtr_d957903e-a551-41a6-8360-9af30306414f/kube-rbac-proxy/0.log" Oct 08 17:34:49 crc kubenswrapper[4624]: I1008 17:34:49.443906 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-ddb98f99b-zljbd_afba2503-8832-4a9f-8246-390f7ae79b71/kube-rbac-proxy/0.log" Oct 08 17:34:49 crc kubenswrapper[4624]: I1008 17:34:49.624207 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-ddb98f99b-zljbd_afba2503-8832-4a9f-8246-390f7ae79b71/manager/0.log" Oct 08 17:34:49 crc kubenswrapper[4624]: I1008 17:34:49.693066 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59578bc799-q7qf9_c23e992e-ee8f-4d3a-9a3f-9f35a77f60fe/kube-rbac-proxy/0.log" Oct 08 17:34:49 crc kubenswrapper[4624]: I1008 17:34:49.988910 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59578bc799-q7qf9_c23e992e-ee8f-4d3a-9a3f-9f35a77f60fe/manager/0.log" Oct 08 17:34:50 crc kubenswrapper[4624]: I1008 17:34:50.115747 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5777b4f897-fr9v7_5e88daf4-d403-4ba8-827f-a9972c5e40bf/kube-rbac-proxy/0.log" Oct 08 17:34:50 crc kubenswrapper[4624]: I1008 17:34:50.204508 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5777b4f897-fr9v7_5e88daf4-d403-4ba8-827f-a9972c5e40bf/manager/0.log" Oct 08 17:34:50 crc kubenswrapper[4624]: I1008 17:34:50.313816 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-797d478b46-tg58g_25e6b130-d820-475c-aae6-bed0dfbd0d0f/kube-rbac-proxy/0.log" Oct 08 17:34:50 crc kubenswrapper[4624]: I1008 17:34:50.458479 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-797d478b46-tg58g_25e6b130-d820-475c-aae6-bed0dfbd0d0f/manager/0.log" Oct 08 17:34:50 crc kubenswrapper[4624]: I1008 17:34:50.572185 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57bb74c7bf-8zzlr_84583328-8cef-49aa-b812-6f550d1dd71f/kube-rbac-proxy/0.log" Oct 08 17:34:50 crc kubenswrapper[4624]: I1008 17:34:50.667449 4624 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57bb74c7bf-8zzlr_84583328-8cef-49aa-b812-6f550d1dd71f/manager/0.log" Oct 08 17:34:50 crc kubenswrapper[4624]: I1008 17:34:50.732375 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6d7c7ddf95-hm59s_46bfb8aa-1ae5-43b4-88e9-5175655832aa/kube-rbac-proxy/0.log" Oct 08 17:34:50 crc kubenswrapper[4624]: I1008 17:34:50.860754 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6d7c7ddf95-hm59s_46bfb8aa-1ae5-43b4-88e9-5175655832aa/manager/0.log" Oct 08 17:34:50 crc kubenswrapper[4624]: I1008 17:34:50.997077 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd_f4fff91a-a1f8-4e66-9955-005bfa78dfe6/kube-rbac-proxy/0.log" Oct 08 17:34:51 crc kubenswrapper[4624]: I1008 17:34:51.010581 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cc7fb757dpvwkd_f4fff91a-a1f8-4e66-9955-005bfa78dfe6/manager/0.log" Oct 08 17:34:51 crc kubenswrapper[4624]: I1008 17:34:51.180098 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7984bdc97c-5nw99_3b136221-5f6b-451f-807a-5b66f856daa4/kube-rbac-proxy/0.log" Oct 08 17:34:51 crc kubenswrapper[4624]: I1008 17:34:51.400071 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-74fc58d4cc-drvtb_d7a5f232-4275-4277-8ec9-112eaadf6f4d/kube-rbac-proxy/0.log" Oct 08 17:34:51 crc kubenswrapper[4624]: I1008 17:34:51.746176 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-74fc58d4cc-drvtb_d7a5f232-4275-4277-8ec9-112eaadf6f4d/operator/0.log" Oct 08 17:34:51 crc kubenswrapper[4624]: I1008 17:34:51.767201 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-9vmx2_7c422ddf-650f-4f8d-828e-12a834b70bab/registry-server/0.log" Oct 08 17:34:52 crc kubenswrapper[4624]: I1008 17:34:52.059766 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f96f8c84-s6lmj_657a98dc-f421-4483-9354-28eeb59bb8a0/kube-rbac-proxy/0.log" Oct 08 17:34:52 crc kubenswrapper[4624]: I1008 17:34:52.060802 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-664664cb68-5bwth_2e68c8fe-365a-4d66-bbbd-0cac98993f72/kube-rbac-proxy/0.log" Oct 08 17:34:52 crc kubenswrapper[4624]: I1008 17:34:52.268342 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f96f8c84-s6lmj_657a98dc-f421-4483-9354-28eeb59bb8a0/manager/0.log" Oct 08 17:34:52 crc kubenswrapper[4624]: I1008 17:34:52.310213 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-664664cb68-5bwth_2e68c8fe-365a-4d66-bbbd-0cac98993f72/manager/0.log" Oct 08 17:34:52 crc kubenswrapper[4624]: I1008 17:34:52.630354 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-dc5gv_5b5f14f4-2722-46d2-9aa4-958caf004e89/operator/0.log" Oct 08 17:34:52 crc kubenswrapper[4624]: I1008 
17:34:52.757625 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f4d5dfdc6-vzbcj_4fed85c8-e5c1-40db-9799-64b1705f9d86/kube-rbac-proxy/0.log" Oct 08 17:34:52 crc kubenswrapper[4624]: I1008 17:34:52.841578 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f4d5dfdc6-vzbcj_4fed85c8-e5c1-40db-9799-64b1705f9d86/manager/0.log" Oct 08 17:34:52 crc kubenswrapper[4624]: I1008 17:34:52.993343 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-775776c574-m7brx_2f4ed0ab-4cb6-4874-8c0d-cce4fdbfa7c9/kube-rbac-proxy/0.log" Oct 08 17:34:52 crc kubenswrapper[4624]: I1008 17:34:52.997515 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7984bdc97c-5nw99_3b136221-5f6b-451f-807a-5b66f856daa4/manager/0.log" Oct 08 17:34:53 crc kubenswrapper[4624]: I1008 17:34:53.066489 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-775776c574-m7brx_2f4ed0ab-4cb6-4874-8c0d-cce4fdbfa7c9/manager/0.log" Oct 08 17:34:53 crc kubenswrapper[4624]: I1008 17:34:53.172842 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-74665f6cdc-wklj4_de7ff9ef-39f5-4521-856d-28c2665e7893/kube-rbac-proxy/0.log" Oct 08 17:34:53 crc kubenswrapper[4624]: I1008 17:34:53.233027 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-74665f6cdc-wklj4_de7ff9ef-39f5-4521-856d-28c2665e7893/manager/0.log" Oct 08 17:34:53 crc kubenswrapper[4624]: I1008 17:34:53.290754 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5dd4499c96-l4cbt_60b85252-6e34-43c5-a048-52fe105f2f93/kube-rbac-proxy/0.log" Oct 08 17:34:53 crc kubenswrapper[4624]: I1008 17:34:53.379280 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5dd4499c96-l4cbt_60b85252-6e34-43c5-a048-52fe105f2f93/manager/0.log" Oct 08 17:35:01 crc kubenswrapper[4624]: I1008 17:35:01.468369 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109" Oct 08 17:35:01 crc kubenswrapper[4624]: E1008 17:35:01.469207 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:35:10 crc kubenswrapper[4624]: I1008 17:35:10.308888 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-hrd59_d3a80e27-d7fd-4b62-b5ae-9719c4f69655/control-plane-machine-set-operator/0.log" Oct 08 17:35:10 crc kubenswrapper[4624]: I1008 17:35:10.456973 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bbffn_100b758b-a285-49a0-a5ec-0b565dce5e1a/kube-rbac-proxy/0.log" Oct 08 17:35:10 crc kubenswrapper[4624]: I1008 17:35:10.540828 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bbffn_100b758b-a285-49a0-a5ec-0b565dce5e1a/machine-api-operator/0.log" Oct 08 17:35:13 crc kubenswrapper[4624]: I1008 17:35:13.466149 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109" Oct 08 17:35:13 crc kubenswrapper[4624]: E1008 17:35:13.467720 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:35:24 crc kubenswrapper[4624]: I1008 17:35:24.119559 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-nh2ht_1f6ac2d9-a161-44b3-9e1d-8ba6eca7fa85/cert-manager-controller/0.log" Oct 08 17:35:24 crc kubenswrapper[4624]: I1008 17:35:24.272586 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-6b7n8_733c04ac-a5c1-44e6-8314-800c327491f9/cert-manager-cainjector/0.log" Oct 08 17:35:24 crc kubenswrapper[4624]: I1008 17:35:24.505507 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-pxvg9_973fdfa2-38af-4052-9fe8-d9657c1be807/cert-manager-webhook/0.log" Oct 08 17:35:26 crc kubenswrapper[4624]: I1008 17:35:26.465671 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109" Oct 08 17:35:26 crc kubenswrapper[4624]: E1008 17:35:26.466559 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zfrv8_openshift-machine-config-operator(8a106d69-d531-4ee4-a9ed-505988ebd24d)\"" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" Oct 08 17:35:39 crc kubenswrapper[4624]: I1008 17:35:39.229855 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-tb675_cc684d95-c43f-4d51-abca-fe8d1719d548/nmstate-console-plugin/0.log" Oct 08 17:35:39 crc kubenswrapper[4624]: I1008 17:35:39.473070 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-qfgz8_e42689fb-9ba3-4ab8-8e3d-8ca52a091c49/nmstate-handler/0.log" Oct 08 17:35:39 crc kubenswrapper[4624]: I1008 17:35:39.517662 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-8bkj8_96e0b6b6-38d2-491f-ae9a-5be4860d1da0/kube-rbac-proxy/0.log" Oct 08 17:35:39 crc kubenswrapper[4624]: I1008 17:35:39.630727 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-8bkj8_96e0b6b6-38d2-491f-ae9a-5be4860d1da0/nmstate-metrics/0.log" Oct 08 17:35:39 crc kubenswrapper[4624]: I1008 17:35:39.794217 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-626pz_caa42ed9-a0c0-4e5b-836a-9bfcd09c439e/nmstate-operator/0.log" Oct 08 17:35:39 crc kubenswrapper[4624]: I1008 17:35:39.912400 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-jbwmj_d82b9420-16a1-4ddf-9239-7b649b9429d2/nmstate-webhook/0.log" Oct 08 17:35:40 crc kubenswrapper[4624]: I1008 17:35:40.465981 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109" Oct 08 17:35:41 crc kubenswrapper[4624]: I1008 17:35:41.142602 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"a1e925ebcac7e612cb0a08ded8f75df736b1d2531b29d752aeccd8f120274c36"} Oct 08 17:35:58 crc kubenswrapper[4624]: I1008 17:35:58.041318 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-rm4d2_43a3eaca-a2c8-4508-996d-b6c977997ea7/kube-rbac-proxy/0.log" Oct 08 17:35:58 crc kubenswrapper[4624]: I1008 17:35:58.076111 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-rm4d2_43a3eaca-a2c8-4508-996d-b6c977997ea7/controller/0.log" Oct 08 17:35:58 crc kubenswrapper[4624]: I1008 17:35:58.573407 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-frr-files/0.log" Oct 08 17:35:58 crc kubenswrapper[4624]: I1008 17:35:58.794539 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-frr-files/0.log" Oct 08 17:35:58 crc kubenswrapper[4624]: I1008 17:35:58.828110 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-reloader/0.log" Oct 08 17:35:58 crc kubenswrapper[4624]: I1008 17:35:58.843902 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-metrics/0.log" Oct 08 17:35:58 crc kubenswrapper[4624]: I1008 17:35:58.903136 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-reloader/0.log" Oct 08 17:35:59 crc kubenswrapper[4624]: I1008 17:35:59.250441 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-metrics/0.log" Oct 08 17:35:59 crc kubenswrapper[4624]: I1008 17:35:59.252083 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-reloader/0.log" Oct 08 17:35:59 crc kubenswrapper[4624]: I1008 17:35:59.252951 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-metrics/0.log" Oct 08 17:35:59 crc kubenswrapper[4624]: I1008 17:35:59.275256 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-frr-files/0.log" Oct 08 17:35:59 crc kubenswrapper[4624]: I1008 17:35:59.543896 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-metrics/0.log" Oct 08 17:35:59 crc kubenswrapper[4624]: I1008 17:35:59.590167 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/controller/0.log" Oct 08 17:35:59 crc kubenswrapper[4624]: I1008 17:35:59.595715 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-frr-files/0.log" Oct 08 17:35:59 crc kubenswrapper[4624]: I1008 17:35:59.606052 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/cp-reloader/0.log" Oct 08 17:35:59 crc kubenswrapper[4624]: I1008 17:35:59.766334 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/frr-metrics/0.log" Oct 08 17:35:59 crc kubenswrapper[4624]: I1008 17:35:59.787574 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/kube-rbac-proxy/0.log" Oct 08 17:35:59 crc kubenswrapper[4624]: I1008 17:35:59.950028 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/kube-rbac-proxy-frr/0.log" Oct 08 17:36:00 crc kubenswrapper[4624]: I1008 17:36:00.103118 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/reloader/0.log" Oct 08 17:36:00 crc kubenswrapper[4624]: I1008 17:36:00.210082 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-bf8cm_e36921e5-1d1e-420d-9dc1-f6aaad1bf904/frr-k8s-webhook-server/0.log" Oct 08 17:36:00 crc kubenswrapper[4624]: I1008 17:36:00.604218 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-bb6fb947b-d4j7c_38a79520-0ffe-4c51-8cba-117b74d7e7c8/manager/0.log" Oct 08 17:36:00 crc kubenswrapper[4624]: I1008 17:36:00.852081 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7df686497b-8bpg5_b8dd1cb9-7393-4033-abac-22f7aafa0235/webhook-server/0.log" Oct 08 17:36:01 crc kubenswrapper[4624]: I1008 17:36:01.147772 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vs5mh_25542429-7dd5-4d22-a273-38386ed868ac/kube-rbac-proxy/0.log" Oct 08 17:36:01 crc kubenswrapper[4624]: I1008 17:36:01.930227 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vs5mh_25542429-7dd5-4d22-a273-38386ed868ac/speaker/0.log" Oct 08 17:36:02 crc kubenswrapper[4624]: I1008 17:36:02.401556 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j8nx2_34800d6c-2287-427a-93c1-b227a3e4734d/frr/0.log" Oct 08 17:36:15 crc kubenswrapper[4624]: I1008 17:36:15.592309 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs_d35debc8-9835-42bc-833e-7412681e9a4d/util/0.log" Oct 08 17:36:15 crc kubenswrapper[4624]: I1008 17:36:15.840567 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs_d35debc8-9835-42bc-833e-7412681e9a4d/pull/0.log" Oct 08 17:36:15 crc kubenswrapper[4624]: I1008 17:36:15.869992 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs_d35debc8-9835-42bc-833e-7412681e9a4d/util/0.log" Oct 08 17:36:15 crc kubenswrapper[4624]: I1008 17:36:15.911348 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs_d35debc8-9835-42bc-833e-7412681e9a4d/pull/0.log" Oct 08 17:36:16 crc kubenswrapper[4624]: I1008 17:36:16.041939 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs_d35debc8-9835-42bc-833e-7412681e9a4d/util/0.log" Oct 08 17:36:16 crc kubenswrapper[4624]: I1008 17:36:16.111309 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs_d35debc8-9835-42bc-833e-7412681e9a4d/extract/0.log" Oct 08 17:36:16 crc kubenswrapper[4624]: I1008 17:36:16.146820 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bq8xs_d35debc8-9835-42bc-833e-7412681e9a4d/pull/0.log" Oct 08 17:36:16 crc kubenswrapper[4624]: I1008 17:36:16.277517 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5699q_cee06cd4-6d0b-4f4c-8734-b929607ec920/extract-utilities/0.log" Oct 08 17:36:16 crc kubenswrapper[4624]: I1008 17:36:16.480805 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5699q_cee06cd4-6d0b-4f4c-8734-b929607ec920/extract-utilities/0.log" Oct 08 17:36:16 crc kubenswrapper[4624]: I1008 17:36:16.496036 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5699q_cee06cd4-6d0b-4f4c-8734-b929607ec920/extract-content/0.log" Oct 08 17:36:16 crc kubenswrapper[4624]: I1008 17:36:16.531105 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5699q_cee06cd4-6d0b-4f4c-8734-b929607ec920/extract-content/0.log" Oct 08 17:36:16 crc kubenswrapper[4624]: I1008 17:36:16.721125 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5699q_cee06cd4-6d0b-4f4c-8734-b929607ec920/extract-content/0.log" Oct 08 17:36:16 crc kubenswrapper[4624]: I1008 17:36:16.736576 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5699q_cee06cd4-6d0b-4f4c-8734-b929607ec920/extract-utilities/0.log" Oct 08 17:36:16 crc kubenswrapper[4624]: I1008 17:36:16.884954 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5699q_cee06cd4-6d0b-4f4c-8734-b929607ec920/registry-server/0.log" Oct 08 17:36:16 crc kubenswrapper[4624]: I1008 17:36:16.963443 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdp9b_d746ff3a-6adf-490a-a8fc-a2cbf477ff25/extract-utilities/0.log" Oct 08 17:36:17 crc kubenswrapper[4624]: I1008 17:36:17.202645 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdp9b_d746ff3a-6adf-490a-a8fc-a2cbf477ff25/extract-utilities/0.log" Oct 08 17:36:17 crc kubenswrapper[4624]: I1008 17:36:17.247744 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdp9b_d746ff3a-6adf-490a-a8fc-a2cbf477ff25/extract-content/0.log" Oct 08 17:36:17 crc kubenswrapper[4624]: I1008 17:36:17.259501 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdp9b_d746ff3a-6adf-490a-a8fc-a2cbf477ff25/extract-content/0.log" Oct 08 17:36:17 
crc kubenswrapper[4624]: I1008 17:36:17.514469 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdp9b_d746ff3a-6adf-490a-a8fc-a2cbf477ff25/extract-utilities/0.log" Oct 08 17:36:17 crc kubenswrapper[4624]: I1008 17:36:17.583075 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdp9b_d746ff3a-6adf-490a-a8fc-a2cbf477ff25/extract-content/0.log" Oct 08 17:36:17 crc kubenswrapper[4624]: I1008 17:36:17.786222 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb_7412e80b-7f98-4f46-8223-2389077e9175/util/0.log" Oct 08 17:36:18 crc kubenswrapper[4624]: I1008 17:36:18.212693 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb_7412e80b-7f98-4f46-8223-2389077e9175/pull/0.log" Oct 08 17:36:18 crc kubenswrapper[4624]: I1008 17:36:18.329569 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb_7412e80b-7f98-4f46-8223-2389077e9175/pull/0.log" Oct 08 17:36:18 crc kubenswrapper[4624]: I1008 17:36:18.374062 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb_7412e80b-7f98-4f46-8223-2389077e9175/util/0.log" Oct 08 17:36:18 crc kubenswrapper[4624]: I1008 17:36:18.604830 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb_7412e80b-7f98-4f46-8223-2389077e9175/util/0.log" Oct 08 17:36:18 crc kubenswrapper[4624]: I1008 17:36:18.656524 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb_7412e80b-7f98-4f46-8223-2389077e9175/extract/0.log" Oct 08 17:36:18 crc kubenswrapper[4624]: I1008 17:36:18.725434 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxx9sb_7412e80b-7f98-4f46-8223-2389077e9175/pull/0.log" Oct 08 17:36:18 crc kubenswrapper[4624]: I1008 17:36:18.783346 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdp9b_d746ff3a-6adf-490a-a8fc-a2cbf477ff25/registry-server/0.log" Oct 08 17:36:18 crc kubenswrapper[4624]: I1008 17:36:18.956134 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cwbr6_348b94bb-ae57-4f52-8592-53abc49b97d0/marketplace-operator/0.log" Oct 08 17:36:19 crc kubenswrapper[4624]: I1008 17:36:19.040442 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ld57t_4b07dc5b-db1a-4f6b-be6c-660723b543b3/extract-utilities/0.log" Oct 08 17:36:19 crc kubenswrapper[4624]: I1008 17:36:19.263151 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ld57t_4b07dc5b-db1a-4f6b-be6c-660723b543b3/extract-utilities/0.log" Oct 08 17:36:19 crc kubenswrapper[4624]: I1008 17:36:19.291829 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ld57t_4b07dc5b-db1a-4f6b-be6c-660723b543b3/extract-content/0.log" Oct 08 17:36:19 crc kubenswrapper[4624]: I1008 17:36:19.368952 4624 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ld57t_4b07dc5b-db1a-4f6b-be6c-660723b543b3/extract-content/0.log" Oct 08 17:36:19 crc kubenswrapper[4624]: I1008 17:36:19.517254 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ld57t_4b07dc5b-db1a-4f6b-be6c-660723b543b3/extract-utilities/0.log" Oct 08 17:36:19 crc kubenswrapper[4624]: I1008 17:36:19.540242 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ld57t_4b07dc5b-db1a-4f6b-be6c-660723b543b3/extract-content/0.log" Oct 08 17:36:19 crc kubenswrapper[4624]: I1008 17:36:19.787990 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n7zcs_8c4b2af3-b2f0-4f10-9b2b-81a83483be29/extract-utilities/0.log" Oct 08 17:36:19 crc kubenswrapper[4624]: I1008 17:36:19.980774 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ld57t_4b07dc5b-db1a-4f6b-be6c-660723b543b3/registry-server/0.log" Oct 08 17:36:20 crc kubenswrapper[4624]: I1008 17:36:20.037011 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n7zcs_8c4b2af3-b2f0-4f10-9b2b-81a83483be29/extract-content/0.log" Oct 08 17:36:20 crc kubenswrapper[4624]: I1008 17:36:20.064612 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n7zcs_8c4b2af3-b2f0-4f10-9b2b-81a83483be29/extract-utilities/0.log" Oct 08 17:36:20 crc kubenswrapper[4624]: I1008 17:36:20.104510 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n7zcs_8c4b2af3-b2f0-4f10-9b2b-81a83483be29/extract-content/0.log" Oct 08 17:36:20 crc kubenswrapper[4624]: I1008 17:36:20.261161 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n7zcs_8c4b2af3-b2f0-4f10-9b2b-81a83483be29/extract-utilities/0.log" Oct 08 17:36:20 crc kubenswrapper[4624]: I1008 17:36:20.319642 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n7zcs_8c4b2af3-b2f0-4f10-9b2b-81a83483be29/extract-content/0.log" Oct 08 17:36:21 crc kubenswrapper[4624]: I1008 17:36:21.281339 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5ptb8"] Oct 08 17:36:21 crc kubenswrapper[4624]: E1008 17:36:21.310192 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="256c3921-d28a-45ea-9cd4-48440a58bfa8" containerName="container-00" Oct 08 17:36:21 crc kubenswrapper[4624]: I1008 17:36:21.310253 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="256c3921-d28a-45ea-9cd4-48440a58bfa8" containerName="container-00" Oct 08 17:36:21 crc kubenswrapper[4624]: I1008 17:36:21.312625 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="256c3921-d28a-45ea-9cd4-48440a58bfa8" containerName="container-00" Oct 08 17:36:21 crc kubenswrapper[4624]: I1008 17:36:21.323476 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5ptb8"] Oct 08 17:36:21 crc kubenswrapper[4624]: I1008 17:36:21.325051 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5ptb8" Oct 08 17:36:21 crc kubenswrapper[4624]: I1008 17:36:21.342125 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6188733-5e2b-41f5-a22e-7eebd63c8bde-utilities\") pod \"community-operators-5ptb8\" (UID: \"b6188733-5e2b-41f5-a22e-7eebd63c8bde\") " pod="openshift-marketplace/community-operators-5ptb8" Oct 08 17:36:21 crc kubenswrapper[4624]: I1008 17:36:21.342836 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6188733-5e2b-41f5-a22e-7eebd63c8bde-catalog-content\") pod \"community-operators-5ptb8\" (UID: \"b6188733-5e2b-41f5-a22e-7eebd63c8bde\") " pod="openshift-marketplace/community-operators-5ptb8" Oct 08 17:36:21 crc kubenswrapper[4624]: I1008 17:36:21.343377 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmlfw\" (UniqueName: \"kubernetes.io/projected/b6188733-5e2b-41f5-a22e-7eebd63c8bde-kube-api-access-dmlfw\") pod \"community-operators-5ptb8\" (UID: \"b6188733-5e2b-41f5-a22e-7eebd63c8bde\") " pod="openshift-marketplace/community-operators-5ptb8" Oct 08 17:36:21 crc kubenswrapper[4624]: I1008 17:36:21.444360 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6188733-5e2b-41f5-a22e-7eebd63c8bde-utilities\") pod \"community-operators-5ptb8\" (UID: \"b6188733-5e2b-41f5-a22e-7eebd63c8bde\") " pod="openshift-marketplace/community-operators-5ptb8" Oct 08 17:36:21 crc kubenswrapper[4624]: I1008 17:36:21.444433 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6188733-5e2b-41f5-a22e-7eebd63c8bde-catalog-content\") pod \"community-operators-5ptb8\" (UID: \"b6188733-5e2b-41f5-a22e-7eebd63c8bde\") " pod="openshift-marketplace/community-operators-5ptb8" Oct 08 17:36:21 crc kubenswrapper[4624]: I1008 17:36:21.444496 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmlfw\" (UniqueName: \"kubernetes.io/projected/b6188733-5e2b-41f5-a22e-7eebd63c8bde-kube-api-access-dmlfw\") pod \"community-operators-5ptb8\" (UID: \"b6188733-5e2b-41f5-a22e-7eebd63c8bde\") " pod="openshift-marketplace/community-operators-5ptb8" Oct 08 17:36:21 crc kubenswrapper[4624]: I1008 17:36:21.445161 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6188733-5e2b-41f5-a22e-7eebd63c8bde-utilities\") pod \"community-operators-5ptb8\" (UID: \"b6188733-5e2b-41f5-a22e-7eebd63c8bde\") " pod="openshift-marketplace/community-operators-5ptb8" Oct 08 17:36:21 crc kubenswrapper[4624]: I1008 17:36:21.445373 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6188733-5e2b-41f5-a22e-7eebd63c8bde-catalog-content\") pod \"community-operators-5ptb8\" (UID: \"b6188733-5e2b-41f5-a22e-7eebd63c8bde\") " pod="openshift-marketplace/community-operators-5ptb8" Oct 08 17:36:21 crc kubenswrapper[4624]: I1008 17:36:21.469761 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmlfw\" (UniqueName: \"kubernetes.io/projected/b6188733-5e2b-41f5-a22e-7eebd63c8bde-kube-api-access-dmlfw\") pod 
\"community-operators-5ptb8\" (UID: \"b6188733-5e2b-41f5-a22e-7eebd63c8bde\") " pod="openshift-marketplace/community-operators-5ptb8" Oct 08 17:36:21 crc kubenswrapper[4624]: I1008 17:36:21.662026 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5ptb8" Oct 08 17:36:21 crc kubenswrapper[4624]: I1008 17:36:21.774458 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n7zcs_8c4b2af3-b2f0-4f10-9b2b-81a83483be29/registry-server/0.log" Oct 08 17:36:22 crc kubenswrapper[4624]: I1008 17:36:22.984386 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5ptb8"] Oct 08 17:36:23 crc kubenswrapper[4624]: W1008 17:36:23.014654 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6188733_5e2b_41f5_a22e_7eebd63c8bde.slice/crio-7e927cd5b47643b61305116982c853f2316b24381b4bf398671f93f316846bbd WatchSource:0}: Error finding container 7e927cd5b47643b61305116982c853f2316b24381b4bf398671f93f316846bbd: Status 404 returned error can't find the container with id 7e927cd5b47643b61305116982c853f2316b24381b4bf398671f93f316846bbd Oct 08 17:36:23 crc kubenswrapper[4624]: I1008 17:36:23.561123 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5ptb8" event={"ID":"b6188733-5e2b-41f5-a22e-7eebd63c8bde","Type":"ContainerDied","Data":"f8ee6f77b8a72a0428c2edef25b1714a82d2033b63bad0e713dc051b5129942b"} Oct 08 17:36:23 crc kubenswrapper[4624]: I1008 17:36:23.562879 4624 generic.go:334] "Generic (PLEG): container finished" podID="b6188733-5e2b-41f5-a22e-7eebd63c8bde" containerID="f8ee6f77b8a72a0428c2edef25b1714a82d2033b63bad0e713dc051b5129942b" exitCode=0 Oct 08 17:36:23 crc kubenswrapper[4624]: I1008 17:36:23.563260 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5ptb8" event={"ID":"b6188733-5e2b-41f5-a22e-7eebd63c8bde","Type":"ContainerStarted","Data":"7e927cd5b47643b61305116982c853f2316b24381b4bf398671f93f316846bbd"} Oct 08 17:36:23 crc kubenswrapper[4624]: I1008 17:36:23.570524 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 17:36:25 crc kubenswrapper[4624]: I1008 17:36:25.584382 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5ptb8" event={"ID":"b6188733-5e2b-41f5-a22e-7eebd63c8bde","Type":"ContainerStarted","Data":"5d563485d38079d7614a4d32fcf0fba2fb0db9413e6e22cb1089fc9aceb21a1a"} Oct 08 17:36:27 crc kubenswrapper[4624]: I1008 17:36:27.620583 4624 generic.go:334] "Generic (PLEG): container finished" podID="b6188733-5e2b-41f5-a22e-7eebd63c8bde" containerID="5d563485d38079d7614a4d32fcf0fba2fb0db9413e6e22cb1089fc9aceb21a1a" exitCode=0 Oct 08 17:36:27 crc kubenswrapper[4624]: I1008 17:36:27.620708 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5ptb8" event={"ID":"b6188733-5e2b-41f5-a22e-7eebd63c8bde","Type":"ContainerDied","Data":"5d563485d38079d7614a4d32fcf0fba2fb0db9413e6e22cb1089fc9aceb21a1a"} Oct 08 17:36:28 crc kubenswrapper[4624]: I1008 17:36:28.633370 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5ptb8" 
event={"ID":"b6188733-5e2b-41f5-a22e-7eebd63c8bde","Type":"ContainerStarted","Data":"e84db6e83a9f83cc049269c8bc1c428439c3e10f6947ce2a99e209a6bd9f1fcb"} Oct 08 17:36:28 crc kubenswrapper[4624]: I1008 17:36:28.659227 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5ptb8" podStartSLOduration=2.932914525 podStartE2EDuration="7.657285066s" podCreationTimestamp="2025-10-08 17:36:21 +0000 UTC" firstStartedPulling="2025-10-08 17:36:23.562892383 +0000 UTC m=+11608.713827460" lastFinishedPulling="2025-10-08 17:36:28.287262924 +0000 UTC m=+11613.438198001" observedRunningTime="2025-10-08 17:36:28.652294589 +0000 UTC m=+11613.803229666" watchObservedRunningTime="2025-10-08 17:36:28.657285066 +0000 UTC m=+11613.808220143" Oct 08 17:36:31 crc kubenswrapper[4624]: I1008 17:36:31.663344 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5ptb8" Oct 08 17:36:31 crc kubenswrapper[4624]: I1008 17:36:31.663850 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5ptb8" Oct 08 17:36:32 crc kubenswrapper[4624]: I1008 17:36:32.724778 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5ptb8" podUID="b6188733-5e2b-41f5-a22e-7eebd63c8bde" containerName="registry-server" probeResult="failure" output=< Oct 08 17:36:32 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Oct 08 17:36:32 crc kubenswrapper[4624]: > Oct 08 17:36:41 crc kubenswrapper[4624]: I1008 17:36:41.849417 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5ptb8" Oct 08 17:36:41 crc kubenswrapper[4624]: I1008 17:36:41.954142 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5ptb8" Oct 08 17:36:42 crc kubenswrapper[4624]: I1008 17:36:42.100999 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5ptb8"] Oct 08 17:36:43 crc kubenswrapper[4624]: I1008 17:36:43.818654 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5ptb8" podUID="b6188733-5e2b-41f5-a22e-7eebd63c8bde" containerName="registry-server" containerID="cri-o://e84db6e83a9f83cc049269c8bc1c428439c3e10f6947ce2a99e209a6bd9f1fcb" gracePeriod=2 Oct 08 17:36:44 crc kubenswrapper[4624]: I1008 17:36:44.853261 4624 generic.go:334] "Generic (PLEG): container finished" podID="b6188733-5e2b-41f5-a22e-7eebd63c8bde" containerID="e84db6e83a9f83cc049269c8bc1c428439c3e10f6947ce2a99e209a6bd9f1fcb" exitCode=0 Oct 08 17:36:44 crc kubenswrapper[4624]: I1008 17:36:44.853459 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5ptb8" event={"ID":"b6188733-5e2b-41f5-a22e-7eebd63c8bde","Type":"ContainerDied","Data":"e84db6e83a9f83cc049269c8bc1c428439c3e10f6947ce2a99e209a6bd9f1fcb"} Oct 08 17:36:44 crc kubenswrapper[4624]: I1008 17:36:44.945158 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5ptb8" Oct 08 17:36:45 crc kubenswrapper[4624]: I1008 17:36:45.057768 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6188733-5e2b-41f5-a22e-7eebd63c8bde-utilities\") pod \"b6188733-5e2b-41f5-a22e-7eebd63c8bde\" (UID: \"b6188733-5e2b-41f5-a22e-7eebd63c8bde\") " Oct 08 17:36:45 crc kubenswrapper[4624]: I1008 17:36:45.058008 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6188733-5e2b-41f5-a22e-7eebd63c8bde-catalog-content\") pod \"b6188733-5e2b-41f5-a22e-7eebd63c8bde\" (UID: \"b6188733-5e2b-41f5-a22e-7eebd63c8bde\") " Oct 08 17:36:45 crc kubenswrapper[4624]: I1008 17:36:45.058038 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmlfw\" (UniqueName: \"kubernetes.io/projected/b6188733-5e2b-41f5-a22e-7eebd63c8bde-kube-api-access-dmlfw\") pod \"b6188733-5e2b-41f5-a22e-7eebd63c8bde\" (UID: \"b6188733-5e2b-41f5-a22e-7eebd63c8bde\") " Oct 08 17:36:45 crc kubenswrapper[4624]: I1008 17:36:45.060493 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6188733-5e2b-41f5-a22e-7eebd63c8bde-utilities" (OuterVolumeSpecName: "utilities") pod "b6188733-5e2b-41f5-a22e-7eebd63c8bde" (UID: "b6188733-5e2b-41f5-a22e-7eebd63c8bde"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:36:45 crc kubenswrapper[4624]: I1008 17:36:45.114908 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6188733-5e2b-41f5-a22e-7eebd63c8bde-kube-api-access-dmlfw" (OuterVolumeSpecName: "kube-api-access-dmlfw") pod "b6188733-5e2b-41f5-a22e-7eebd63c8bde" (UID: "b6188733-5e2b-41f5-a22e-7eebd63c8bde"). InnerVolumeSpecName "kube-api-access-dmlfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 17:36:45 crc kubenswrapper[4624]: I1008 17:36:45.165717 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6188733-5e2b-41f5-a22e-7eebd63c8bde-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6188733-5e2b-41f5-a22e-7eebd63c8bde" (UID: "b6188733-5e2b-41f5-a22e-7eebd63c8bde"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 17:36:45 crc kubenswrapper[4624]: I1008 17:36:45.166557 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6188733-5e2b-41f5-a22e-7eebd63c8bde-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 17:36:45 crc kubenswrapper[4624]: I1008 17:36:45.166589 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6188733-5e2b-41f5-a22e-7eebd63c8bde-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 17:36:45 crc kubenswrapper[4624]: I1008 17:36:45.166600 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmlfw\" (UniqueName: \"kubernetes.io/projected/b6188733-5e2b-41f5-a22e-7eebd63c8bde-kube-api-access-dmlfw\") on node \"crc\" DevicePath \"\"" Oct 08 17:36:45 crc kubenswrapper[4624]: I1008 17:36:45.866680 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5ptb8" event={"ID":"b6188733-5e2b-41f5-a22e-7eebd63c8bde","Type":"ContainerDied","Data":"7e927cd5b47643b61305116982c853f2316b24381b4bf398671f93f316846bbd"} Oct 08 17:36:45 crc kubenswrapper[4624]: I1008 17:36:45.868240 4624 scope.go:117] "RemoveContainer" containerID="e84db6e83a9f83cc049269c8bc1c428439c3e10f6947ce2a99e209a6bd9f1fcb" Oct 08 17:36:45 crc kubenswrapper[4624]: I1008 17:36:45.868434 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5ptb8" Oct 08 17:36:45 crc kubenswrapper[4624]: I1008 17:36:45.916409 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5ptb8"] Oct 08 17:36:45 crc kubenswrapper[4624]: I1008 17:36:45.937423 4624 scope.go:117] "RemoveContainer" containerID="5d563485d38079d7614a4d32fcf0fba2fb0db9413e6e22cb1089fc9aceb21a1a" Oct 08 17:36:45 crc kubenswrapper[4624]: I1008 17:36:45.944608 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5ptb8"] Oct 08 17:36:45 crc kubenswrapper[4624]: I1008 17:36:45.964527 4624 scope.go:117] "RemoveContainer" containerID="f8ee6f77b8a72a0428c2edef25b1714a82d2033b63bad0e713dc051b5129942b" Oct 08 17:36:47 crc kubenswrapper[4624]: I1008 17:36:47.482022 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6188733-5e2b-41f5-a22e-7eebd63c8bde" path="/var/lib/kubelet/pods/b6188733-5e2b-41f5-a22e-7eebd63c8bde/volumes" Oct 08 17:37:50 crc kubenswrapper[4624]: I1008 17:37:50.110601 4624 scope.go:117] "RemoveContainer" containerID="9ddb6a9fd38bf8bfc94f0094f00dec9d8011e0f86521f43b0556085d15d118c9" Oct 08 17:38:00 crc kubenswrapper[4624]: I1008 17:38:00.076798 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 17:38:00 crc kubenswrapper[4624]: I1008 17:38:00.077398 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 17:38:30 crc kubenswrapper[4624]: I1008 17:38:30.076398 4624 patch_prober.go:28] interesting 
pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 17:38:30 crc kubenswrapper[4624]: I1008 17:38:30.077270 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 17:39:00 crc kubenswrapper[4624]: I1008 17:39:00.076717 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 17:39:00 crc kubenswrapper[4624]: I1008 17:39:00.077312 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 17:39:00 crc kubenswrapper[4624]: I1008 17:39:00.077372 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8"
Oct 08 17:39:00 crc kubenswrapper[4624]: I1008 17:39:00.078317 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a1e925ebcac7e612cb0a08ded8f75df736b1d2531b29d752aeccd8f120274c36"} pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 08 17:39:00 crc kubenswrapper[4624]: I1008 17:39:00.078367 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" containerID="cri-o://a1e925ebcac7e612cb0a08ded8f75df736b1d2531b29d752aeccd8f120274c36" gracePeriod=600
Oct 08 17:39:00 crc kubenswrapper[4624]: I1008 17:39:00.305210 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerID="a1e925ebcac7e612cb0a08ded8f75df736b1d2531b29d752aeccd8f120274c36" exitCode=0
Oct 08 17:39:00 crc kubenswrapper[4624]: I1008 17:39:00.305556 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerDied","Data":"a1e925ebcac7e612cb0a08ded8f75df736b1d2531b29d752aeccd8f120274c36"}
Oct 08 17:39:00 crc kubenswrapper[4624]: I1008 17:39:00.305691 4624 scope.go:117] "RemoveContainer" containerID="ebc9ceed8c0b41e96e991ba5b64385e15486f5a9b84e8c8909a0cca1fd658109"
Oct 08 17:39:01 crc kubenswrapper[4624]: I1008 17:39:01.316051 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" event={"ID":"8a106d69-d531-4ee4-a9ed-505988ebd24d","Type":"ContainerStarted","Data":"9ff4c2e78277a551262f7b9b3f85b76100529235054f0cf300cc2ed78fc8b539"}
Oct 08 17:39:35 crc kubenswrapper[4624]: I1008 17:39:35.714184 4624 generic.go:334] "Generic (PLEG): container finished" podID="77627d4f-bfc2-4c1e-a2cf-f4e4dae418de" containerID="21bb2f7afe03dfdd4fbf87396e474580de2767eccd3c624902c0d072dbcccddb" exitCode=0
Oct 08 17:39:35 crc kubenswrapper[4624]: I1008 17:39:35.714396 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l49pj/must-gather-d5xhc" event={"ID":"77627d4f-bfc2-4c1e-a2cf-f4e4dae418de","Type":"ContainerDied","Data":"21bb2f7afe03dfdd4fbf87396e474580de2767eccd3c624902c0d072dbcccddb"}
Oct 08 17:39:35 crc kubenswrapper[4624]: I1008 17:39:35.716020 4624 scope.go:117] "RemoveContainer" containerID="21bb2f7afe03dfdd4fbf87396e474580de2767eccd3c624902c0d072dbcccddb"
Oct 08 17:39:35 crc kubenswrapper[4624]: I1008 17:39:35.991904 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l49pj_must-gather-d5xhc_77627d4f-bfc2-4c1e-a2cf-f4e4dae418de/gather/0.log"
Oct 08 17:39:55 crc kubenswrapper[4624]: I1008 17:39:55.987338 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l49pj/must-gather-d5xhc"]
Oct 08 17:39:55 crc kubenswrapper[4624]: I1008 17:39:55.991374 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-l49pj/must-gather-d5xhc" podUID="77627d4f-bfc2-4c1e-a2cf-f4e4dae418de" containerName="copy" containerID="cri-o://9f03637936e0bc5c01c76d3886fd881662bfe7c67e4ed0a1c70425998548b74f" gracePeriod=2
Oct 08 17:39:55 crc kubenswrapper[4624]: I1008 17:39:55.995477 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l49pj/must-gather-d5xhc"]
Oct 08 17:39:56 crc kubenswrapper[4624]: I1008 17:39:56.961931 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l49pj_must-gather-d5xhc_77627d4f-bfc2-4c1e-a2cf-f4e4dae418de/copy/0.log"
Oct 08 17:39:56 crc kubenswrapper[4624]: I1008 17:39:56.964194 4624 generic.go:334] "Generic (PLEG): container finished" podID="77627d4f-bfc2-4c1e-a2cf-f4e4dae418de" containerID="9f03637936e0bc5c01c76d3886fd881662bfe7c67e4ed0a1c70425998548b74f" exitCode=143
Oct 08 17:39:57 crc kubenswrapper[4624]: I1008 17:39:57.177322 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l49pj_must-gather-d5xhc_77627d4f-bfc2-4c1e-a2cf-f4e4dae418de/copy/0.log"
Oct 08 17:39:57 crc kubenswrapper[4624]: I1008 17:39:57.178614 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l49pj/must-gather-d5xhc"
Oct 08 17:39:57 crc kubenswrapper[4624]: I1008 17:39:57.234537 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf84b\" (UniqueName: \"kubernetes.io/projected/77627d4f-bfc2-4c1e-a2cf-f4e4dae418de-kube-api-access-pf84b\") pod \"77627d4f-bfc2-4c1e-a2cf-f4e4dae418de\" (UID: \"77627d4f-bfc2-4c1e-a2cf-f4e4dae418de\") "
Oct 08 17:39:57 crc kubenswrapper[4624]: I1008 17:39:57.234785 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/77627d4f-bfc2-4c1e-a2cf-f4e4dae418de-must-gather-output\") pod \"77627d4f-bfc2-4c1e-a2cf-f4e4dae418de\" (UID: \"77627d4f-bfc2-4c1e-a2cf-f4e4dae418de\") "
Oct 08 17:39:57 crc kubenswrapper[4624]: I1008 17:39:57.250146 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77627d4f-bfc2-4c1e-a2cf-f4e4dae418de-kube-api-access-pf84b" (OuterVolumeSpecName: "kube-api-access-pf84b") pod "77627d4f-bfc2-4c1e-a2cf-f4e4dae418de" (UID: "77627d4f-bfc2-4c1e-a2cf-f4e4dae418de"). InnerVolumeSpecName "kube-api-access-pf84b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 17:39:57 crc kubenswrapper[4624]: I1008 17:39:57.339017 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf84b\" (UniqueName: \"kubernetes.io/projected/77627d4f-bfc2-4c1e-a2cf-f4e4dae418de-kube-api-access-pf84b\") on node \"crc\" DevicePath \"\""
Oct 08 17:39:57 crc kubenswrapper[4624]: I1008 17:39:57.502810 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77627d4f-bfc2-4c1e-a2cf-f4e4dae418de-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "77627d4f-bfc2-4c1e-a2cf-f4e4dae418de" (UID: "77627d4f-bfc2-4c1e-a2cf-f4e4dae418de"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 17:39:57 crc kubenswrapper[4624]: I1008 17:39:57.542832 4624 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/77627d4f-bfc2-4c1e-a2cf-f4e4dae418de-must-gather-output\") on node \"crc\" DevicePath \"\""
Oct 08 17:39:57 crc kubenswrapper[4624]: I1008 17:39:57.974254 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l49pj_must-gather-d5xhc_77627d4f-bfc2-4c1e-a2cf-f4e4dae418de/copy/0.log"
Oct 08 17:39:57 crc kubenswrapper[4624]: I1008 17:39:57.974772 4624 scope.go:117] "RemoveContainer" containerID="9f03637936e0bc5c01c76d3886fd881662bfe7c67e4ed0a1c70425998548b74f"
Oct 08 17:39:57 crc kubenswrapper[4624]: I1008 17:39:57.974801 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l49pj/must-gather-d5xhc"
Oct 08 17:39:58 crc kubenswrapper[4624]: I1008 17:39:58.012132 4624 scope.go:117] "RemoveContainer" containerID="21bb2f7afe03dfdd4fbf87396e474580de2767eccd3c624902c0d072dbcccddb"
Oct 08 17:39:58 crc kubenswrapper[4624]: E1008 17:39:58.162146 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77627d4f_bfc2_4c1e_a2cf_f4e4dae418de.slice\": RecentStats: unable to find data in memory cache]"
Oct 08 17:39:59 crc kubenswrapper[4624]: I1008 17:39:59.481216 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77627d4f-bfc2-4c1e-a2cf-f4e4dae418de" path="/var/lib/kubelet/pods/77627d4f-bfc2-4c1e-a2cf-f4e4dae418de/volumes"
Oct 08 17:40:50 crc kubenswrapper[4624]: I1008 17:40:50.238009 4624 scope.go:117] "RemoveContainer" containerID="c23dd2105fdc31e70095f697493060496b150aba34e4de6887a553563f4b8a0b"
Oct 08 17:41:00 crc kubenswrapper[4624]: I1008 17:41:00.077011 4624 patch_prober.go:28] interesting pod/machine-config-daemon-zfrv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 17:41:00 crc kubenswrapper[4624]: I1008 17:41:00.077717 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zfrv8" podUID="8a106d69-d531-4ee4-a9ed-505988ebd24d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"